How Cloud9 Trains for a $500,000 PUBG Mobile Tournament (PEC)
Published at: 07 Dec 2020
Subscribe to UnEeVeN
This is how we were practicing for PEC 2019, the biggest PUBG Mobile tournament in China. The tournament is on the 28th and 29th of this month if you want to watch us play live.
Thanks so much for watching! Don't forget to like, comment, and subscribe for more videos!
☁️ PRO PUBG MOBILE PLAYER FOR CLOUD9 ☁️
Become a member for some cool perks ▶
◾ My Stuff:
◾ My Editor's Stuff:
❕ Sponsor Shout Outs ❕
OMEN by HP:
Some hashtags so that maybe you will have a better chance of seeing my cool vids...
#pubgmobile #cloud9 #uneeven
❤️ Comment below if you read this far and I'll give you a heart ❤️
cloud9 pubg mobile
pec pubg tournament
package iolang

import (
    "os"
    "path/filepath"
)

// A Directory is an object allowing interfacing with the operating system's
// directories.
type Directory struct {
    Object
    Path string
}

// NewDirectory creates a new Directory with the given path.
func (vm *VM) NewDirectory(path string) *Directory {
    return &Directory{
        Object: *vm.CoreInstance("Directory"),
        Path:   path,
    }
}

// Activate returns the directory.
func (d *Directory) Activate(vm *VM, target, locals, context Interface, msg *Message) Interface {
    return d
}

// Clone creates a clone of the directory.
func (d *Directory) Clone() Interface {
    return &Directory{
        Object: Object{Slots: Slots{}, Protos: []Interface{d}},
        Path:   d.Path,
    }
}
func (vm *VM) initDirectory() {
    var exemplar *Directory
    slots := Slots{
        "at":                         vm.NewTypedCFunction(DirectoryAt, exemplar),
        "create":                     vm.NewTypedCFunction(DirectoryCreate, exemplar),
        "createSubdirectory":         vm.NewTypedCFunction(DirectoryCreateSubdirectory, exemplar),
        "currentWorkingDirectory":    vm.NewCFunction(DirectoryCurrentWorkingDirectory),
        "exists":                     vm.NewTypedCFunction(DirectoryExists, exemplar),
        "items":                      vm.NewTypedCFunction(DirectoryItems, exemplar),
        "name":                       vm.NewTypedCFunction(DirectoryName, exemplar),
        "path":                       vm.NewTypedCFunction(DirectoryPath, exemplar),
        "setCurrentWorkingDirectory": vm.NewCFunction(DirectorySetCurrentWorkingDirectory),
        "setPath":                    vm.NewCFunction(DirectorySetPath),
        "type":                       vm.NewString("Directory"),
    }
    SetSlot(vm.Core, "Directory", &Directory{Object: *vm.ObjectWith(slots)})
}
// DirectoryAt is a Directory method.
//
// at returns a File or Directory object at the given path, which is always
// interpreted relative to the directory, or nil if there is no such file.
func DirectoryAt(vm *VM, target, locals Interface, msg *Message) Interface {
    d := target.(*Directory)
    s, stop := msg.StringArgAt(vm, locals, 0)
    if stop != nil {
        return stop
    }
    p := filepath.Join(d.Path, filepath.FromSlash(s.String()))
    fi, err := os.Stat(p)
    if os.IsNotExist(err) {
        return vm.Nil
    }
    if err != nil {
        return vm.IoError(err)
    }
    if !fi.IsDir() {
        return vm.NewFileAt(p)
    }
    return vm.NewDirectory(p)
}
// DirectoryCreate is a Directory method.
//
// create creates the directory if it does not exist. Returns nil on failure.
func DirectoryCreate(vm *VM, target, locals Interface, msg *Message) Interface {
    d := target.(*Directory)
    _, err := os.Stat(d.Path)
    if err != nil && !os.IsNotExist(err) {
        return vm.IoError(err)
    }
    err = os.Mkdir(d.Path, 0755)
    if err != nil {
        // This means we return nil if the path exists and is not a directory,
        // which seems wrong, but oh well.
        return vm.Nil
    }
    return target
}

// DirectoryCreateSubdirectory is a Directory method.
//
// createSubdirectory creates a subdirectory with the given name and returns a
// Directory object for it.
func DirectoryCreateSubdirectory(vm *VM, target, locals Interface, msg *Message) Interface {
    d := target.(*Directory)
    nm, stop := msg.StringArgAt(vm, locals, 0)
    if stop != nil {
        return stop
    }
    p := filepath.Join(d.Path, filepath.FromSlash(nm.String()))
    fi, err := os.Stat(p)
    if err != nil {
        if os.IsNotExist(err) {
            if err = os.Mkdir(p, 0755); err != nil {
                return vm.IoError(err)
            }
            return vm.NewDirectory(p)
        }
        return vm.IoError(err)
    }
    if fi.IsDir() {
        return vm.NewDirectory(p)
    }
    return vm.RaiseExceptionf("%s already exists", p)
}
// DirectoryCurrentWorkingDirectory is a Directory method.
//
// currentWorkingDirectory returns the path of the current working directory
// with the operating system's path style.
func DirectoryCurrentWorkingDirectory(vm *VM, target, locals Interface, msg *Message) Interface {
    d, err := os.Getwd()
    if err != nil {
        return vm.NewString(".")
    }
    return vm.NewString(d)
}

// DirectoryExists is a Directory method.
//
// exists returns true if the directory exists and is a directory.
func DirectoryExists(vm *VM, target, locals Interface, msg *Message) Interface {
    d := target.(*Directory)
    fi, err := os.Stat(d.Path)
    if err != nil {
        if os.IsNotExist(err) {
            return vm.False
        }
        return vm.IoError(err)
    }
    return vm.IoBool(fi.IsDir())
}

// DirectoryItems is a Directory method.
//
// items returns a list of the files and directories within this directory.
func DirectoryItems(vm *VM, target, locals Interface, msg *Message) Interface {
    d := target.(*Directory)
    f, err := os.Open(d.Path)
    if err != nil {
        return vm.IoError(err)
    }
    fis, err := f.Readdir(0)
    f.Close()
    if err != nil {
        return vm.IoError(err)
    }
    l := make([]Interface, len(fis))
    for i, fi := range fis {
        p := filepath.Join(d.Path, fi.Name())
        if fi.IsDir() {
            l[i] = vm.NewDirectory(p)
        } else {
            l[i] = vm.NewFileAt(p)
        }
    }
    return vm.NewList(l...)
}
// DirectoryName is a Directory method.
//
// name returns the name of the file or directory at the directory's path,
// similar to Unix basename.
func DirectoryName(vm *VM, target, locals Interface, msg *Message) Interface {
    d := target.(*Directory)
    return vm.NewString(filepath.Base(d.Path))
}

// DirectoryPath is a Directory method.
//
// path returns the directory's path.
func DirectoryPath(vm *VM, target, locals Interface, msg *Message) Interface {
    d := target.(*Directory)
    return vm.NewString(filepath.ToSlash(d.Path))
}

// DirectorySetCurrentWorkingDirectory is a Directory method.
//
// setCurrentWorkingDirectory sets the program's current working directory.
func DirectorySetCurrentWorkingDirectory(vm *VM, target, locals Interface, msg *Message) Interface {
    s, stop := msg.StringArgAt(vm, locals, 0)
    if stop != nil {
        return stop
    }
    if err := os.Chdir(s.String()); err != nil {
        return vm.False
    }
    return vm.True
}

// DirectorySetPath is a Directory method.
//
// setPath sets the path of the Directory object.
func DirectorySetPath(vm *VM, target, locals Interface, msg *Message) Interface {
    d := target.(*Directory)
    s, stop := msg.StringArgAt(vm, locals, 0)
    if stop != nil {
        return stop
    }
    d.Path = filepath.FromSlash(s.String())
    return target
}
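Taken together, these slots give Io programs a directory interface. A hedged sketch of possible usage follows (illustrative only; it assumes the surrounding interpreter provides the usual Object, List, and File behavior, and the path is invented):

```io
// Illustrative Io usage of the slots defined above.
dir := Directory clone setPath("/tmp/example")
dir create                       // make the directory if it doesn't exist
sub := dir createSubdirectory("logs")
dir exists println               // true if the path is a directory
dir items foreach(item,
    // items yields File or Directory objects, per DirectoryItems above
    item name println
)
Directory currentWorkingDirectory println
```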
Guided Access on iOS - A Hidden Gem
Photo of my iPhone 11 showing the Guided Access configuration screen
This will be a short post that serves more as a reminder to iOS users that you have access to the Guided Access feature. This is a feature that is tucked away under Settings > Accessibility, although its uses might fall a bit out of the typical realm of accessibility.
To quote Apple’s Support page on Guided Access:
Guided Access limits your device to a single app and lets you control which features are available. You can turn on Guided Access when you let a child use your device, or when accidental gestures might distract you.
The main draw is that it allows you to lock down your device so that only a single application can run. This means that without the passcode (or Touch ID/Face ID), people cannot navigate away from where you enabled Guided Access. You can also disable regions of the screen and various device features (e.g., touch, keyboard, buttons, motion). It's worth noting that banner notifications no longer show up while in a session.
You enter a Guided Access session with a triple-click of the side/home button and exit with a double-click (if Touch ID/Face ID is enabled), so you don't need the passcode to exit your own session.
I’ve outlined some use cases here, but I’m certain that there are other creative uses that I’m missing.
For those with children, this is a great feature to leverage when you hand your child your device. You can lock down the device to the point where you don’t have to fear them getting into things they shouldn’t. You can also set time limits if needed so that a passcode pops up, ending their fun.
In a similar capacity to using it with children, this feature is useful when handing your phone to another person to show them something specific on your device. For example, you can enable a session and disable the touch feature so they can only look at what you have on the screen. There is also no chance of embarrassment/privacy concerns as banner notifications don’t show up when you are in a session.
In situations when you are using a device for surveys or as a kiosk to complete forms, Guided Access would work well. As a side note, there is a better way to handle this using the Single App Mode via the Apple Configurator.
Unlike some Android phones, iOS doesn't have a dedicated gaming mode, that is, a mode that disables notifications and some touch gestures. Well, Guided Access on iOS is pretty much that without being labelled as such. This Reddit post does a great job of outlining the benefits of gaming in a Guided Access session. TL;DR:
I would like to add that you can still get notifications on your Apple Watch even when in a Guided Access session.
You can automate some of this if you plan to always play certain games in Guided Access by using an iOS Shortcut. You would create an Automation that starts a Guided Access session automatically when your game application opens.
Guided Access is powerful and quick to enable/disable. It can really help when you have to hand your device over to someone without fearing what they might get into. As mobile devices are nearly full-fledged gaming devices as well, the usage in a gaming scenario can also prove useful.
Did you know about this iOS feature? Have you used it before, and if so in what capacity?
For about six weeks we've been working on finding out whether, and how, it would be possible to create our own cloud. My colleague Pim did a very good job of sorting out all the different software solutions (Open Source, of course) and came up with CloudStack as the one we'd definitely test. So we did!
We've tested both the current stable 2.2 release and the upcoming 3.0 release, which is currently in beta. It took us quite some time to get the right hardware to test with. At first we used Ubuntu as the OS, but that turned out to be the wrong choice, for now: Ubuntu just isn't very well supported, and CloudStack more or less wants you to use RHEL or one of its free alternatives, like CentOS. We wanted to use Ubuntu at first because we had a lot of Debian experience, and Google was our friend in sorting out the differences between Debian-style and RedHat-style setups (for example in the networking configuration). Looking back, moving to CentOS was no problem at all. We even have Kickstart running to do quick unattended installs of the compute nodes. Cool! By the way, this concerns the OS on the compute nodes and the management server; the VMs can of course be of any kind. In our case they will definitely run Debian.
CloudStack 2.2 works great now, but since we wanted to use some features of 3.0 we decided to give the 3.0 betas a go. The main feature 3.0 has that 2.2 lacks is the ability to move a VM from cluster to cluster (powered down, that is). Networking has also improved in 3.0, and another bonus is the gorgeous UI.
Our biggest hurdle was basically understanding how networking in CloudStack is meant to work. At the time we were testing beta1, the manuals were not complete yet, so this proved to be a challenge at times. And, to be honest, we also had some expectations of how we thought it would work that later proved to be wrong. So we spent quite some time playing with CloudStack, finding out how exactly it works, debugging whenever something went wrong, and so on. We listed the questions we had, and on many occasions the CloudStack community was of great help. The good thing is that for all the things we didn't understand or that didn't work, we were able to find a resolution: either we had simply configured something wrong, or we had found a bug (which is fine for a beta). It all looks very promising!
At the moment we're testing beta3, and it has A LOT of improvements in the UI, docs, and functionality. Great job!
We went to Antwerp to visit "Build an Open Source Cloud Day". There we learned a lot and were able to talk to some experts on the subject. This helped us, back at the office, to start from scratch and set up CloudStack the right way (for us). Now we could really start experimenting! We set up both a Basic and an Advanced zone to see which suits us best.
We still have some performance testing to do, and a decision to make about the storage we'll be using. More about that in a later post.
Currently we're finalizing the design of our cloud, and we're pretty sure CloudStack will power it! By the end of the month the 3.0 GA release will be out, and then we'll be able to build our cloud in production in March/April. Really looking forward to that!
I’ll keep you guys posted 🙂
I’ve heard HostBill introduced CloudStack support:
I have recently started experimenting with CloudStack… 2.2 and I find it a great cloud product.
A step-by-step blog guide would be great… 🙂
What hypervisor did you use? Maybe an explanation of the networking too, I'm still trying to get my head around it…
I'm wondering, why are you using the 2.2 version? 3.0 is stable now and has many advantages. One of the improvements is that it has much better documentation; networking is explained very well. But I agree that networking is the biggest hurdle to take. Once you have that right, you'll be OK.
I'm using the KVM hypervisor. When I have the time I'll write some more about my CloudStack experience!
A small shell script for life journaling.
I wrote this as a simple way for me to keep track of the progress I have made on projects. If you find it useful too, then that is a bonus.
Entries can be added to the almanac and are date/time stamped with when they were entered. Each entry has an associated subject tag; the default tag is "personal", but a custom tag can be set. The default tag can be changed in ~/.almanac.d/config. The log file itself is in ~/.almanac.d/log.
- Clone this repo.
- chmod +x almanac
- Copy almanac to a location in your path.
- Log things and enjoy.
Add an entry to the almanac.
almanac -m "This is the message to log"
This will add an entry to the almanac with the default tag of 'personal', or whatever tag you have set in the config file.
Add an entry to the almanac with a different tag
almanac -s work -m "This is the message to log"
This will add an entry to the almanac with the tag specified after the -s flag.
Display the whole almanac
This will display all of the entries in the almanac on the CLI.
Display all entries for tag
almanac -S work
This will display any entries in the almanac for the given tag.
Display entries by date
The -D, -Y and -M flags display entries from the given day, year or month respectively. They can be combined like so.
almanac -D 01 -M 01 -Y 19
This will display all of the entries for the 1st of January 2019. These can also be combined with a tag search:
almanac -S work -Y 19
Will display all of the entries in 2019 with the work tag. Note that all of the date searches require two-digit inputs,
e.g. you will need to input 01 instead of just 1.
Display used tags
Display the number of posts for each tag
Display the last entry inserted into the almanac
Remove an entry from the almanac
almanac --remove-entry <the long number at the start of the entry goes here>
Note that this can remove MANY entries from the almanac at once: it works by regex-matching the number at the start of the entry. If you put 010119, then any entry whose number starts with 010119 will be permanently removed from the log. THIS CANNOT BE UNDONE, SO IF YOU REMOVE THINGS YOU WANT TO KEEP, IT'S ON YOU! This command will print out the entries that will be removed and ask you if you really want to proceed; you have to input y or Y to proceed, otherwise the log stays untouched.
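For reference, the core append operation described above can be sketched in a few lines of shell. This is a minimal sketch, not the actual almanac script: the `add_entry` function name and the exact entry layout here are assumptions.

```shell
# Sketch of appending a timestamped, tagged entry (illustrative only;
# ALMANAC_DIR, add_entry, and the entry format are assumed, not taken
# from the real script).
ALMANAC_DIR="${ALMANAC_DIR:-$HOME/.almanac.d}"
LOG="$ALMANAC_DIR/log"

add_entry() {
    tag="$1"
    msg="$2"
    mkdir -p "$ALMANAC_DIR"
    # A ddmmyyHHMMSS stamp doubles as the id that --remove-entry matches on.
    stamp=$(date +%d%m%y%H%M%S)
    printf '%s [%s] %s\n' "$stamp" "$tag" "$msg" >> "$LOG"
}

add_entry work "Set up the project skeleton"
```

Displaying entries for a tag is then just a `grep` over the log file.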
Auditing is always about accountability, and is frequently done to protect and preserve privacy for the information stored in databases. Concern about privacy policies and practices has been rising steadily with the ubiquitous use of databases in businesses and on the Internet. Oracle Database provides a depth of auditing that readily enables system administrators to implement enhanced protections, early detection of suspicious activities, and finely-tuned security responses.
Oracle Database Unified Auditing enables selective and effective auditing inside the Oracle database using policies and conditions. The new policy based syntax simplifies management of auditing within the database and provides the ability to accelerate auditing based on conditions. For example, audit policies can be configured to audit based on specific IP addresses, programs, time periods, or connection types such as proxy authentication. In addition, specific schemas can be easily exempted from auditing when the audit policy is enabled.
New roles have been introduced for management of policies and the viewing of audit data. The AUDIT_ADMIN and AUDIT_VIEWER roles provide separation of duty and flexibility to organizations who wish to designate specific users to manage audit settings and view audit activity. The new architecture unifies the existing audit trails into a single audit trail, enabling simplified management and increasing the security of audit data generated by the database. Audit data can only be managed using the built-in audit data management package within the database and not directly updated or removed using SQL commands. Three default policies are configured and shipped out of the box. Oracle Audit Vault and Database Firewall is integrated with the Oracle Database Unified Auditing for audit consolidation, reporting, and analysis. Please refer to the Oracle documentation for additional details on auditing with the Oracle database.
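As an illustration of the policy-based syntax, a condition-driven unified audit policy might look like the following. The policy name, IP address, and schema here are hypothetical examples, not Oracle defaults:

```sql
-- Hypothetical example: audit logons coming from one client IP address,
-- then enable the policy for everyone except a batch schema.
CREATE AUDIT POLICY remote_logon_pol
  ACTIONS LOGON
  WHEN 'SYS_CONTEXT(''USERENV'', ''IP_ADDRESS'') = ''192.0.2.10'''
  EVALUATE PER SESSION;

AUDIT POLICY remote_logon_pol EXCEPT batch_user;
```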
Oracle Database provides robust audit support in both the Enterprise and Standard Edition of the database. Audit records include information about the operation that was audited, the user performing the operation, and the date and time of the operation. Audit records can be stored in the database audit trail or in files on the operating system. Standard auditing includes operations on privileges, schemas, objects, and statements.
Oracle recommends that the audit trail be written to the operating system files as this configuration imposes the least amount of overhead on the source database system. To enable database auditing, the initialization parameter, AUDIT_TRAIL, should be set to one of these values:
|Value|Effect|
|---|---|
|DB|Enables database auditing and directs all audit records to the database audit trail (SYS.AUD$), except for records that are always written to the operating system audit trail|
|DB,EXTENDED|Does all actions of AUDIT_TRAIL=DB and also populates the SQL bind and SQL text columns of the SYS.AUD$ table|
|XML|Enables database auditing and directs all audit records, in XML format, to an operating system file|
|XML,EXTENDED|Does all actions of AUDIT_TRAIL=XML, adding the SQL bind and SQL text columns|
|OS (recommended)|Enables database auditing and directs all audit records to an operating system file|
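For example, the recommended setting could be applied as follows (illustrative; AUDIT_TRAIL is a static parameter, so a database restart is required):

```sql
-- Direct standard audit records to operating system files, as recommended.
ALTER SYSTEM SET audit_trail = os SCOPE = SPFILE;
-- Restart the database for the change to take effect.
```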
In addition, the following database parameters should be set:
For more information and best practices on Oracle Database Auditing please read Introduction to Auditing in the Oracle Database Security Guide.
A Nugget-based Information Retrieval Evaluation Paradigm
Last Modified: October 13, 2017
Javed A. Aslam
Evaluating information retrieval systems, such as search engines, is critical to their effective development. Current performance evaluation methodologies are generally variants of the Cranfield paradigm, which relies on effectively complete, and thus prohibitively expensive, relevance judgment sets: tens to hundreds of thousands of documents must be judged by human assessors for relevance with respect to dozens to hundreds of user queries, at great cost in both time and money.

The project instead investigates a new information retrieval evaluation paradigm based on nuggets. The thesis is that while it is likely impossible to find all relevant documents for a query with respect to web-scale and/or dynamic collections, it is much more tractable to find all or nearly all relevant information, with which one can then perform effective and reusable evaluation, at scale and with ease. These atomic units of relevant information are referred to as "nuggets", and one instantiation of a nugget is simply the sentence or short passage that causes a judge to deem a document relevant at the time of document assessment. At evaluation time, relevance assessments are dynamically created for documents based on the quantity and quality of relevant information found in the documents retrieved. This new evaluation paradigm is inherently scalable and permits the use of all standard measures of retrieval performance, including those involving graded relevance judgments, novelty, diversity, and so on; it further permits new kinds of evaluations not heretofore possible.
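The dynamic-assessment idea can be made concrete with a toy sketch. The code below is my illustration, not the project's implementation, and the nuggets and documents are invented; a document's graded relevance is simply the number of distinct nuggets it contains:

```python
# Toy sketch of nugget-based assessment (illustrative, not the project's code):
# relevance grades are derived at evaluation time from the relevant
# information (nuggets) found in each retrieved document.
def nugget_grade(doc_text: str, nuggets: set[str]) -> int:
    """Graded relevance: count of distinct nuggets found in the document."""
    text = doc_text.lower()
    return sum(1 for n in nuggets if n.lower() in text)

nuggets = {
    "signed in 1848",
    "ended the mexican-american war",
}
doc = "The treaty, signed in 1848, ended the Mexican-American War."
print(nugget_grade(doc, nuggets))  # 2: both nuggets appear

not_relevant = "An unrelated document about gardening."
print(nugget_grade(not_relevant, nuggets))  # 0
```

Because the grades are computed from the nugget set rather than from fixed per-document judgments, the same nugget set can assess documents that were never seen during judging, which is what makes the paradigm reusable at scale.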
Past and Affiliated Personnel
- Jesse Anderton (graduate student)
- Maryam Bashir (graduate student)
- Peter Golbus (graduate student)
- Shahzad Rajput (graduate student)
Publications and Follow-on Work
A Comprehensive Method for Automating Test Collection Creation and Evaluation for Retrieval and Summarization Systems
PhD Thesis, College of Computer and Information Science, Northeastern University, 2017.
A Study of Realtime Summarization Metrics
In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pages 2125-2130. ACM Press, 2016.
Two-layered Summaries for Mobile Search: Does the Evaluation Measure Reflect User Preferences?
In Proceedings of the Seventh International Workshop on Evaluating Information Access (EVIA), pages 29-32. National Institute of Informatics (NII), 2016.
TREC 2015 Temporal Summarization Track Overview
In Proceedings of the Twenty-Fourth Text REtrieval Conference, NIST Special Publication:
SP 500-319, 2015.
Overview of the NTCIR-11 MobileClick Task
In Proceedings of the 11th NTCIR Conference on Evaluation of Information Access Technologies, National Institute of Informatics (NII), 2014.
TREC 2014 Temporal Summarization Track Overview
In Proceedings of the Twenty-Third Text REtrieval Conference, NIST Special Publication:
SP 500-308, 2014.
Overview of the NTCIR-10 1CLICK-2 Task
In Proceedings of the 10th NTCIR Conference on Evaluation of Information Access Technologies, National Institute of Informatics (NII), 2013.
Exploring Semi-automatic Nugget Extraction for Japanese One Click Access Evaluation
In Proceedings of the 36th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 749-752. ACM Press, 2013.
Live Nuggets Extractor: A Semi-automated System for Text Extraction and Test Collection Creation
In Proceedings of the 36th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1087-1088. ACM Press, 2013.
TREC 2013 Temporal Summarization
In Proceedings of the Twenty-Second Text REtrieval Conference, NIST Special Publication:
SP 500-302, 2013.
Constructing Test Collections by Inferring Document Relevance via Extracted Relevant Information
In Proceedings of the 21st ACM Conference on Information and Knowledge Management (CIKM), pages 145-154. ACM Press, 2012.
TREC Temporal Summarization Track
NTCIR MobileClick Track
NTCIR 1Click-2 Track
Acknowledgment and Disclaimer
This material is based upon work supported by the National Science
Foundation under Grant No. IIS-1256172. Any opinions, findings and
conclusions or recommendations expressed in this material are those of
the author(s) and do not necessarily reflect the views of the National
Science Foundation (NSF).
TServer in inconsistent state is never removed
Recently observed a TServer on a node with hardware issues. The instance was alive, but not functioning. The Master polls all tservers via gatherTableInformation, and this server would repeatedly throw a TTransportException on connect. After 3 failures, an attempt is made to halt the server; however, if a TTransportException is thrown during the halt, the exception is ignored and the server is assumed to be down.
Link to ignored Exception
Propose that the Zookeeper lock be removed by the master on this failure or possibly provide an option for the behavior. LiveTServerSet.remove could be an option.
Accumulo version 1.9.2
@ctubbsii @keith-turner can you take a look at this? I'd like to add a 1.9.3 label and get this into the next RC.
@mjwall Do you have a proposed code change ready for this for 1.9.3? Without a PR available, I'm inclined to punt to 1.9.4, rather than further delay the release process to pause and do further development. At least the current failure state is a safe one that doesn't risk multiple-hosting, and has a chance of being detected and corrected by system monitoring outside of Accumulo. This isn't a new problem, or an easy one to solve, so unless there's a change ready, it's probably better to try to fix in the next version, than delay the fixes that are ready to go in 1.9.3.
@ctubbsii I was thinking it was an easy fix but wanted input from you and @keith-turner. Can't we just have the master delete the zookeeper lock at https://github.com/apache/accumulo/blob/master/server/master/src/main/java/org/apache/accumulo/master/Master.java#L1230 ?
A tserver had hardware issues and got into a bad state. It didn't go down, but the master could not talk to it, and it didn't make progress on its work for several days. Because the master never reassigned those tablets, all query and ingest to those tablets hung.
@jdwoody anything to add?
I think automatically killing a tserver could be harmful if it's not done properly. We need to handle the following situations to avoid killing a large percentage of the tservers unnecessarily:
Master can temporarily talk to zookeeper but not to most tservers, while clients can still talk to the tservers.
There is a bad tablet that once loaded causes a tserver to become non-functional.
These are two situations that, if not handled properly, could cause the master to automatically kill lots of tservers. When deciding to kill an individual tserver, the state of the entire cluster should be considered, and maybe the history of recent kill actions.
Thanks @keith-turner. The master couldn't talk to the tserver for days, it would not have been able to kill it either. Is there something easy that can be done for this?
If the master removed the lock in ZK, the tserver would eventually shut itself down. But I think your 2 bullets still need to be addressed for the same reason.
@ctubbsii no problem pushing to 1.9.4 and agree the current behavior is safe. If this is a configurable feature, it would allow for either behavior. Some clusters don't have 24/7 monitoring and an automated recovery method would be helpful.
@keith-turner Both points valid and agree that having a history of recent kills/lock removals and a back off mechanism should be part of the fix.
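For discussion's sake, the back-off idea above could be sketched as a sliding-window guard. This is a hypothetical illustration, not Accumulo code; `KillGuard` and its parameters are invented names:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: limit how many tserver kill/lock-removal actions the
// master may take within a sliding time window, so a transient network
// problem cannot trigger a mass kill.
public class KillGuard {
    private final int maxKills;        // max actions allowed per window
    private final long windowMillis;   // sliding window length
    private final Deque<Long> recent = new ArrayDeque<>();

    public KillGuard(int maxKills, long windowMillis) {
        this.maxKills = maxKills;
        this.windowMillis = windowMillis;
    }

    /** Returns true if a kill is permitted now, recording it if so. */
    public synchronized boolean tryKill(long nowMillis) {
        // Drop actions that have fallen out of the window.
        while (!recent.isEmpty() && nowMillis - recent.peekFirst() > windowMillis) {
            recent.pollFirst();
        }
        if (recent.size() >= maxKills) {
            return false; // back off: too many recent kills
        }
        recent.addLast(nowMillis);
        return true;
    }

    public static void main(String[] args) {
        KillGuard guard = new KillGuard(2, 60_000);
        System.out.println(guard.tryKill(0));      // true
        System.out.println(guard.tryKill(1_000));  // true
        System.out.println(guard.tryKill(2_000));  // false: window full
        System.out.println(guard.tryKill(70_000)); // true: window slid forward
    }
}
```

A real fix would also need the cluster-state checks described above (e.g., not acting while the master itself is partitioned), not just rate limiting.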
Industry 4.0
Exponential growth in data volume originating from Internet of Things sources and information services is driving industry to develop new models and distributed tools to handle big data. To achieve strategic advantages, effective use of these tools, and integration of the results into their business processes, is critical for enterprises. While there is an abundance of tools available on the market, they are underutilized by organizations because of their complexity: deployment and usage of big data analysis tools require technical expertise that most organizations don't yet possess. Recently, the trend in the IT industry has been towards developing prebuilt libraries and dataflow-based programming models to abstract users from the low-level complexities of these tools. The amount of information produced by IoT and today's manufacturing systems must be translated into actionable ideas. That's why Big Data classifies the information collected and draws relevant conclusions that help improve companies' operations in the following ways:
Improving warehouse processes: Thanks to sensors and portable devices, companies can improve operational efficiency by detecting human errors, performing quality controls and showing optimal production or assembly routes.
Elimination of bottlenecks: Big Data identifies variables that can affect performance, at no extra cost, guiding manufacturers in identifying the problem.
Predictive demand: More accurate and meaningful predictions thanks to the visualization of activity through internal analysis (customer preferences) and external analysis (trends and external events) beyond historical data. This allows the company to modify/optimise its product portfolio.
Predictive maintenance: Data-fed sensors identify possible failures in the operation of machinery before they become breakdowns, by detecting anomalies in patterns. The system sends an alert about the equipment so that operators can react in time.
Education 4.0 A vast "digital ocean" of data about learners is generated at universities and, if analysed, can provide valuable insights. Treating data as an asset and becoming a data-driven organization has become necessary for universities in the big-data era. The advantage is that universities gain the means to improve productivity, make operations more efficient, and change the way decisions are made, from opinion-based to fact-based, so they can make better, more informed decisions. Data-driven education enables universities to leverage educational data to get insights about the teaching-learning process and to make data-driven educational decisions based on student needs. Data-driven decision-making involves making use of data, such as the sort provided in virtual learning environments or Learning Management Systems (LMS), to inform teaching decisions. The values underlying learning analytics are to analyse student-learning data and its contexts in order to better understand and personalize student-learning experiences [12,13]. Figure 2 shows categories of data that educational institutions need to justify actions, guide actions and prescribe actions.
Future Jobs Big Data is everywhere, and there is an almost urgent need to collect and preserve whatever data is being generated, for fear of missing out on something important. There is a huge amount of data floating around; what we do with it is all that matters right now. This is why Big Data Analytics is at the frontier of IT. Big Data Analytics has become crucial as it aids in improving business and decision-making and provides the biggest edge over competitors. This applies to organizations as well as professionals in the analytics domain. For professionals skilled in Big Data Analytics, there is an ocean of opportunities out there. While other sectors of the IT industry are still struggling to create more jobs, Big Data is creating a great number of jobs, driven by companies' growing demand for different types of data. Companies take important decisions with the help of their business-centric data, which is collected, preserved, and provided by Big Data professionals, and for this they are highly paid. As the demand for different kinds of data increases, the demand for Big Data professionals increases with it. Although there are many other job opportunities in other sectors of the IT industry, Big Data is the future of IT jobs.
|
OPCFW_CODE
|
Main Article Content
This work started from the author's ignorance of exhibition cataloging. Whenever the author visited art exhibitions, a catalog was always present, which prompted curiosity about the role of catalogs in art exhibitions and led the author to examine the role of an art catalog more deeply. The problem addressed is how the catalog plays an important role in an exhibition. In the making process, the author uses the method obtained from the MBKM program. The purpose and benefit are to show how a catalog functions in an art exhibition and to develop the author's creativity in using digital media. In making the catalog, the author uses a creation method comprising several stages: observation, data collection, exploration, and the creation process. From this process the author produced an exhibition catalog containing 9 layouts of the contents of a fine art exhibition catalog. It can be concluded that the author creates works based on his interest in catalogs, with ideas originating from phenomena captured at the Rudana Museum and from reading reference sources on the internet. At the processing stage the author refers to the results of the MBKM. Through all these processes, the author hopes to find identity in the work.
Desain, Apri. 2021. "Apa Itu Katalog? Pengertian, Contoh, dan Fungsinya untuk Bisnis". https://www.apridesain.id/blog/pengertian-katalog/#Jenis-Jenis_Katalog. Accessed 27 December 2022, 02:00.
Gulendra, I Wayan. 2010. "Pengertian Garis dan Bentuk". http://repo.isi-dps.ac.id/141/1/Pengertian_Garis_dan_Bentuk.pdf. Accessed 22 April 2022.
Susanto, Mikke. 2004. Menimbang Ruang Menata Rupa: Wajah & Tata Pameran Seni Rupa. Yogyakarta: Galang Press.
Susanto, Mikke. 2017. "Katalog Pameran Seni Rupa". Urna: Jurnal Seni Rupa, Vol. 4, No. 1 (March 2016): 1-96. https://journal.unesa.ac.id/index.php/ju/article/view/1634.
Susanto, Mikke. 2019. "Katalog Anotasi: Pondasi Sekaligus Masa Depan (Arsip) Budaya/Seni di Indonesia". http://digilib.isi.ac.id/7185/1/13.%20Katalog%20Anotasi%20oleh%20Mikke%20Susanto.pdf.
Sobandi, Bandi. "Bahan Belajar Mandiri 6: Penyelenggaraan Pameran di Sekolah".
Tabhroni, Gamal. 2022. "Pameran Seni Rupa: Pengertian, Tujuan, Fungsi dan Persiapan". https://serupa.id/pameran-seni-rupa/.
Tansi, Rashid Bin, Aziz Ahmad, and Pangeran Paita Yunus. 2018. "Manajemen Pelaksanaan Pameran Studi Khusus Mahasiswa Pendidikan Seni Rupa Universitas Muhammadiyah Makassar". Diploma thesis, Universitas Negeri Makassar. http://eprints.unm.ac.id/id/eprint/11877.
Prameswari, Gischa. 2022. "Pengertian Seni". Kompas.com. https://www.kompas.com/skola/read/2022/04/18/163000069/pengertian-seni-menurut-paraahli?page=all. Accessed 16:30.
|
OPCFW_CODE
|
Recursive CTEs in SQLite not working?
rustc: 1.60.0
sqlx: 0.5.11
sqlite: 3.36.0
os: ubuntu 20.04
I've got a recursive CTE which I am calling with the query! macro that, simplified, looks like this:
WITH RECURSIVE datetimes(dt) AS (
VALUES('2022-04-29 22:00:00')
UNION ALL
SELECT datetime(dt, '-1 hour')
FROM datetimes
WHERE dt > datetime('2022-04-29 22:00:00', '-23 hour')
)
SELECT * FROM datetimes;
I'm getting the following error:
error: attempted to communicate with a crashed background worker
--> /home/michael/.cargo/registry/src/github.com-1ecc6299db9ec823/sqlx-0.5.11/src/macros.rs:315:9
|
301 | / macro_rules! query (
302 | | // in Rust 1.45 we can now invoke proc macros in expression position
303 | | ($query:expr) => ({
304 | | $crate::sqlx_macros::expand_query!(source = $query)
... |
315 | | $crate::sqlx_macros::expand_query!(source = $query, args = [$($args)*])
| | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ in this macro invocation (#2)
316 | | })
317 | | );
| |__- in this expansion of `sqlx::query!` (#1)
|
::: /home/michael/.cargo/registry/src/github.com-1ecc6299db9ec823/sqlx-macros-0.5.11/src/lib.rs:28:1
|
28 | pub fn expand_query(input: TokenStream) -> TokenStream {
| ------------------------------------------------------ in this expansion of `$crate::sqlx_macros::expand_query!` (#2)
|
::: backend/src/db.rs:542:22
|
542 | let activities = sqlx::query!(
| ______________________-
543 | | r#"
544 | | WITH RECURSIVE datetimes(dt) AS (
545 | | VALUES('2022-04-29 22:00:00')
... |
552 | | "#,
553 | | )
| |_____- in this macro invocation (#1)
Copied and pasted directly into DB Browser for SQLite, it works just fine. Is this a limitation in sqlx or, quite possibly, an oversight on my part?
Answered in Reddit: https://www.reddit.com/r/rust/comments/ubf0gg/_/i6trxjp?context=1000
Also, you could try the changes in #1816 to see if they fix your problem.
#1816 certainly gets me further. I'll keep an eye on that PR and, in the meantime, the non-macro version works well.
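For what it's worth, the CTE itself can be sanity-checked outside sqlx with any embedded SQLite, e.g. Python's `sqlite3` module; this only confirms the SQL is valid (as DB Browser already showed), i.e. the failure is on the macro side:

```python
import sqlite3

# Run the recursive CTE from the report directly against an in-memory SQLite.
conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE datetimes(dt) AS (
        VALUES('2022-04-29 22:00:00')
        UNION ALL
        SELECT datetime(dt, '-1 hour')
        FROM datetimes
        WHERE dt > datetime('2022-04-29 22:00:00', '-23 hour')
    )
    SELECT * FROM datetimes;
""").fetchall()
print(len(rows))  # 24 hourly timestamps, newest first
```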
|
GITHUB_ARCHIVE
|
USB Network Gate is a software that allows you to share USB devices and remote ports over the network. You can access USB devices that are connected to another computer, as if they were plugged into your own machine. This is useful for various scenarios, such as using a printer, scanner, webcam, or dongle from a remote location.
However, USB Network Gate is not free software. You need to purchase a license to use it without limitations. The price starts at $159.95 for one shared USB device. If you want to save some money, you might be tempted to use a keygen to generate a serial number for USB Network Gate 6.0.
A keygen is a program that creates unique codes or keys for software activation. Some keygens are legitimate tools that are used by developers or testers, but most of them are illegal and can contain malware or viruses. Using a keygen can also expose you to legal risks, as you are violating the software's terms of service and copyright laws.
Therefore, we do not recommend using a keygen for USB Network Gate 6.0 or any other version. Instead, you should download the official trial version from the developer's website and use it free for 14 days. You can share one local USB device during the trial period and test all the features of the software.
If you like the software and want to use it beyond the trial period, you should buy a license from the developer's website or from an authorized reseller. You will get a valid serial number that you can use to activate the software on your computer. You will also get technical support, updates, and upgrades for the software.
Using a keygen for USB Network Gate 6.0 is not worth the risk. You might end up with a malware-infected computer, non-working software, or legal trouble. Instead, you should use the official trial version or buy a license from the developer's website. This way, you can enjoy the benefits of sharing USB devices over the network safely and legally.
How to Share USB Devices over the Network with USB Network Gate 6.0
Now that you have downloaded the official trial version or bought a license for USB Network Gate 6.0, you can start sharing USB devices over the network. Here are the steps to follow:
Install USB Network Gate 6.0 on the computer that has the USB device connected to it. This will be the server computer.
Run USB Network Gate 6.0 and click on the "Share local USB devices" tab. You will see a list of all the USB devices that are attached to your computer.
Select the USB device that you want to share and click on the "Share" button. You can also adjust the settings for the shared device, such as compression, encryption, traffic optimization, and password protection.
Install USB Network Gate 6.0 on the computer that wants to access the shared USB device. This will be the client computer.
Run USB Network Gate 6.0 and click on the "Remote USB devices" tab. You will see a list of all the available shared USB devices on the network.
Select the shared USB device that you want to access and click on the "Connect" button. You might need to enter a password if the device is protected.
Wait for a few seconds until the connection is established. You can now use the shared USB device as if it were plugged into your own computer.
You can share and access multiple USB devices over the network with USB Network Gate 6.0. You can also use different operating systems, such as Windows, Mac, Linux, and Android. You can also use USB Network Gate 6.0 to share USB devices over the Internet, by using a public IP address or a domain name.
USB Network Gate 6.0 is powerful and versatile software that allows you to share USB devices over the network. However, you should not use a keygen to activate it, as that can be dangerous and illegal. Instead, you should use the official trial version or buy a license from the developer's website. This way, you can enjoy the benefits of sharing USB devices over the network safely and legally.
|
OPCFW_CODE
|
Just think for a minute about how many apps you use daily and how much time you invest in them. Are you now trying to figure out exactly how all these mobile apps work?
Then, this article will help you understand the important app development languages. So, here’s what you need to know.
The demand for mobile applications is increasing tremendously, and hence there has been a rise in mobile app developers for both Android and iOS.
For those who aspire to be app developers, there is a wide range of languages to choose from.
But, before we move ahead, let’s quickly cover the types of mobile app development on the basis of coding.
- Native Apps – apps coded in a language supported only by a specific device's operating system; e.g. native iOS vs native Android
- Hybrid Apps – apps coded once and able to run on multiple platforms
- Progressive Web Applications (PWA) – lightweight apps that run in a device's web browser while delivering an app-like look and feel
Now, here’s a list of mobile app development languages that help you filter out and choose the best that suits you.
- Python: As mentioned in the earlier article, Python has evolved and grown popular. It is fast to develop in, easy to use and learn, and has excellent readability. It supports multiple platforms and has a massive library of toolkits. However, it can be slow at runtime and has drawbacks with data access.
- Kotlin: This is an advanced version of Java. It has been one of the best languages for mobile apps as it has the potential to influence other languages. It has a clean and concise syntax. But, it is new in the market and hence a little hard to learn. So choose wisely!
- Rust: This is sponsored by Mozilla. It was developed with a focus on concurrency and safety, enforcing strict boundaries. What makes Rust important is its ability to detect errors at compile time, but it is difficult to learn.
- Scala: It is designed to address problems faced by Java. It is usually object-oriented and is preferable for functioning of lazy evaluation and pattern matching.
- Ruby: It is a reflective, object-oriented language. It has a particular structure for websites and mobile apps, and it also has automatic memory management.
- GoLang: Also known as Go, it comes from Google. It offers excellent concurrency support through lightweight threads (goroutines) and hence is used by many companies. It is a statically typed language, which makes it safer, and it has a clean syntax. It comes with a comprehensive standard library and so offers a wide range of built-in functions.
- R Programming: It is an open-source language that is not that popular yet, but it has a lot of potential. Its importance lies in visually representing data and performing statistical computing. It is also compatible with many platforms.
- PHP: This stands for Hypertext Preprocessor. It is one of the most recommended languages for mobile apps. It is a server-side open-source language; it is easy to learn, flexible, and can handle heavy data. Its built-in features protect against common security threats. Examples: Facebook, Yahoo, and Wikipedia. However, it can struggle with very large applications and can be difficult to maintain, so be careful!
- Swift: A perfect language for you if you want to start with iOS. It is an open-source language specially designed for the iOS, OS X and tvOS platforms. It has great scope, as iPads, iPhones, Apple watches, etc. run Apple's operating systems. It has pretty much taken over from Objective-C as the primary language, and since it has also been ported to Linux, it is available to all of us!
- Objective-C: This is an extended version, or a derivative, of the C language. It features Smalltalk-style messaging and is a mature, well-maintained language. It was used for iOS and OS X development long before Swift came into the picture. Though Swift has slowly taken over, Objective-C has not lost its charm, for two major reasons: a great deal has been invested in it, and a majority of apps even today rely on it. Hence, it is still a good idea to choose Objective-C!
There is no right or wrong in what you should learn. It all depends on what suits you, your business, and your goal. Each language has its pros and cons, and there are many other languages besides these.
|
OPCFW_CODE
|
It's sad to hear this. I'll send you a private message.
I'm having the same issue in Chrome, Firefox, and IE11... But not in Edge which works fine.
I'd prefer to use Chrome and to not have to reset my password every time.
It seems that there is some hidden field which is not working properly in the browsers exhibiting this issue.
It would be good to have a public explanation and resolution shared here instead of privately to expedite the resolution for other/future users having the same issue.
We are checking this behavior on our side. Everything works fine on our side so far in different browsers.
Give us some time for a deeper investigation.
I have resolved the issue for myself... The culprit was how I had initially saved the login credentials in LastPass.
Apparently I must have used the option to capture & save ALL field values which included a hidden Captcha response and other fields which may still be used to login, as can be seen in the following screenshot:
I tried deleting all but the minimum required form fields, but that didn't work. What fixed it for me was completely deleting the LastPass entry for the SmartBear community, then logging in using a fresh browser tab and saving the credentials as a new entry. I then confirmed that it continued working in the same and other browser sessions.
The reason Edge was working fine is because it doesn't (yet) officially support LastPass and so the hidden field values weren't being modified by LastPass when the page loaded.
You got me thinking about LastPass. While my ultimate issue was different from yours, it was also related to LastPass. For my saved site I had the URL pointed to https://community.smartbear.com/. Because I also have a separate AlertSite login, I should have saved the forum entry with the login screen URL instead. Once I updated the site URL to the login page, it works fine now.
Thanks for troubleshooting the issue to find out its cause. I'm glad to hear that the issue is resolved right now.
Thank you all for your responses here.
I have discovered one more issue and came up with the solution.
The issue: Even after resetting the password via the "forgot password" link, I could not log into the community site.
Solution: Open a brand new browser window in Incognito mode ( Chrome/Firefox), then copy your link from the reset password email into the browser address bar.
Complete the reset flow and now you'll be able to login to the site.
Note: I believe LastPass still has an issue if you try to go through reset password with it.
My problem is the same but has nothing to do with LastPass. I manually type in my login info.
Could you please check your Turn off auto-signin settings here?
|
OPCFW_CODE
|
package mb.nabl2.terms.substitution;
import static mb.nabl2.terms.build.TermBuild.B;
import io.usethesource.capsule.Set;
import mb.nabl2.terms.ITermVar;
import mb.nabl2.util.CapsuleUtil;
/**
* Class to generate fresh names, possibly relative to an already existing set of names. Generated fresh names are
* remembered, so subsequent calls do not generate the same fresh names. If the given name is already fresh, the name is
* kept unchanged.
*/
public class FreshVars {
private Set.Immutable<ITermVar> oldVars;
private Set.Immutable<ITermVar> newVars;
public FreshVars() {
this.oldVars = Set.Immutable.of();
this.newVars = Set.Immutable.of();
}
public FreshVars(Iterable<ITermVar> preExistingVars) {
this.oldVars = CapsuleUtil.toSet(preExistingVars);
this.newVars = Set.Immutable.of();
}
/**
* Add pre-existing variables, which cannot be used as fresh names.
*/
public void add(Iterable<ITermVar> preExistingVars) {
final Set.Transient<ITermVar> newVars = this.newVars.asTransient();
for(ITermVar var : preExistingVars) {
if(!oldVars.contains(var)) {
newVars.__insert(var);
}
}
this.newVars = newVars.freeze();
}
/**
* Generate a variable with a fresh name, and remember the generated name.
*/
public ITermVar fresh(String name) {
final String base = name.replaceAll("-?[0-9]*$", "");
ITermVar fresh = B.newVar("", name);
int i = 0;
while(oldVars.contains(fresh) || newVars.contains(fresh)) {
fresh = B.newVar("", base + "-" + (i++));
}
newVars = newVars.__insert(fresh);
return fresh;
}
/**
* Generate a variable with a fresh name based on the given variable, preserving its resource, and remember the
* generated name.
*/
public ITermVar fresh(ITermVar var) {
final String base = var.getName().replaceAll("-?[0-9]*$", "");
ITermVar fresh = var;
int i = 0;
while(oldVars.contains(fresh) || newVars.contains(fresh)) {
fresh = B.newVar(var.getResource(), base + "-" + (i++));
}
newVars = newVars.__insert(fresh);
return fresh;
}
/**
* Generate variables with fresh names, ensuring generated names do not overlap with variables in the set of
* freshened variables. Generated names are remembered. Returns a variable swapping.
*/
public IRenaming fresh(java.util.Set<ITermVar> vars) {
final Renaming.Builder renaming = Renaming.builder();
for(ITermVar var : vars) {
final String base = var.getName().replaceAll("-?[0-9]*$", "");
ITermVar fresh = var;
int i = 0;
while((vars.contains(fresh) && !var.equals(fresh)) || oldVars.contains(fresh) || newVars.contains(fresh)) {
fresh = B.newVar(var.getResource(), base + "-" + (i++));
}
newVars = newVars.__insert(fresh);
renaming.put(var, fresh);
renaming.put(fresh, var);
}
return renaming.build();
}
/**
* Keep the fresh names generated until now, even when reset() is called later.
*/
public Set.Immutable<ITermVar> fix() {
final Set.Immutable<ITermVar> fixedVars = newVars;
oldVars = oldVars.__insertAll(fixedVars);
this.newVars = Set.Immutable.of();
return fixedVars;
}
/**
* Reset the state to what it was at the last call to fix() or when the object was created.
*/
public Set.Immutable<ITermVar> reset() {
final Set.Immutable<ITermVar> resetVars = newVars;
this.newVars = Set.Immutable.of();
return resetVars;
}
}
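For illustration, the suffix-based freshening scheme used by FreshVars can be sketched outside Java; this Python function mirrors the same strip-suffix-then-append loop (the function and its signature are illustrative, not part of the nabl2 API):

```python
import re

def fresh(name, used):
    """Return a fresh variant of `name` not present in `used`, recording it.

    Mirrors the FreshVars scheme: strip a trailing numeric suffix (with an
    optional leading '-'), then try base-0, base-1, ... until unused. If the
    given name is already fresh, it is kept unchanged.
    """
    base = re.sub(r"-?[0-9]*$", "", name)
    fresh_name = name
    i = 0
    while fresh_name in used:
        fresh_name = f"{base}-{i}"
        i += 1
    used.add(fresh_name)
    return fresh_name
```

For example, with `used = {"x", "x-0"}`, freshening `"x"` yields `"x-1"`, while freshening an unused name returns it unchanged.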
|
STACK_EDU
|
Make OpenSteamClient fully Open Source by allowing the usage of Argon
Hello,
I'm posting this issue because I would like OpenSteamClient to become optionally fully open source by using an alternative to the proprietary steamclient.so, called Argon, which is written in C#:
https://github.com/emily33901/Argon
There's also a C++ version:
https://github.com/emily33901/argonx
I guess it would also make OpenSteamClient easier to maintain because you won't have to deal with the breaking changes of closed-source software.
Furthermore, it would allow you to make OpenSteamClient natively compatible with the ARM64 architecture as well as macOS.
So would you be able to use it or would it be too hard or undoable?
As far as I can tell, both of those have been abandoned for 5 years or so. But if there is some active open-source alternative we'd be happy to look into it.
Maybe later. My current priority is getting everything up and running with the official library for best compat.
It's not impossible though, that's for sure.
Lots of people so far have told me various things are impossible, like:
Using C++ classes from C# (already done)
Building native code and shipping it with OpenSteamClient all in one step (already done)
Hosting NuGet packages without a codesigning cert (already done)
Making a project like this with no prior experience with:
Project management
Reversing
Hooking
C/C++
Making an overlay (on the backburner, currently as a private repo)
And lots more I don't even remember
Nothing is impossible if given enough time and motivation.
The biggest issue with reimplementing the core library would be that we'd essentially be breaking the DRM/making things easier for pirates (even if we tried our hardest not to), since it's really trivial to go about editing the code so that all DRM checks pass, for example.
If we did decide to start making this, we'd probably only use argon as a reference for stuff instead of resurrecting it from the dead, since the API has surprisingly changed quite a lot, even in the past year or two.
only publish it as pre-compiled, encrypted and obfuscated retargetable object (not actual executable).
Wtf is wrong with you? This breaks the entire idea of making an open source Steam client. Unless this is sarcasm and not stupidity, because I can't really differentiate between them nowadays.
I personally think that making our own library would be a great idea. It's just that I'd rather not upset Valve and have the whole project taken down because of it.
As it stands currently, you can use your own steamclient.(dll/so) by disabling bootstrapper verification and simply replacing the file, although I can provide a better way to use a custom library if there's demand.
And yeah, I'm not a huge fan of obfuscating things or providing compiled executables only in the name of anti-piracy.
Also just throwing this out there: Valve seems to have the opinion that protecting your game with DRM should be done yourself, and not by Valve or Steam.
There's also one last issue I'd like to voice concerns about, the SteamService and VAC.
We basically have 3 options with VAC:
Reimplement VAC (seems like a bad idea)
Don't allow VAC (leave unimplemented)
Not a huge loss as Valve seems to have blocked OpenSteamClient recently anyways
Use Valve's closed-source steamservice
Is 32-bit only :(
You can't play on VAC secure servers with OSC, it boots you out of them (but doesn't ban thankfully), while previously it used to work fine.
I'd presume this is because we don't have those .valvesig sections on our binaries.
Nothing that affects anything outside VAC games, though.
But with the tech they have in place, they could easily block OSC fully (login) if they so decided.
In terms of DRM, it's really easy to crack Steam DRM. Just swap the DLLs with Mr. Goldberg's Steam emu and run it through Steamless. (I do not support piracy; I am mentioning these tools to allow use of legally obtained games on OSC, and to say that reimplementing it doesn't make it any easier for pirates.)
|
GITHUB_ARCHIVE
|
Philip Martin wrote:
> Branko Čibej <brane_at_xbc.nu> writes:
>> On 06.09.2010 12:16, Philip Martin wrote:
>>> To use a per-directory query strategy we would probably have to cache
>>> data in memory, although not to the same extent as in 1.6. We should
>>> probably avoid having Subversion make status callbacks into the
>>> application while a query is in progress, so we would accumulate all
>>> the row data and complete the query before making any callbacks. Some
>>> sort of private svn_wc__db_node_t to hold the results of the select
>>> would probably be sufficient.
>> I wonder if per-directory is really necessary; I guess I'm worrying
>> about the case were the WC tree has lots of directories with few files.
>> Do we not have the whole tree in a single SQLite DB now? Depending on
>> the schema, it might be possible to load the status information from the
>> database in one single query.
> Yes, per-tree would probably work but I expect most WCs have more
> files than directories so the gains over per-dir would be small. One
> big advantage of doing status per-tree is that it gives a proper
> snapshot, the tree cannot be modified during the status walk. I'm not
> pushing per-dir as the final solution, my point is that per-node
> SQLite queries are not going to be fast enough.
There are actually two or three reasons why status should
run queries on directory granularity:
* directories usually resemble files in that opening them is
expensive relative to reading their content
* operation can be canceled in a timely manner (may or may
not be an issue with huge SQL query results)
* maybe: queries for a specific folder may be simpler / faster
than for sub-trees (depends on schema)
Also, I don't think there is a need to cache query results.
Instead, the algorithm should be modified to look like this:
// get all relevant info; each array sorted by name
stat_recorded = sql_query("BASE + recorded change info of dir entries")
stat_actual = read_dir()
prop_changes = sql_query("find prop changes in dir")
// "align" / "merge" arrays and send results to client
foreach name do
recorded = has(stat_recorded, name) ? stat_recorded[name] : NULL;
actual = has(stat_actual, name) ? stat_actual[name] : NULL;
changed_props = has(prop_changes, name) ? prop_changes[name] : NULL;
// compare file content if necessary
if (recorded && actual && needs_content_check(recorded, actual))
actual = check_content(name)
send_node_status(recorded, actual, changed_props)
Only two SQL queries (give or take) per directory.
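The align/merge loop above can be written out as runnable Python, using dicts keyed by entry name as stand-ins for the sorted arrays; the three callbacks (`needs_content_check`, `check_content`, `send_node_status`) are the same placeholders as in the pseudocode:

```python
def directory_status(stat_recorded, stat_actual, prop_changes,
                     needs_content_check, check_content, send_node_status):
    """Merge per-directory query results and emit one status per name.

    stat_recorded: BASE + recorded change info from the SQL query.
    stat_actual:   on-disk info from read_dir().
    prop_changes:  property changes from the second SQL query.
    """
    names = sorted(set(stat_recorded) | set(stat_actual) | set(prop_changes))
    for name in names:
        recorded = stat_recorded.get(name)
        actual = stat_actual.get(name)
        changed_props = prop_changes.get(name)
        # compare file content only when both sides exist and it is needed
        if recorded and actual and needs_content_check(recorded, actual):
            actual = check_content(name)
        send_node_status(recorded, actual, changed_props)
```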
Received on 2010-09-08 12:25:52 CEST
|
OPCFW_CODE
|
As I said last November here on HealthBlog, “Let’s face it - telemedicine has been around almost as long as television itself”. I reminded you that telemedicine used to be technology that only the military, government agencies, and large academic medical centers could afford. The equipment was every bit as big and expensive as you might find in a traditional television studio. And, the distribution system was even more expensive requiring dedicated copper landlines or later, dedicated glass fiber or satellite connections.
With the dawn of the Internet and the availability of high-speed broadband connectivity, telemedicine became not only more affordable but far more practical. Codecs improved to the point that delivering high-quality voice and video over the Net has become a commodity capability built into just about every PC. But providing a telemedicine service is a bit more complicated than establishing a point-to-point audio-video connection between two parties. Ideally, you also need a communication and collaboration platform to organize and manage the connectivity.
That brings me to our summer “encore presentation” of Microsoft Health Tech Today with my special guest Ron Emerson, global director of healthcare for Polycom. On this program you’ll learn how Microsoft and Polycom are defining a new generation of eHealth, telehealth and telemedicine solutions. By using Microsoft Lync, a communication and collaboration platform that brings together VoIP, messaging, e-mail, video and web conferencing, all nicely integrated with the company’s information worker productivity solutions, Polycom is able to provide a seamless telemedicine experience. Polycom adds value to the platform with its dedicated clinical workstations with high-definition audio and video capture capabilities. The units also connect to a myriad of diagnostic tools such as otoscopes, fundoscopes, stethoscopes, and other clinical devices. This creates an end-to-end telemedicine capability that leverages the best of each company’s technologies.
On the show, we demonstrate a coast to coast telemedicine session using Microsoft Lync, Polycom’s clinical workstation, and some of the diagnostic tools that one might typically need when examining a patient. You’ll be able to appreciate not only the ease by which this is done, but also the high quality of the diagnostic images and collaborative capabilities of the platform. It is really quite remarkable and definitely not your grandfather’s telemedicine.
You can watch the show right here on HealthBlog. Just click on the image below. Of course, you can continue to watch Health Tech Today on our landing page, as well as on Microsoft Showcase and on our YouTube channel if you prefer.
In the weeks ahead, I’ll be telling you about another encore edition of our series. Then, on September 7th, we’ll be releasing brand new shows of Health Tech Today - the on-line, on-demand video series at the intersection of health and information technology.
Bill Crounse, MD Senior Director, Worldwide Health Microsoft
|
OPCFW_CODE
|
<?php

namespace Downloader\Test;

use Downloader;

/**
 * Class ColorsTest
 * @package Downloader\Test
 */
class ColorsTest extends \PHPUnit_Framework_TestCase
{
    /**
     * @param string $status
     * @param string $expected
     * @dataProvider getColorProvider
     */
    public function testGetColor($status, $expected)
    {
        $color = new Downloader\Colors('test', $status);
        $this->assertEquals($expected, $color->getColor($status));
    }

    /**
     * Each status maps to an ANSI background-colour format string;
     * an unknown status is expected to fall back to the warning colour.
     *
     * @return array
     */
    public function getColorProvider()
    {
        return [
            ['failure', chr(27) . '[41m%s' . chr(27) . '[0m'], // red
            ['success', chr(27) . '[42m%s' . chr(27) . '[0m'], // green
            ['warning', chr(27) . '[43m%s' . chr(27) . '[0m'], // yellow
            ['note',    chr(27) . '[44m%s' . chr(27) . '[0m'], // blue
            ['unknown_status', chr(27) . '[43m%s' . chr(27) . '[0m'], // warning fallback
        ];
    }
}
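For readers unfamiliar with the magic strings in the data provider above: they are standard ANSI background-colour escape sequences, and `chr(27)` is the ESC character. A quick, purely illustrative Python sketch of what the tested format strings expand to:

```python
# ANSI background-colour format strings, matching the PHP data provider above.
ESC = chr(27)  # escape character, "\x1b"
COLORS = {
    'failure': ESC + '[41m%s' + ESC + '[0m',  # red background
    'success': ESC + '[42m%s' + ESC + '[0m',  # green background
    'warning': ESC + '[43m%s' + ESC + '[0m',  # yellow background
    'note':    ESC + '[44m%s' + ESC + '[0m',  # blue background
}

# An unknown status would fall back to the warning colour,
# matching the last row of the data provider.
print(COLORS['success'] % 'download ok')
```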
$T$-dependence of magnetic susceptibility for the 2D Heisenberg model
As part of a numerical study of the 2D classical ferromagnetic Heisenberg model, I was asked to plot $\ln(\chi)$ vs $T^{-1}$ and determine the $T$-dependence of the magnetic susceptibility in the low-temperature regime. Below is the plot I produced with a Monte Carlo simulation. I could try to fit the tail of the plots with a polynomial or exponential fit function, but the point is that I don't know what to expect, and it seems it would in any case be a non-trivial dependence.
I have searched the literature for results but had no luck, so do you have any hints or references that could help me?
EDIT: I've added the linear-linear and log-log plots. The log-log seems to show a linear behaviour but again, I don't know what to expect and I don't see why the book would say to plot $\ln(\chi)$ vs $T^{-1}$. (The book is "An Introduction to Computer Simulation Methods" by Harvey Gould, Jan Tobochnik, and Wolfgang Christian)
Did you check what the curve looks like in a linear-linear plot and in a log-log plot? These are the usual first steps to get an intuition about the fit. Just with this log-linear plot, you don't have much information apart from the fact that it is not an exponential function. It could be a power law.
I've added the plots in linear-linear and log-log scales.
So definitely, the tails are well-fitted by power laws.
I have to agree on that. Do you have any idea why this would be the case, or why the authors of the book suggest studying the dependence of $\chi$ on $T$ at low $T$? Any hint or reference would help.
It is indeed suspicious that the authors suggested the log-linear plot. From a physical perspective I would also expect an exponential decay; the power-law decay is usually linked with critical slowing down. That is, usually all the quantities have a characteristic scale (exponential laws), and close to the critical point these characteristic scales vanish. However, I've never studied the Heisenberg model. Does it have a critical point like the Ising model? The susceptibility should diverge close to the critical point... Anyway, it's never too late for a code revision ;)
The 2D Heisenberg model does not have a phase transition like the Ising model. This is a result of the Mermin-Wagner theorem. I've found in the literature that it's quite hard to get rid of the finite magnetization even for very big system sizes. Some authors talk of pseudocritical behaviour. In any case, I'll try to check the code again and compare my results with what I've found online.
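For completeness: extracting a power-law exponent from the tail is just a linear fit on the log-log data. A minimal numpy sketch with synthetic data (the exponent and temperature range here are invented for illustration, not taken from the simulation):

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.linspace(0.05, 0.5, 40)        # low-temperature range (arbitrary units)
a_true = 2.0                           # hypothetical exponent, chi ~ T**(-a)
chi = T ** (-a_true) * np.exp(rng.normal(0.0, 0.01, T.size))  # noisy power law

# On a log-log scale a power law is a straight line with slope -a.
slope, intercept = np.polyfit(np.log(T), np.log(chi), 1)
a_est = -slope
print(f"estimated exponent: {a_est:.3f}")
```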
Boto3 is Python's library for interacting with AWS services. When we want to use AWS services, we need to provide our user's security credentials to boto3. Like most things in life, we can configure or use user credentials with boto3 in multiple ways. Some are the worst and should never be used; others are the recommended ways. In this blog, let us take a look at how to configure credentials with boto3.
Using as method parameters
This is the easiest way to use user credentials with boto3, and in my opinion it is the worst way to configure boto3. Here we simply pass our access key ID and secret access key to boto3 as parameters while creating a service client or resource.
import boto3

# Hard-coded strings as credentials, not recommended.
client = boto3.client(
    's3',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    aws_session_token=SESSION_TOKEN,
)
With this approach, user keys are visible to everyone. If you commit such code to GitHub, anyone who has access to your repository can use these credentials and gain access to your AWS account. I do not need to tell you what can happen next. That is why I recommend not using this way of setting AWS credentials with boto3.
Use a common configuration file
One simple way to abstract the access key and secret access key away is to start the session in another file.
# via the Session
session = boto3.Session(
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    aws_session_token=SESSION_TOKEN,
)
Then you can import this session file in another Python file and use it to start AWS sessions to connect with services.

import aws_session

sqs = aws_session.session.client('sqs')
Though this method keeps the credentials out of your main code, there is still an issue when sharing your code with someone or adding it to GitHub: you have to make sure not to commit the session file.
Using environment variables
Boto3 automatically checks for environment variables. If it finds these variables, it will use them to connect to AWS.
- AWS_ACCESS_KEY_ID - The access key for your AWS account.
- AWS_SECRET_ACCESS_KEY - The secret key for your AWS account.
Once you set these environment variables, you can directly create a boto3 client or session for a service. In the background, boto3 will use these keys to communicate with AWS.
import boto3

# uses credentials from environment variables
s3 = boto3.client('s3')
With this approach, you can be sure that your access key is only used on your machine and is not easily visible to anyone watching. You can also share your code on GitHub or with another person without any worries about exposing your user credentials.
One drawback of this method is that when you have multiple AWS users (from different AWS accounts or for different AWS roles), switching between them becomes difficult. You have to change the environment variables each time you want to use a different user. And what happens when you want to use multiple users' credentials in one single session, like copying files from an S3 bucket in one account to an S3 bucket in another?
Using AWS CLI profile
This is similar to setting environment variables on your machine. In this case, boto3 uses the credentials you provided when setting up the default profile while configuring the AWS CLI. You can learn more about how to configure the AWS CLI here.
Once you have configured AWS CLI, you can directly use boto3 to create a service client or resource.
import boto3

# uses credentials from the default profile of the AWS CLI
s3 = boto3.client('s3')
But this approach has the same drawback: what if we have multiple user profiles? Can we tell boto3 which profile to use when connecting to AWS? Well, of course we can.
Using multiple AWS CLI profiles
There is a simple way to state the profile name when initiating a client in boto3.
import boto3

# Set up a configured profile on your machine.
# You can skip this step if you want to use the default AWS CLI profile.
boto3.setup_default_session(profile_name='admin-analyticshut')

# This will use the user keys set up for the admin-analyticshut profile.
s3 = boto3.client('s3')
We have seen different ways to configure credentials with boto3. Configuring AWS CLI profiles and using different profiles depending on our needs is the way to go! Hope this helps. If you have any questions, please let me know.
[AuthGuard] Throttling navigation to prevent the browser from hanging
I have an auth guard with forced login as described in https://github.com/jeroenheijmans/sample-angular-oauth2-oidc-with-auth-guards
Sometimes the screen is stuck and I get Chrome errors: "Throttling navigation to prevent the browser from hanging"
Browser [Chrome]
Version [80.0.3987.149]
Sorry to hear you're having issues. But your description isn't much to go on....
Could you add a minimal reproducible scenario (e.g. a stackblitz or minimal repository) so a community member could check to help you out?
Hi !
It seems to happen when I access my app via the default URL with no context (i.e. http://mydomain/).
If I use a route (e.g. http://mydomain/#/toto), the guard works fine and the browser does not hang in a loop.
Here is my routing configuration :
{
  path: 'home',
  canActivate: [AppAuthGuard],
  loadChildren: './pages/home/home.module#HomeModule',
  data: { roles: ['ROLE_USER'] }
},
{
  path: 'toto',
  canActivate: [AppAuthGuard],
  loadChildren: './pages/toto/toto.module#UserModule',
  data: { roles: ['ROLE_USER'] }
},
{ path: '**', redirectTo: 'home' }
I think I had the same issue. In my case it resulted from using a routed module as child routes while using the AuthGuard something like this:
const subModuleRoutes: Routes = [
  { path: '', component: SomeComponent, canActivate: [AuthGuard] }
];
And then in the main routing module:
const routes: Routes = [
  { path: '', component: LoginSpinnerComponent },
  {
    path: 'dashboard',
    component: DashboardComponent,
    canActivate: [AuthGuard],
    children: [
      { path: '', component: SomeOtherComponent },
      { path: 'sub-route', loadChildren: () => SubRoutingModule },
    ],
  },
  { path: '**', component: PageNotFoundComponent },
];
The redundant AuthGuard in the main routes and in the sub-routes resulted from refactoring, and I didn't catch that immediately. My AuthGuard checked the login status based on angular-oauth2-oidc and, if not logged in, sent the user with this.router.navigate(['']) to the application root, where the login rerouting logic is.
Now the crux was that Angular got confused and sometimes still loaded the SubModule in the root router-outlet. The clashing default routes of the SubModule and the AppModule caused the error to occur only sometimes. If the SubModule was used, that meant the application tried to load a route protected by the AuthGuard. Since the user was not logged in, the AuthGuard sent the user to the application root, which, problematically, was the guarded route again. And thus an endless navigation cycle was created, which caused the "Throttling navigation to prevent the browser from hanging" error to appear and, in the end, an out-of-memory error.
The solution is to take a close look at the routes and where the navigation cycle appears.
@RomainWilbert I'm not sure if you solved your problem yet, otherwise it might be helpful to share what you're doing in your AuthGuard (the link you posted does not work for me). Maybe you have a similar routing cycle as me. It seems probable to me since your home route is also secured with the AuthGuard but also the target of your generic redirection route.
Thx so much for sharing @jschmuecking! Hoping OP has some use for it.
Since we didn't really get to a reproducible scenario or example respository or StackBlitz example, I'll close the question for now. Feel free to get back to us if you have enough info to reopen the issue. (Or consider asking a question on e.g. Stack Overflow.)
> @RomainWilbert I'm not sure if you solved your problem yet, otherwise it might be helpful to share what you're doing in your AuthGuard [...]
canActivate(
  route: ActivatedRouteSnapshot,
  state: RouterStateSnapshot,
): Observable<boolean> {
  return this.authService.isDoneLoading$
    .pipe(filter(isDone => isDone))
    .pipe(tap(_ => this.isAuthenticated || this.authService.login(state.url)))
    .pipe(map(_ => {
      let granted: boolean = false;
      const requiredRoles = route.data.roles;
      if (!requiredRoles || requiredRoles.length === 0) {
        granted = true;
      } else {
        for (const requiredRole of requiredRoles) {
          if (localStorage.getItem('roles') &&
              localStorage.getItem('roles').indexOf(requiredRole) > -1) {
            granted = true;
            break;
          }
        }
      }
      if (granted === false) {
        this.router.navigate(['/home']);
        console.log('granted is false');
      }
      return granted;
    })
  );
}
I found out that even when I was not authenticated, the script still executed the role check, and therefore redirected to /home when access was not granted (home is protected by the guard as well).
I resolved this by adding a check that the user is authenticated before checking the roles.
O'Reilly Network: Navigating a Sea of Information
How to Find Exactly What You Need Now
Apr. 9, 2004 12:00 AM
When you need quick solutions for time-critical projects, vast quantities of technical information from thousands of sources are within easy reach. But getting the right answer is often like finding the proverbial needle in a haystack. Information on message boards, news groups, and the Web at large is readily available, but it is not consistently reliable. Trade publications and vendors provide a limited range of answers. Technical books, while dependable, can take lots of time to weed through - assuming you have the right book.
There's a better way. A recent study by The Ridge Group of Princeton, New Jersey found that, when programmers and other IT professionals need information, electronic reference library (ERL) services have a clear advantage. The group compared one ERL, Safari® Tech Books Online (available through the O'Reilly Network Safari® Bookshelf), to other information resources and concluded that the service delivers savings of approximately 24 times its cost.
Unlike an online bookstore, or static e-books, an ERL is a repository of programming and IT books that allows you to search multiple books simultaneously to find and extract information. You can read books cover-to-cover online or, more likely, jump to the exact page you need. You can cut and paste code directly, to save time and eliminate programming errors. And to find information, you can browse books by category, or locate specific books quickly by searching by ISBN, author, title, publisher, or publication date.
Why ERLs Are Better: The Safari Advantage
Subscribers to Safari had a different story. When asked how the ERL affected their ability to research solutions or find code, users reported that Safari saved them an average of 13.5 hours per month, or just over 4 labor-weeks per year - nearly half the amount of time lost by those who didn't subscribe to the service.
Safari is a joint venture between two leading technology publishers, O'Reilly & Associates and The Pearson Technology Group (whose imprints include such well-known names as Addison-Wesley Professional, Cisco Press, Peachpit Press, Prentice Hall PTR, Sams, and Que). Among the subscribers interviewed, 86% stated that Safari helped them become better prepared to handle new projects that involved new technologies, while a full 66% acknowledged that because Safari is accurate, they spent less time on re-work.
Sun and America Online Agree
Because Safari is publisher-backed, the service gets first-hand access to the latest from O'Reilly and The Pearson Technology Group, along with content from other publishers such as Microsoft Press. Currently, Safari offers more than 2000 technical titles. Subscribers can search through all those books onscreen, or download PDF copies of selected chapters.
To start searching Safari today, go to http://oreilly.com/safari-syscon.
For information about corporate site licenses, contact email@example.com or 800-998-9938.
… I have a passion for social change. Work with me because I want to promote a more transparent and collaborative academia, where we work with, instead of just for, communities. Work with me because I am not afraid to try new things, and work through mistakes and hurdles to make things work and make them better. Work with me because you're looking for an ethnographer who can connect with people of all backgrounds. Work with me because I love to collaborate with people from all backgrounds, academic and non-academic, employer or business owner, charity or beneficiary, and everyone in between, because I believe that everyone has something to say, and has the right and the skills to contribute to knowledge and our understanding of the world.
My research interests include, but are not limited to:
Community, Policies and a Changing world
Geographies of community history, and how these influence contemporary communities and their progress into and envisioning of the future, are the key focus of my current PhD project. I'm interested in the experiences of people living in local communities guided by national policies and affected by societal change, and in how local businesses, organisations and governments deal with the perhaps misunderstood but vast differences between the issues affecting a nation and those affecting only particular parts of the country, or even individuals. Where there are big changes, people are going to be affected. This should not be just a given fact, but a consideration for all of those who are behind the wheel of those big changes. We have to explore how these big changes (might) affect individual lives in order to see if we can make sure that nobody gets left behind.
Colonial History and Postcolonial Future
During my first degree I developed a passion for British and, more generally, European colonial history and its aftermath, focusing on things such as inequalities, racism, and how our currently globalizing world deals with this history when trying to look at and envision the future. Despite mainly focusing on community and policy in my current project, in future research projects I hope to contribute to a developing academic canon that seeks to improve our understanding of this post-colonial world in cultural, sociological and geographical contexts. I believe the world is an amazing place, shaped by centuries of intercultural contact and conflict, all culminating in an ever more connected and globalized form of community. In what forms does the past still manage to survive in the present, and will we carry it on forever into our future, and if so, how? As a lesson or as a guide?
Words over Numbers
Numbers are great. Especially the numbers 42, 1024, 1701 and 1337. If you’re a nerd like me, you’ll understand. Nonetheless, numbers, when talking of statistics, can give us a lot of useful information about what is going on in the world. We can see which communities are the wealthiest, and which are the most deprived. And then what? We could seek out and publish a lot of other numbers, for example numbers providing us with potential strategies to either maintain or improve the situation as it is. But what does it tell us about the community itself? Communities, counties, countries and the world do not consist of numbers to me. They are made up of people, experiences, loves and hates, passions and wishes, mistakes and successes; they are made up of stories. Without knowing the people and their stories, a strategy aiming to improve whatever situation will be generic rather than fitting the people who are supposed to benefit from the changes instigated by this strategy. Therefore, I consider numbers a great starting point, to guide us towards the places and people who are in need of help. But without getting to know those places and people, how can we know what they need?
What are good Python conferences to see?
enterPy: The new conference for Python in Business, Web and DevOps
Python is growing in popularity and unites a huge community. There are good reasons for this: The language is easy to learn, universally applicable and quickly provides usable applications. EnterPy is now aimed specifically at professionals who use Python productively in companies or who intend to do so because they want to exploit the potential of the programming language in data analysis, machine learning, web programming or in the DevOps environment.
Premiere in May 2020
The premiere of enterPy will take place on May 25 and 26, 2020 in the Congress Center Rosengarten in Mannheim. The workshop day with all-day, in-depth tutorials is followed by the actual conference with two parallel lecture tracks. In addition to the basics and advanced topics relating to Python development, there is a focus on practical experience from concrete projects in a wide variety of industries. EnterPy wants to show how new developments from Python, but also tools and techniques, can be used profitably in the corporate context in practice.
Business practice: From newcomers and migrants to professionals
As part of the Call for Proposals (CfP), which is now open, the organizers are looking for workshop and lecture submissions on the most important topics, from Python basics to deep dives, frameworks and tools, web, DevOps, security and testing, through to data science and machine learning. Concrete help for beginners and those switching is also desired, including, for example, prospective data scientists or C/C++/Java developers who want to get into Python. Proposals for 45-minute lectures and all-day workshops can still be submitted until January 24th.
EnterPy is organized by heise Developer, iX and dpunkt.verlag, all of which are part of the Heise Group.
Predictive maintenance: data science for utility companies
Geospatial Data Scientist
When you think of Geospatial Data Science, you might think of big innovative ideas from forward-thinking dreamers, who sometimes forget for a moment that you have to use data for a practical purpose. Predictive maintenance is an example of an extremely valuable practical application of data science. In this blog, I tell you more about asset management based on predictive insights.
Asset management based on predictions
Efficient asset management benefits from good maintenance predictions. When you manage tens of thousands of assets spread across several provinces and located above, below or on the ground, maintenance without high-level planning is impossible. I can picture the large filing cabinets from the days when computers did not exist, and the vans of maintenance workers who would 'go and see how things were' based on random sampling. I wasn't there, so I don't know for sure. But I can imagine that asset management in 1979 was pretty labour-intensive.
Predictive maintenance was conceived from the idea that there is simply no time or money to send maintenance teams out with a risk of 'false alarm'. In addition, 'outside is inside' is increasingly the aspiration of organisations whose work takes place largely outdoors. The time when digital twins display the state of all your assets in real time has not yet arrived, but the GIS world is moving rapidly in that direction.
leakage analysis dashboard (fictitious)
Predictive asset management, in other words: predictive maintenance. I like to make the phenomenon concrete with an example: a dashboard with an alert score that shows the risk of pipeline failure at a glance, so that maintenance crews know where to go first and do not have to determine the highest urgency based on a (partially) subjective assessment.
Image 1: predictive maintenance dashboard for pipeline maintenance. Text continues below the image.
This dashboard is not based on current measured data, but on a predictive model. When you set up a dashboard based only on current data, there is a greater risk of lagging behind reality.
I created this predictive model using PyCaret, a low-code machine learning library that allows you to create sophisticated machine learning workflows. In PyCaret, I created a predictive model for each attribute that provides input to the dashboard. In addition, I set up an email alert for all pipelines that have an excessive Alert Score, so that when this happens, the relevant maintenance team immediately receives an email with the location of the pipeline.
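The dashboard above was built with PyCaret; as an illustration of the underlying idea only (train a classifier on historical asset data, then turn the predicted failure probability into an alert score), here is a minimal sketch using scikit-learn and entirely synthetic, hypothetical pipeline features. None of this is the actual model behind the dashboard:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical training data: one row per pipeline segment.
n = 500
age = rng.uniform(0, 60, n)            # years in service
diameter = rng.uniform(100, 600, n)    # mm
corrosivity = rng.uniform(0, 1, n)     # soil corrosivity index
X = np.column_stack([age, diameter, corrosivity])

# Synthetic label: older pipes in corrosive soil fail more often.
p_fail = 1 / (1 + np.exp(-(0.08 * age + 2.0 * corrosivity - 4.0)))
y = rng.random(n) < p_fail

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Alert score = predicted failure probability, scaled to 0-100;
# segments above a threshold would trigger the email alert.
alert_scores = 100 * model.predict_proba(X)[:, 1]
flagged = np.flatnonzero(alert_scores > 80)
print(f"{flagged.size} of {n} segments above the alert threshold")
```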
A common saying in the GIS world is 'a line is a line'. This also holds for predictive maintenance: you can use this same dashboard to show the state of power lines, gas pipes or other assets at a glance; technically, the principle is the same. Tensing is a recognised Esri Utility Network Specialist (an advanced ArcGIS data model for utility, telecom and infrastructure companies, among others), so I am in a luxury position: if I want to know specific things about a particular asset type, I always have a colleague with specific knowledge available.
The value of predictive maintenance is increasing
The number of assets to manage has been increasing for years at almost all our customers across different industries. Due to ongoing construction, the number of assets, and with it the total amount of asset data, is constantly increasing. In addition, the labour market remains tight, so organisations are constantly looking for ways to do more work with smaller departments.
And then there is the impact of climate change. Municipalities, provinces, the state, utilities: they are all innovating to contribute to a sustainable world. In addition, climate change also has a direct impact on the lifespan of assets.
Image 2: Asphalt maintenance can vary from country to country or even area to area. Text continues below the image.
Let's take asphalt as an example. The composition of asphalt is different in the Middle East than it is in Lapland, as different weather conditions call for different production and maintenance methods. I can well imagine (I'm a Geospatial Data Scientist, not a road engineer, I should add) that these kinds of additional factors drive the demand for smart data use even further.
Strategic Market Research, a US market research firm, concludes that the total market value of predictive maintenance of industrial machinery in the US will grow from $4.32 trillion to $45.75 trillion between 2021 and 2030. Even if the relative growth in the market value for predictive maintenance of assets is only a fraction of these figures, we are already talking about a small revolution.
The world is becoming increasingly complex and crowded. I am certain that predictive maintenance is currently still in its infancy and that organisations have a lot to gain in this area. Interested in exploring the possibilities of predictive maintenance? Feel free to contact me!
from tkinter import *
from tkinter import messagebox
from mininetbackend import MyNet

top = Tk()
width = top.winfo_screenwidth()
height = top.winfo_screenheight()
c = Canvas(top, bg="gray", height=height - 200, width=width - 200)
filename = PhotoImage(file="comp.png")
switchImage = PhotoImage(file="switch.png")
net = MyNet()

class Host:
    def __init__(self, x, y, hasLinkWith, isSwitch):
        self.x = x
        self.y = y
        self.hasLinkWith = hasLinkWith
        self.isSwitch = isSwitch
        self.node = None

hostList = []

# 55x82 = image size of a host
# 100x44 = image size of a switch
widthImage = 55
heightImage = 82
widthImageSwitch = 100
heightImageSwitch = 44

addhostActive = False
addlinkActive = False
addSwitchActive = False
h1Select = -1
h1x = -1
h1y = -1
h2Select = -1
h2x = -1
h2y = -1

def addSwitch():
    global addSwitchActive
    addSwitchActive = True
    c.bind('<Button-1>', MouseClick)

def addHost():
    global addhostActive
    addhostActive = True
    c.bind('<Button-1>', MouseClick)

def addLink():
    global addlinkActive
    addlinkActive = True
    c.bind('<Button-1>', MouseClickLink1)

B = Button(top, text="Add host", command=addHost)
C = Button(top, text="Add link", command=addLink)
D = Button(top, text="Add Switch", command=addSwitch)

def MouseClickLink2(event):
    global addlinkActive
    global h1Select
    global h2Select
    global h1x, h2x, h1y, h2y
    if h1Select != -1:
        firstIsSwitch = False
        for index, obj in enumerate(hostList):
            if index == h1Select:
                firstIsSwitch = obj.isSwitch
        for index, obj in enumerate(hostList):
            # bounding-box hit test around the clicked point
            if (event.x < obj.x + widthImage and event.x + widthImage > obj.x
                    and event.y < obj.y + heightImage and event.y + heightImage > obj.y):
                if firstIsSwitch == False and obj.isSwitch == False:
                    messagebox.showinfo(
                        title='Error',
                        message='Link can not be created between two hosts.')
                    C["state"] = NORMAL
                    return
                h2Select = index
                h2x = obj.x
                h2y = obj.y
                c.create_line(h1x, h1y, h2x, h2y, fill='black', width=5)
        if h1Select == h2Select:
            messagebox.showinfo(
                title='Error',
                message='Link can not be created between a single host, it is already present.')
            h2Select = -1
        for index, obj in enumerate(hostList):
            print(obj.hasLinkWith, index)
            if index == h1Select:
                obj.hasLinkWith = h2Select
            if index == h2Select:
                obj.hasLinkWith = h1Select
        C["state"] = NORMAL
        h1Select = -1
        h2Select = -1

def MouseClickLink1(event):
    global addlinkActive
    global h1Select
    global h1x, h2x, h1y, h2y
    if addlinkActive:
        if len(hostList) < 2:
            messagebox.showinfo(
                title='Error',
                message='There are fewer than two devices or hosts')
        else:
            for index, obj in enumerate(hostList):
                if (event.x < obj.x + widthImage and event.x + widthImage > obj.x
                        and event.y < obj.y + heightImage and event.y + heightImage > obj.y):
                    h1Select = index
                    h1x = obj.x
                    h1y = obj.y
                    addlinkActive = False
                    C["state"] = DISABLED
                    c.bind('<Button-1>', MouseClickLink2)
                    print(h1Select)

def MouseClick(event):
    global addhostActive
    global addSwitchActive
    if addhostActive:
        for obj in hostList:
            # refuse to place a node on top of an existing one
            if (event.x < obj.x + widthImage and event.x + widthImage > obj.x
                    and event.y < obj.y + heightImage and event.y + heightImage > obj.y):
                return
        addhostActive = False
        hostList.append(Host(event.x, event.y, hasLinkWith=-1, isSwitch=False))
        c.create_image(event.x, event.y, anchor=CENTER, image=filename)
    if addSwitchActive:
        for obj in hostList:
            if (event.x < obj.x + widthImage and event.x + widthImage > obj.x
                    and event.y < obj.y + heightImage and event.y + heightImage > obj.y):
                return
        addSwitchActive = False
        switch = net.addSwitch()
        h = Host(event.x, event.y, hasLinkWith=-1, isSwitch=True)
        h.node = switch
        # h.node.name
        # h.node.IP()
        hostList.append(h)
        c.create_image(event.x, event.y, anchor=CENTER, image=switchImage)
    c.bind('<Button-1>', MouseClickLink1)

c.pack()
B.pack()
C.pack()
D.pack()
top.mainloop()
I’m a huge fan of Windows 10 and have in general found very few issues with it, but the one that does come up most often is the Start Menu not opening. Not a problem I’ve seen with previous versions of Windows: the Start Menu in Windows 10 can sometimes just stop working for little to no discernible reason. Thankfully, there are a few quick and easy ways to get it back up and running again. You may also find that Windows 10 apps, such as the Edge browser or the Cortana search, stop working alongside this problem.
Microsoft is aware of the issue, and presumably it will be dealt with for good in an upcoming update. In the meantime, they have released a troubleshooting tool to diagnose and fix the issue, which will fix the vast majority of systems and can be downloaded directly from the ‘try the troubleshooter’ section of Microsoft’s article.
If the tool doesn’t work, there are a couple of other things you can try. I’ve listed these in the order that I find them most effective, so if one doesn’t work try the next and so on.
Update Your Graphics Driver
I’ve recently seen a couple of computers where the Start Menu wasn’t working from a fresh installation of Windows. If it has never worked at all for you, then it’s likely the same problem. After a lengthy troubleshooting session I eventually tracked the cause down to an out-of-date graphics driver, so I’d recommend updating your video driver and trying the Microsoft troubleshooter again. The correct driver will vary depending on the graphics card or integrated graphics in the system, and new drivers are readily available from Nvidia, AMD or Intel.
When the Start Menu on my computer at home stopped working, the solution that worked for me was a slightly more complicated one. This step requires going into the Windows Registry Editor so take care not to change anything else in there or you could cause further issues.
Right-click on the Start Menu button and select ‘Run’ from the menu that appears. Type regedit and press Enter.
You may get a warning asking if you want to allow the app to make changes to your computer, choose yes.
Navigate to the following location:
You should see a single file in this folder, called (Default). Leave that as it is, right-click on the blank space around it, and choose New > DWORD (32-bit) Value, then name it UseExperience.
Double-click the newly created item and make sure that the Value data field is set to 0.
Click OK, close the Registry Editor and then restart your computer
Repair Corrupt Files
Another possible solution is to run a Windows integrity checker. Press the [Ctrl] + [Shift] + [Esc] keys together to open the Task Manager. Click File at the top and choose ‘Run new task’.
Type powershell in the box that comes up, making sure that ‘Create this task with administrator privileges’ is ticked, and press Enter.
Once the PowerShell has loaded, type sfc /scannow and press enter. It will take a while to run this scan so just leave the computer to it while it does. It may find no errors, or find errors and fix them automatically however it could also report that it found errors that it’s unable to fix.
If that happens, type DISM /Online /Cleanup-Image /RestoreHealth into PowerShell and again press enter. Make sure that you have an active internet connection at this point, as Windows will download the affected files from the Windows Update service to replace the broken ones.
New User Account
A last solution, again from Microsoft, is to create a new user account. Microsoft provide detailed steps on how to do so and also how to relink your Microsoft Account to the new user profile once you’ve done it – if this step works for you, make sure that you back up any files or data stored in the user folders (Documents, Videos, Pictures and Music are probably the key ones) before deleting the old account. Microsoft's advice is the following:
If you're signed in with your Microsoft account, remove the link to that account first by doing the following (otherwise see "Create the new administrator account"):
1. Press Windows logo key + R, type ms-settings: and then select OK. This opens Settings.
2. Select Accounts > Sign in with a local account instead.
3. Type your Microsoft account password and select Next.
4. Choose a new account name, password, and password hint, and then select Finish and sign out.
Create the new administrator account:
1. Press Windows logo key + R, type ms-settings: and then select OK.
2. Select Accounts > Family & other people (or Other people, if you’re using Windows 10 Enterprise).
3. Under Other people, select Add someone else to this PC.
4. On Windows 10 Home and Windows 10 Professional, provide a name for the user and a password, and then select Next. On Windows 10 Enterprise, select I don’t have this person’s sign-in information, and then select Add a user without a Microsoft account.
5. Set a user name, password, and password hint. Then select Next > Finish.
Next, make the new account an administrator account.
1. Under Family & other people (or Other people, if you're using Windows 10 Enterprise), choose the account you created, and then select Change account type.
2. Under Account type, select Administrator > OK.
Sign out of your account and then sign in to your new account. If you can open Cortana or the Start menu, move your personal data and files to the new account.
If the problem still isn't fixed, try deleting the old administrator account:
1. Under Other users, select the old administrator account > Remove > Delete account and data.
2. After the old account is removed, restart your device and sign in with the new account again.
If you were using a Microsoft account to sign in before, associate the Microsoft account with the new administrator account. In Settings > Accounts , select Sign in with a Microsoft account instead and type in your account info.
As of January 2017, I've had very few reports of this issue for a couple of months now. I believe that it was permanently fixed in the Anniversary Update that was released last August, so if you're still having the issue now it's most likely because that update hasn't been installed. I'd recommend updating to the most recent version of Windows 10 using Microsoft's Download Tool and selecting 'Upgrade this PC now' from the options. It will then download and install the latest version of Windows 10 from the Microsoft servers and should fix the problem in the process.
In a worst case scenario, if all else fails then a fresh installation of Windows should put it right. If you’ve never installed Windows before we do have a guide to the process that should help you through.
|
OPCFW_CODE
|
In the series of Oracle storage wait events I have covered so far, five different events are related to storage: “db File Sequential Read”, “db File Scattered Read”, “Direct Path Read”, “Direct Path Read/Write temp” and “Free Buffer Wait”. In this post, I will describe the log file sync wait event, which in many cases is caused by poor storage performance.
A user session issuing a commit command must wait until the LGWR (Log Writer) process writes the log entries associated with the user transaction to the log file on the disk. Oracle must commit the transaction’s entries to disk (because it is a persistent layer) before acknowledging the transaction commit. The log file sync wait event represents the time the session is waiting for the log buffers to be written to disk. For example, the following user transaction consists of Insert, Select and Update statements, and completes with a commit:
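A minimal sketch of such a transaction (the table and column names here are hypothetical, for illustration only):

```sql
INSERT INTO orders (order_id, customer_id, amount) VALUES (1001, 42, 100);
SELECT balance FROM accounts WHERE customer_id = 42;
UPDATE accounts SET balance = balance - 100 WHERE customer_id = 42;
COMMIT;  -- the session now waits on "log file sync" until LGWR flushes the redo
```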
The Insert and Update queries modified some data, but the new blocks were not written to disk yet. When the session issues the commit statement, it is placed on hold and the LGWR flushes the corresponding log entries for the transaction to the log file. (NOTE: The modified data blocks are still in memory and are not committed to disk yet.) When the LGWR completes the log entries flushing, the log file sync wait is over and the transaction is completed.
The reasons why Oracle uses such a technique have to do with performance, reliability and high availability. In terms of performance, the concept behind this technique is the notion that sequential writes are faster than random writes (which is definitely true for mechanical disks). Oracle writes the log file sequentially, while data blocks are written randomly. In addition, the log files’ write size varies and is affected by the transaction size. In OLTP applications with small transaction sizes, it is common to see log files write sizes as small as 512 bytes.
The following diagram illustrates an Oracle shadow process that is waiting for the LGWR process to write its entries to the disk:
Below is an example of an AWR report showing an application where the “log file sync” is the dominant wait event:
High “log file sync” waits can be observed when disk writes are slow (LGWR takes a long time to write) or when the application commit rate is very high. To identify LGWR contention, examine the “log file parallel write” background wait event (256ms latency in the example above, with 12,465 calls).
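One way to examine both events systemwide (a standard Oracle dictionary query, not part of the AWR example above) is:

```sql
SELECT event, total_waits, time_waited_micro
FROM   v$system_event
WHERE  event IN ('log file sync', 'log file parallel write');
```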
For many SSD solutions (other than the Kaminario K2), the log file writes are a challenging operation for several reasons:
- The log file entries are not aligned to 4Kb I/O. For many SSD vendors, this significantly affects the performance of writes.
- Many SSD solutions utilize RAID 5 or RAID 4 for high availability. Log writes with very small I/Os affect their performance and endurance.
  a. Each write will actually translate to at least two writes (data + parity), which can cause an endurance problem for very active OLTP environments. This will also affect the performance of the write operations, as each write is doubled.
  b. In RAID 4, the parity drive will probably observe many more writes than the data drives. This is an endurance issue.
For these reasons, several SSD vendors recommend not placing the transaction logs on their arrays. Kaminario K2 does not have these limitations and is an excellent storage platform for even the most demanding redo files. K2 has no performance penalty on non-4Kb I/Os and does not use a RAID 5 configuration. In addition, K2 is the only SSD array offering a real SSD hybrid solution by allowing placement of the transaction logs on DRAM LUs while the data tablespaces are kept on Flash media.
|
OPCFW_CODE
|
import { Directive, ElementRef, Input, OnInit } from '@angular/core';
@Directive({
selector: '[row]'
})
export class RowDirective implements OnInit {
@Input() position!: string;
constructor(private el: ElementRef) { }
// Convert a space-separated list of size tokens (e.g. "col-4 col-6 col-8 col-12")
// into breakpoint-suffixed class names: "col-4-xs col-6-sm col-8-md col-12-lg".
public toCssRow(type: string): string {
let rows: string[] = type ? type.split(' ') : [];
let classes: string = '';
if (rows[0]) classes += `${rows[0]}-xs `;
if (rows[1]) classes += `${rows[1]}-sm `;
if (rows[2]) classes += `${rows[2]}-md `;
if (rows[3]) classes += `${rows[3]}-lg`;
return classes;
}
ngOnInit() {
let classes: string[] = this.toCssRow(this.position).split(' ');
let row: string = 'row';
// Always add 'row'; fall back to 'row' for any breakpoint token that was not supplied.
this.el.nativeElement.classList.add('row', classes[0] || row, classes[1] || row, classes[2] || row, classes[3] || row);
}
}
}
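A hypothetical template usage, assuming the application's stylesheet defines classes such as col-4-xs (the directive only assembles the names; the tokens shown are illustrative):

```html
<!-- position holds up to four size tokens: xs, sm, md, lg breakpoints in order -->
<div row position="col-4 col-6 col-8 col-12">...</div>
```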
|
STACK_EDU
|
To remove all lines containing a particular string in the vi or vim text editors, you can use the g command to globally search for the specified string and then, by putting a "d" at the end of the command line, specify that you want all lines containing the specified string deleted. E.g.,
If I wanted to remove all lines containing the string "dog", I could use the command below.
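The standard vi global-delete command for this (reconstructed here, since the command was lost from the page) is:

```vim
:g/dog/d
```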
That command would also remove any lines containing "dogs", "dogged", etc.
If I just wanted to remove lines containing the word "dog", I could use the command below.
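Using vi's word-boundary atoms \< and \>, the command would be:

```vim
:g/\<dog\>/d
```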
You can, of course, specify the pattern on which you wish to search using regular expressions. E.g., if I wanted to remove any lines containing either "dog" or "hog", I could use the command below.
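Using a character class, the command would be:

```vim
:g/[dh]og/d
```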
By putting the letters "d" and "h" within brackets, I indicate to vi that it should remove any line that has either a "d" or an "h" followed by "og".
If I wanted to remove all comment lines from a script, I could search for any lines beginning with the pound/hash/number sign character. E.g.:
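Anchoring the pattern to the start of the line with a caret, the command would be:

```vim
:g/^#/d
```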
The caret indicates that what follows must be at the beginning of a line (the dollar sign indicates that what precedes the dollar sign must occur at the end of the line).
If you want to delete any lines containing one of several specified words, you can separate the words with a vertical bar, aka the pipe character (it's the character you get when you hit the shift key and the backslash key simultaneously, at least on English language keyboards). The vertical bar signifies that a "logical or" function should be performed. E.g., if I wanted to delete any lines containing the words "cat" or "dog", I could use the command shown below.
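With vi's escaped alternation operator, the command would be:

```vim
:g/cat\|dog/d
```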
I need to precede the vertical bar with an escape character, i.e., a backslash, to "escape" the special meaning of the vertical bar, which is often used to "pipe" the output of one command into another.
If I wanted to perform a "logical and" function, e.g., to delete any lines containing both the words "cat" and "dog", I could use the command below.
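Covering both orderings of the two words on a line, the command would be:

```vim
:g/dog.*cat\|cat.*dog/d
```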
The dot character indicates any character can occur in the specified position. The asterisk is a quantifier indicating zero or more occurrences of the preceding element. So ".*" means zero or more characters can occur between "dog" and "cat". But I also have to put in the vertical bar followed by cat.*dog to address cases where both words occur on the line but "cat" precedes "dog" rather than following "dog".
If I wanted to delete all lines except those that contained a particular string, I could put an exclamation mark after the g. The exclamation mark signifies a "logical NOT" operation, i.e., negation. E.g., to delete all lines not containing "dog", I could use the command below.
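The negated command would be (:v/dog/d is an equivalent shorthand in vi):

```vim
:g!/dog/d
```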
If, instead, I wanted to remove all lines except those containing the words "cat" or "dog", I could use the command below:
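Combining negation with alternation, the command would be:

```vim
:g!/cat\|dog/d
```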
You can specify a range of lines over which the command should operate by preceding the command with the line range. E.g., if I only wanted to delete lines containing the word "dog" for the first five lines, I could use the command below.
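With a line range prefix, the command would be:

```vim
:1,5g/dog/d
```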
Or to apply the command to lines 100 to the last line, I could use the command below, since the dollar sign indicates the last line.
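That command would be:

```vim
:100,$g/dog/d
```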
|
OPCFW_CODE
|
Browser Support for Signing and Submitting XML Forms
Milton M. Anderson
March 29, 1999
HTML Forms are one of the more widely used features of HTML. In reviewing XML, XSL and related standards, there doesn't seem to be much specific XML support for forms. The XFDL (Extensible Forms Definition Language) proposal creates a specific XML-based language for defining forms, rather than providing generic XML form and signature features. Thus, it requires an XFDL-aware client application or plug-in at the browser to present, fill, sign and submit the XFDL forms.
This memo describes a possible approach to extending XML support for forms and for signing forms.
Existing XML Support for Forms
The incoming XML document does not have any formatting information, and in particular, it does not have the elements such as <FORM>, <INPUT>, <BUTTON>, <SELECT>, <TEXTAREA>, <FIELDSET> and <LABEL> that define the scope of an HTML form and the data entry widgets it contains. Neither does it define the special purpose buttons that "submit" or "reset" the values entered in the form.
However, the XSL (or CSS -Cascading Style Sheet) can contain those elements, and XSL can inset the HTML form "flow objects" at the designated places in the HTML output, in conjunction with element attributes and values from the original XML. Thus, the output of the style sheet processor is a standard HTML 4.0 form (along with any other non-form content from the XML document and the style sheet, e.g. titles, headings, XSL formatted paragraphs containing text from the XML document, etc.).
The HTML Form will be displayed by the browser, and the user can enter information. As data is entered or selections are made, the flow objects are updated with the current values.
Pressing the submit button will cause the normal posting of the form. The field names and current values are extracted from the form, formatted as a set of field-value pairs and POSTed using the URL in the action parameter of the <FORM> element (which could be something like http://www.company.com/cgi/process or mailto://email@example.com ).
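As a concrete illustration, the style-sheet output could be an ordinary HTML 4.0 form (the field name here is hypothetical; the action URL is the example one from the memo):

```html
<FORM action="http://www.company.com/cgi/process" method="POST">
  <FIELDSET>
    <LABEL for="amount">Amount:</LABEL>
    <INPUT type="text" id="amount" name="amount">
  </FIELDSET>
  <BUTTON type="submit">Submit</BUTTON>
</FORM>
```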
Note that this:
Signing and Submitting XML Documents
Figure 2 shows a possible approach to signing and submitting XML documents. The first part, which produces an HTML Form, proceeds as before with the exception that the HTML <BUTTON> now has two new standard types: "Sign" and "Verify". The sign button also contains a list of Xpointers which identify the elements in the input XML document and the XSL style sheet that are to be signed. These are hashed independently, and then their hashes are signed using the FSML approach. The verify button contains an Xpointer to the <signature> element in the XML document which is to be verified.
Render, display and input proceed as before, except that the input information is used to update the XML DOM, instead of being put back in the HTML form. Display also should "gray" the previously signed parts of the form, and disallow data entry (this might be defined in the XSL style sheet used for processing this document by this person).
When the sign button is pressed, the "on sign" process creates a new <signature> element in the DOM, using the Xpointers in the XSL style sheet to locate the elements of the DOM to be signed. Xpointers may also specify that parts of the style sheet be signed as well, e.g. the style sheet may contain standard disclosure terms and conditions that are not sent with every XML document, but which must be signed to acknowledge agreement with them. Pressing the sign button would also spawn the additional displays to support the signing ceremony, including entry of PINs to unlock smart cards, etc.
Pressing the submit button should now result in creation of a new XML document from the signed XML DOM, and sending of the DOM back to the server or mailbox. It would be nice to have the addressing be more flexible than simply sending to a URI contained in the XSL style sheet. The XML Schema is used to validate the XML document before it is sent. To reduce data transmission, parts of the XML document and signed parts of the XSL style sheet may be sent by reference, rather than by value.
Support for forms by XML is minimal, although form entry of data and subsequent signing and submission of the data in an XML document would seem to be a general and very useful feature.
This note has described one approach to added XML support for forms, how to sign the form and its content, and how to submit the resulting XML document including the signatures.
|
OPCFW_CODE
|
// Object.observe is dead (long live ES6 proxies):
// https://www.bitovi.com/blog/long-live-es6-proxies
// https://esdiscuss.org/topic/an-update-on-object-observe
/*
For reference because I always forget:
interface IKeyValueDictionary {
[key: string]: any;
}
*/
import { IX } from "./Interacx/IX"
import { IXBinder } from "./Interacx/IXBinder"
import { IXEvent } from "./Interacx/IXEvent"
import { IXSelector } from "./Interacx/IXSelector"
class InputForm {
firstName: string = "";
lastName: string = "";
// Late binding with UpdateProxy after first time initialization.
x: number;
y: number;
// arrays must be initialized.
// list: string[] = [];
// Event handlers:
onFirstNameKeyUp = new IXEvent();
onConvertFirstName = s => s.toUpperCase();
onLastNameChanged = new IXEvent();
onXChanged = new IXEvent();
onYChanged = new IXEvent();
// Converters, so 1 + 2 != '12'
onConvertX = x => Number(x);
onConvertY = y => Number(y);
Add = () => this.x + this.y;
}
class HoverExample {
mySpan = {
attr: { title: "" }
};
onMySpanHover = new IXEvent();
}
class VisibilityExample {
seen = {
attr: { visible: true }
};
onShowClicked = new IXEvent().Add((_, p) => p.seen.attr.visible = true);
onHideClicked = new IXEvent().Add((_, p) => p.seen.attr.visible = false);
}
class ReverseExample {
message = "Hello From Interacx!";
onReverseMessageClicked = new IXEvent().Add((_, p: ReverseExample) => p.message = p.message.split('').reverse().join(''));
}
class BidirectionalExample {
input2: string = "";
input3: string = "";
message2 = new IXBinder({ bindFrom: IX.nameof(() => this.input2) });
message3 = new IXBinder({ bindFrom: "input2" }) // Seems like using a string here is reasonable.
.Add({ bindFrom: IX.nameof(() => this.input3), op: v=>v.split('').reverse().join('') });
// onInput2KeyUp = new IXEvent().Add((v, p: BidirectionalExample) => p.message2 = v);
}
class ListExample {
someList: string[] = ["Learn Javascript", "Fizbin", "Wear a mask!"];
}
class OutputForm {
outFirstName: string;
outLastName: string;
sum: number;
}
//class CheckboxExample {
// checkbox: boolean = false;
// ckLabel: string = "Unchecked";
// onCheckboxClicked = new IXEvent().Add((_, p: CheckboxExample) => p.ckLabel = p.checkbox ? "Checked" : "Unchecked");
//}
class CheckboxExample {
checkbox: boolean = false;
ckLabel = new IXBinder({ bindFrom: IX.nameof(() => this.checkbox) });
// or:
// ckLabel = new IXBinder({ bindFrom: "checkbox" });
}
class CheckboxListExample {
jane: boolean = false;
mary: boolean = false;
grace: boolean = false;
ckNames = IXBinder.AsArray(items => items.join(", "))
.Add({ bindFrom: "jane", attribute: "value" })
.Add({ bindFrom: "mary", attribute: "value" })
.Add({ bindFrom: "grace", attribute: "value" });
}
class RadioExample {
marc: boolean = false;
chris: boolean = false;
rbPicked = new IXBinder({ bindFrom: "marc", attribute: "value" })
.Add({ bindFrom: "chris", attribute: "value" });
}
class ComboboxExample {
selector = new IXSelector();
//selector = new IXSelector().Add({ disabled: true, text: "Please select one" })
// .Add({ value: 1, text: "A" })
// .Add({ value: 2, text: "B" })
// .Add({ value: 3, text: "C" });
selection: string = "";
onSelectorChanged = new IXEvent().Add((_, p) => p.selection = `Selected: ${p.selector.text} with value ${p.selector.value}`);
}
class ComboboxInitializationExample {
selector2 = new IXSelector().Add({ selected:true, disabled: true, text: "Please select one" })
.Add({ value: 12, text: "AAA" })
.Add({ value: 23, text: "BBB" })
.Add({ value: 34, text: "CCC" });
selection2: string = "";
onSelector2Changed = new IXEvent().Add((_, p) => p.selection2 = `Selected: ${p.selector2.text} with value ${p.selector2.value}`);
}
class SomeClass { }
class NameContainer {
name: string;
}
export class AppMain {
public AlertChangedValue(obj, oldVal, newVal) {
alert(`was: ${oldVal} new: ${newVal} - ${obj.firstName}`);
}
private myProxyHandler = {
get: (obj, prop) => {
console.log(`get ${prop}`);
return obj[prop];
},
set: (obj, prop, val) => {
console.log(`set ${prop} to ${val}`);
obj[prop] = val;
// Return true to accept change.
return true;
}
}
private valueProxy = {
get: (obj, prop) => {
console.log(`get ${prop}`);
let el = document.getElementById(prop) as HTMLInputElement;
let val = el.value;
obj[prop] = val;
return obj[prop];
},
set: (obj, prop, val) => {
console.log(`set ${prop} to ${val}`);
let el = document.getElementById(prop) as HTMLInputElement;
el.value = val;
obj[prop] = val;
// Return true to accept change.
return true;
}
}
public run() {
let proxy = new Proxy({}, this.myProxyHandler);
proxy.foo = 1;
let foo = proxy.foo;
console.log(`foo = ${foo}`);
let nc = new Proxy(new NameContainer(), this.valueProxy);
nc.name = "Hello World!";
// Simulate the user having changed the input box:
let el = document.getElementById("name") as HTMLInputElement;
el.value = "fizbin";
let newName = nc.name;
console.log(`The new name is: ${newName}`);
let a = 1;
let b = "foo";
let c = true;
let d = [];
let e = new SomeClass();
[a, b, c, d, e].forEach(q => console.log(q.constructor.name));
let listForm = IX.CreateProxy(new ListExample());
// listForm.someList[1] = "Learn IX!";
// let listForm = IX.CreateProxy(new ListExample());
let items = ["Learn Javascript", "Learn IX", "Wear a mask!"];
listForm.someList = items;
IX.CreateProxy(new BidirectionalExample());
IX.CreateProxy(new ReverseExample());
IX.CreateProxy(new CheckboxExample());
let rbExample = IX.CreateProxy(new RadioExample());
rbExample.chris = true;
let ckListExample = IX.CreateProxy(new CheckboxListExample());
ckListExample.jane = true;
ckListExample.mary = true;
IX.CreateProxy(new ComboboxExample());
let cb = IX.CreateProxy(new ComboboxInitializationExample());
cb.selector2.value = 34;
cb.selector2.text = "AAA";
cb.selector2.options[2] = { text: "bbb", value: 999 };
// cb.selector2.options.pop();
// cb.selector2.options.push({ text: "DDD", value: 45 });
let hform = IX.CreateProxy(new HoverExample());
hform
.onMySpanHover
.Add(() =>
hform.mySpan.attr.title = `You loaded this page on ${new Date().toLocaleString()}`);
IX.CreateProxy(new VisibilityExample());
let inputForm = IX.CreateProxy(new InputForm());
let form = IX.CreateNullProxy(); // No associated view model.
form.app = "Hello Interacx!";
// Post wire-up
// Notice UI elements get set immediately.
// TODO: Fire any onConvert and onChanged events!
inputForm.x = 1;
inputForm.y = 2;
// This does a post-wire-up of the change event handler for x and y now that they exist.
IX.UpdateProxy(inputForm);
let outputForm = IX.CreateProxy(new OutputForm());
inputForm.onFirstNameKeyUp.Add(newVal => outputForm.outFirstName = newVal);
inputForm.onLastNameChanged.Add(newVal => outputForm.outLastName = newVal);
inputForm.onXChanged.Add(() => outputForm.sum = inputForm.Add());
inputForm.onYChanged.Add(() => outputForm.sum = inputForm.Add());
inputForm.firstName = "Marc";
inputForm.lastName = "Clifton";
//inputForm.list.push("abc");
//inputForm.list.push("def");
//inputForm.list[1] = "DEF";
//inputForm.list.pop();
}
}
|
STACK_EDU
|
It's time again for the Cloud Foundry Summit Europe, a hugely successful gathering of Cloud Foundry and PaaS focused companies eager to expand their application development knowledge on the Cloud Foundry platform. Cloud Foundry allows developers the freedom to abstract away infrastructure and focus on the task: mission-critical applications.
As an IBM Developer Advocate, I get to come up with cool demos for events and conferences that showcase the technology IBM Cloud has to offer developers. For a community event, I built a TJBot photo booth, using the famous TJBot to capture photos using a Raspberry Pi camera and send them via Twilio. If you're feeling brave, you can go right to the code on GitHub. Otherwise, read on to discover how I turned this powerful little robot into a photo booth!
At the upcoming Serverless Conference, the OpenWhisk team will run a workshop to teach basic concepts of serverless computing in general and the Apache OpenWhisk platform in particular.
In a recent study on DevOps-driven innovation, Forrester Research reminds us that "DevOps is about people first." Implicitly, at whatever velocity, a DevOps program that fails to innovate on customer experience is like running in place.
When a company spots a lucrative gap in the market or comes up with an innovative new business model, it can trigger sudden, exponential growth. However, many growing enterprises fail to sustain their initial surge in momentum—or may even collapse dramatically.
There are several cases where it is useful to be able to stand up additional WebSphere capacity for a limited time period, without needing to worry about data center server capacity and software licenses. Typical examples include batch type workloads, test of new code at infrequent intervals, maybe as part of an overall application modernization strategy.
Imagine you’re interviewing a new job applicant who graduated top of their class and has a stellar résumé. They know everything there is to know about the job, and have the skills that your business needs. There’s just one catch: from the moment they join your team, they’ve vowed never to learn anything new again. You probably wouldn’t make that hire, because you know that lifelong learning is vital if someone is going to add long-term value to your team. Yet when we turn to the field of machine learning, we see companies making a similar mistake all the time. Data scientists work hard to develop, train and test new machine learning models and neural networks. However, once the models get deployed, they don’t learn anything new. After a few weeks or months, they become static and stale, and their usefulness as a predictive tool deteriorates.
We are announcing the retirement of the Watson Document Conversion (DCS) and Watson Retrieve and Rank (R&R) services. Current users can take advantage of the evolved capabilities in these services by switching to Watson Discovery.
|
OPCFW_CODE
|
The best side of RotomakerAcademyVFX
This is a standard security check that we use to prevent spammers from creating fake accounts and spamming customers.
About us: Manjeeram is a unique Bharatanatyam dance academy that brings you the genuine essence of this excellent art form, which is to enhance human potential. It aims to empower the mind, body and soul to optimise output in every area of daily life.
“This film required every ounce of expertise and technology that MPC and Technicolor had to offer … and the results are up there on the screen for all to see.”
For those who cannot commit to such a long period, Rotomaker Academy has several short-term courses that take from two to eight months to finish. While going through these courses, the students work with many software packages.
Moreover, Rotomaker Academy conducts numerous curriculum-based activities that shape you to be successful in visual effects.
“The jungle was very hard, and also the animals – everyone knows animals, so if anything felt artificial, the magic trick wouldn’t work,” said Valdez.
The school offers junior diploma, senior diploma, BA and MA training in the respective art fields. Sanchalana School of Dance is located in Saket, Hyderabad (main branch) and has 15 other branches in and around Hyderabad. The main aim of the school is to create awareness in people's minds that all arts can be learnt in an academic way to become a professional artist.
The improved user interface plus high-resolution Retina display support significantly enhances the end-user experience.
Filled with new modern features that facilities with advanced VFX production pipelines will love, mocha Pro 4 presents groundbreaking native Stereo 3D (S3D) workflow capabilities and advanced support for Python scripting.
Advanced support for native stereo 3D workflows and Python scripting simplifies tracking and roto on complex VFX projects.
Facilities can now deeply integrate mocha Pro 4 into their unique VFX production workflows thanks to newly added support for Python scripting. “With Python scripting, VFX houses can better customize their workflows,” comments Shain. “For example, a facility can use Python scripting to integrate mocha Pro 4 with an asset management system such as Shotgun, Hiero or Era, seamlessly synchronizing visual effects data and metadata across applications and projects.”
About us: Our institute Bodhya vocational training centre is an extension of five-year-long communication skills training provided to children with special needs on a one-on-one basis. It has now taken up the additional responsibility of training adolescents and adults in the group for their independent survival skills. 1. Our centre offers individualized, tailored programmes for the students according to their abilities and needs.
This gives Rotomaker Academy a unique qualitative edge over other institutes, which deliver ineffective training diluted across mushrooming franchisees set up all over the countryside.
The only real live action in the film is Mowgli (Neel Sethi) plus the tiny bit of set on which he stood or climbed. The remainder of the film — all of the animals along with the photoreal jungle itself — was computer generated.
Rotomaker Academy has been ensuring for many years that education and learning reach all the desirous and aspiring multitude.
Thank you for providing more details about your requirement. You will hear back soon from the trainer.
|
OPCFW_CODE
|
Pytest ordering of test suites
I've a set of test files (.py files) for different UI tests.
I want to run these test files using pytest in a specific order. I used the below command
python -m pytest -vv -s --capture=tee-sys --html=report.html --self-contained-html ./Tests/test_transTypes.py ./Tests/test_agentBank.py ./Tests/test_bankacct.py
The pytest execution is triggered from an AWS Batch job.
When the test execution happens, pytest does not execute the test files in the order specified in the above command.
Instead it first runs test_agentBank.py, followed by test_bankacct.py, then test_transTypes.py.
Each of these Python files contains a bunch of test functions.
I also tried decorating the test function class such as @pytest.mark.run(order=1) in the first python file(test_transTypes.py), @pytest.mark.run(order=2) in the 2nd python file(test_agentBank.py) etc.
This seems to run the test in the order, but at the end I get a warning
PytestUnknownMarkWarning: Unknown pytest.mark.run - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
@pytest.mark.run(order=1)
What is the correct way of running tests in a specific order in pytest?
Each of my "test_" python files are the ones I need to run using pytest.
Any help much appreciated.
Why do you care about the order? Generally speaking, tests should be independent.
@pL3b is right. You could refer to sample code at https://stackoverflow.com/a/77755875/294577
Even if tests are (as they should be) independent, imagine using xdist to parallelize the test run: if a few tests take significantly longer than all the others, then unless they run first, your parallelization may not yield as much time saving on average (not with the current options that xdist sports). Ordering tests by a-priori knowledge of their average runtime can bring great gains in aggregate elapsed time under a parallel run.
To specify the order in which tests are run in pytest, you can use the pytest-order plugin. This plugin allows you to customize the order in which your tests are run by providing the order marker, which has attributes that define when your tests should run in relation to each other. You can use absolute attributes (e.g., first, second-to-last) or relative attributes (e.g., run this test before this other test) to specify the order of your tests. Here's an example:
import pytest

@pytest.mark.order(2)
def test_foo():
    assert True

@pytest.mark.order(1)
def test_bar():
    assert True
What is the correct way of running tests in a specific order in pytest? Each of my "test_" python files are the ones I need to run using pytest.
The answer to your question is to use pytest hooks. You can read about hooks here.
Basically you need to declare a method with a specific signature in your conftest.py file. The list of hooks that can be defined is stored here.
In your specific case you need a hook named pytest_collection_modifyitems. You can find its description here. You will be able to change the order of elements in the items list, which is actually the list of all collected tests.
So in your conftest.py it will look like this:
def pytest_collection_modifyitems(session, config, items):
    # some code that changes the order of elements in the items list
Also keep in mind that if you want to completely redefine items list (not just changing list that is being stored in items variable) you should use following syntax: items[:] = ...
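For instance, a conftest.py hook that reorders the collected tests by an explicit module list might look like this (the module names are taken from the question; the sorting strategy itself is just one possible sketch):

```python
# conftest.py -- sketch of ordering collected tests by an explicit module list
DESIRED_ORDER = ["test_transTypes", "test_agentBank", "test_bankacct"]

def pytest_collection_modifyitems(session, config, items):
    def sort_key(item):
        # item.module is the test module the collected test belongs to
        name = item.module.__name__
        try:
            return DESIRED_ORDER.index(name)
        except ValueError:
            return len(DESIRED_ORDER)  # modules not listed run last
    # items[:] assigns in place, so pytest sees the new order
    items[:] = sorted(items, key=sort_key)
```

Since sorted() is stable, tests within each module keep their original relative order.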
Try renaming them as test_1_transTypes.py, test_2_agentBank.py and test_3_bankacct.py
Thanks for your suggestion. I did rename the files and ran the tests in this order: test_8_pmt.py, test_9_pmt.py, test_10_pmt.py. When I checked in the PyCharm terminal it ran in the same order I specified. But the html report is not in the same order: it shows the results of test_9_pmt.py first, followed by test_8_pmt.py, and finally test_10_pmt.py. I even tried sorting using the "Test" column but no luck. Is there a way to see the report for the tests in exactly the same order that they ran? Thanks
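As a side note on the renaming approach: plain string sorting explains part of the surprise with multi-digit numbers, since it compares character by character (the file names below mirror the ones in the comment above):

```python
names = ["test_8_pmt.py", "test_9_pmt.py", "test_10_pmt.py"]
# String sort compares character by character, so "1" < "8" puts test_10 first
print(sorted(names))  # ['test_10_pmt.py', 'test_8_pmt.py', 'test_9_pmt.py']

# Zero-padding the numbers restores the intended order under a string sort
padded = ["test_08_pmt.py", "test_09_pmt.py", "test_10_pmt.py"]
print(sorted(padded))  # ['test_08_pmt.py', 'test_09_pmt.py', 'test_10_pmt.py']
```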
|
STACK_EXCHANGE
|
Following up from earlier experiments in which MERIT looked at pollution of IPv4 prefixes, Manish Karir and his colleagues now did an experiment with various IPv6 prefixes. The results can be found in this article.
The traditional view of Internet pollution has been that it consisted primarily of worm scanning and DDoS backscatter. However, some of our recent analysis of IPv4 darknets has shown that Internet pollution is much broader in scope.
It can consist of traffic that results from mis-configurations, topology mapping scans, software code bugs, bad default settings, routing instability and even Internet censorship. Based on this background, we sought to better understand Internet pollution in IPv6. There had been some prior work by Sandia Labs and APNIC in this area. Our goal was to perform a significantly larger study, which would cover multiple RIR regions and the largest amount of IPv6 space we could advertise, and last as long as possible - long enough to be able to detect any emerging trends.
As networks attempt to enable an increasing number of IPv6 applications and services, it is highly likely that inexperience, software configuration differences, or even software or hardware bugs will result in errors that can lead to Internet pollution. Observing such undesired traffic can provide valuable insight to help navigate the rocky start of the new technology. In addition, by observing IPv6 background radiation traffic, we can watch for the emergence of malicious activity, such as scanning and worms, via the new protocol. Identifying these issues early in the adoption process minimizes the cost of fixing them and can provide a best-practice template for future IPv6 network administrators.
Figure 1: Experiment Configuration
The IPv6 darknet experiment is set up in a similar manner to previous experiments with IPv4 address space. We first obtain Letters of Authority (LoA) from the Regional Internet Registries (RIRs) for the prefixes to be studied. Next we work with our upstream Internet providers to make sure they accept our announcements of these prefixes. All the traffic for these prefixes that flows towards our routers is collected and archived for analysis. The significant difference in this scenario is that, while in IPv4 we were able to announce address space prior to any sub-allocations, for the IPv6 experiment we announce a covering /12 prefix for each RIR, which is the largest single block of address space that each of them has been allocated. We announced the following prefixes:
- AfriNIC (2c00::/12)
- APNIC (2400::/12)
- ARIN (2600::/12)
- LACNIC (2800::/12)
- RIPE NCC (2a00::/12, modified to 2a08::/13+2a04::/14 after 2 days)
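To illustrate the experiment's scope, Python's standard ipaddress module can check which of the announced covering prefixes a given IPv6 address falls under (the prefix list is taken from above; the lookup helper and sample addresses are just a sketch):

```python
import ipaddress

# Covering /12 prefixes announced in the experiment (from the list above)
COVERING = {
    "AfriNIC": ipaddress.ip_network("2c00::/12"),
    "APNIC": ipaddress.ip_network("2400::/12"),
    "ARIN": ipaddress.ip_network("2600::/12"),
    "LACNIC": ipaddress.ip_network("2800::/12"),
    "RIPE NCC": ipaddress.ip_network("2a00::/12"),
}

def covering_rir(addr: str):
    """Return the RIR whose announced covering prefix contains addr, or None."""
    a = ipaddress.ip_address(addr)
    for rir, net in COVERING.items():
        if a in net:
            return rir
    return None

print(covering_rir("2600:1f14::1"))  # ARIN
```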
Data Analysis and Results
Figure 2: Traffic Volumes
Figure 3: Protocol Distribution
Figures 2 and 3 above show the traffic rates we observed for each RIR dataset in a one-week sample of our collected data. The traffic varies considerably across RIRs and is highest for the ARIN and LACNIC regions. In particular, we see significant spikes in the ARIN dataset that peak at 1 Mbps. The composition of the traffic also varies greatly from one region to another. In the one-week sample we show here, ICMP dominates in the LACNIC region, while UDP appears to dominate in the ARIN and APNIC regions. TCP traffic is in all cases a small portion of the overall traffic. On the other hand, an earlier three-month data sample that we analyzed shows TCP dominating the ARIN data. We intend to report a longitudinal view of these data in an upcoming publication.
While we find no evidence of worm activity we are able to detect some limited amounts of scanning that appears to be directed at limited subsets of particular network prefixes. Some of this is perhaps related to the topology discovery/mapping performed by large CDN operations.
In one of several interesting case studies we examined, we were able to observe link-local source addresses in our dataset indicating the lack of proper filtering at network edges to prevent such traffic from leaking into the Internet. A large portion of the observed traffic can be characterized as DNS requests and replies. Interestingly, we also observe BGP, NTP and SMTP traffic in our datasets.
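Spotting leaked link-local sources of the kind described above can be sketched with the standard ipaddress module (a minimal illustration, not the authors' analysis pipeline; the sample addresses are made up):

```python
import ipaddress

def is_leaked_link_local(src: str) -> bool:
    """True if src is an IPv6 link-local source (fe80::/10), which
    proper edge filtering should keep off the public Internet."""
    addr = ipaddress.ip_address(src)
    return addr.version == 6 and addr.is_link_local

# Classify source addresses seen in a hypothetical capture
sources = ["fe80::1", "2001:db8::5", "fe80::abcd:1234"]
leaked = [s for s in sources if is_leaked_link_local(s)]
print(leaked)  # the two fe80:: sources
```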
We have only begun the process of combing through the data to get a better understanding of IPv6 background radiation. So far our high-level analysis has revealed that, like IPv4 Internet pollution, IPv6 Internet pollution is highly unpredictable and varied. Over 90% of the pollution is directed at fewer than 100 specific destinations, and over 90% of IPv6 Internet pollution is sourced from fewer than 1,000 unique sources. We find interesting instances of pollution that can be characterized as DNS, BGP, NTP, SMTP, HTTP and ICMP traffic.
Perhaps more interestingly, we find evidence that shows the highly unstable nature of the average prefix in the IPv6 routing table. We are also able to show that a significant amount of pollution traffic has a high degree of locality of reference to existing prefixes in the routing table. We also believe that using a covering prefix to detect Internet pollution is an important mechanism for detecting potential mis-configurations, instability and other issues as an increasing number of networks introduce IPv6 based services.
We are continuing to monitor some of the /12 prefixes with the goal of observing long term trends. In the future we would like to particularly consider the following additional areas to enhance our research:
- Improved co-ordination with various network operations groups – Originally we did not widely publicize our experiment, as we did not want to risk contamination of our data samples through intentional tampering by malicious actors. However, we have not observed any such activity, and we therefore believe it is possible to relax this concern and better co-ordinate future activities – particularly the beaconing described below – on public forums.
- Summary reports based on our analysis – It is possible to generate summary reports in some cases where specific networks are seen as the sources of traffic of various types in our pollution dataset. This feedback can help the network operations research community identify issues.
- IPv6 routing beacons – We would like to particularly focus our future research activities on implementing IPv6 routing beacons which would give us a better opportunity to understand the dynamics of traffic in flight during periods of routing instability.
Internet Pollution – Part 2, Scot Walls and Manish Karir, NANOG51, Miami, February 2011, http://www.merit.edu/research/pdf/2011/Internet-Pollution-Part2.pdf
1/8, Manish Karir, Eric Wustrow, George Michaelson, Geoff Huston, Michael Bailey, Farnam Jahanian, NANOG 49, San Francisco, June 2010, http://www.merit.edu/research/pdf/2010/karir_1slash8.pdf
IPv6 Background Radiation, Geoff Huston, First International Workshop on Darkspace and Unsolicited Traffic Analysis (DUST), 2011
Comments are disabled on articles published more than a year ago. If you'd like to inform us of any issues, please reach out to us via the contact form here.
|
OPCFW_CODE
|
Introduction
One of the main targets in air quality modeling is to know the origin of air pollution. Ideally, the contribution of individual emission sources is known at a certain location, for example locations within a large urban or industrialized area. The different sources can be, for example, road traffic versus industry, emissions from abroad versus emissions from inside the country of interest, and many others. This information on the origin of air pollution can be used for policy measures to achieve an effective reduction of air pollution.
Background
Until now, the individual contributions of sources to pollutant concentrations have mostly been calculated by so-called scenario runs. In such a scenario run, the emissions of only one (or a few) sources are turned off in the model. The difference between the original run and the scenario run is used to determine the impact of the emission source. This means that for each emission source of interest, the model must be run separately. Another issue with the scenario-run approach is that for chemically active tracers, the non-linear chemical regime is influenced. This implies that the difference between the base run and the scenario run is not equal to the contribution of the emission source(s) that were switched off.
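A toy numerical illustration (the saturating function and numbers are entirely made up, not LOTOS-EUROS chemistry) of why the scenario-run difference need not equal a source's contribution under non-linear chemistry:

```python
def concentration(total_emission):
    # Made-up non-linear "chemistry": concentration saturates with emissions
    return total_emission / (1.0 + 0.5 * total_emission)

e_traffic, e_industry = 2.0, 3.0
base = concentration(e_traffic + e_industry)

# Scenario-run estimate: switch traffic emissions off and take the difference
scenario_diff = base - concentration(e_industry)

# Emission-proportional share of the same base concentration
# (a stand-in for a labeled contribution; the real labeling method tracks
# labels through every model process, not a simple proportion)
labeled_share = base * e_traffic / (e_traffic + e_industry)

# With linear chemistry the two would coincide; here they clearly do not
print(round(scenario_diff, 3), round(labeled_share, 3))
```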
Method
Within the chemistry transport model LOTOS-EUROS, a labeling method has been developed to enable better source apportionment studies. In this labeling method, the emissions of several sources of interest can be labeled. During all the model processes, the labels are tracked, so that the resulting concentrations can be coupled to the originating emissions. For the non-linear processes, the labeling method couples concentrations to their originating emissions more accurately.
Results
The labeling method has been used in several projects. For example, it was applied in the BOP campaign, a study of the origin of modeled PM concentrations in the Netherlands. In the top figure, one of the intermediate results is shown: the concentrations of nitrate aerosol (left) and total PM10 (right) originating from road transport. Note that the city hotspots are visible in this picture. In the bottom figure, the contributions of different sources to PM10 in the Netherlands are given with respect to the PM10 concentrations. In the left panel one can clearly see that Dutch sources contribute only 20-25% of the total concentration, while foreign sources contribute about 45-50%. The contribution of the boundary conditions can be attributed to windblown dust from the Sahara and to long-lived tracers that come from outside the European domain. In the right panel one can see that the contribution of agriculture and road transport grows as concentrations get higher. On the other hand, the contribution of natural sources is large when concentrations are low.
Example of modelled nitrate aerosol from Dutch road transport (left), and modelled PM10 from Dutch road transport (right) in the Netherlands
Source contributions per concentration bin in the Netherlands. Sources divided in Dutch and foreign (left) and divided per sector (right).
References
Kranenburg, R., Hendriks, C., Schaap, M., and Segers, A.: Source apportionment using LOTOS-EUROS: module description and evaluation, Geosci. Model Dev., 6, 721-733, 2013, doi:10.5194/gmd-6-721-2013
|
OPCFW_CODE
|
Release Date: Oct. 20, 2012
sciAudio is here! Check out the "LockerGnome Quest Redux" demo game to see it in action!
Announcement here: http://sciprogramming.com/community/index.php?topic=634.0
- Playback of WAV and MP3 files
- Unlimited number of sounds playing simultaneously
- Fade in/out, looping and volume control
- Classification of sounds for playback management
- Multiple commands may be issued simultaneously to the 'controller' file
- Multiple controller files to avoid resource contention
- Runs hidden in background & will terminate itself shortly upon game close
- Calls to sciAudio are performed very similarly to the built-in SCI sound calls
- Poor man's encryption (MP3's only) - simply rename your .MP3 to be .sciAudio
- Log file for troubleshooting sound playback ('sciAudio.log' located in same directory as sciAudio.exe)
Works only in Windows (requires .NET framework)
- Place the 2 nAudio DLLs & executable into a subfolder within your game named 'sciAudio'
- Create subfolders within sciAudio for your playback files. Something like this: <gamedir>\sciAudio\effects
- Include the sciAudio.sc script & 'use' it in any script you wish to have sciAudio playback in
command <playback command>
- play: Begins playback of a sound
- stop: Stops playback of a sound
- change: Changes playback of a sound. Used primarily for volume & loop control; very limited usefulness.
- playx (play exclusively based on sound class): Behaves just like the 'play' command, but will stop any currently playing sounds with the same sound class as the specified class.
fileName: Filename to playback, including the path.
soundClass: Sound class to assign this sound. This is simply a string assigned to the playback for the purpose of potentially changing or terminating all sounds with the same sound class at a later time.
For example, you might have several sound effects playing simultaneously in a room. Upon leaving the room, you want to stop all the sound effects. To do this you would issue a stop <soundclass> command, which would stop all currently playing sounds with the specified sound class. If no sound class is specified, a default sound class of "noClass" will be used.
volume: Playback volume of sample. Default is 100 (100% volume of sample). Range: 0 to ~300?
fadeInMillisecs: Number of milliseconds to fade in the sample
fadeOutMillisecs: Number of milliseconds to fade out a sample. Can be issued with a play or stop command
loopFadeInMillisecs: Number of milliseconds to fade in a sample between loops.
loopFadeOutMillisecs: Number of milliseconds to fade out a sample between loops.
loopCount: Number of times to loop the sample. Default is 0; -1 is infinite
conductorFile: 'Conductor' file name used to place commands into. This is how commands are passed between the SCI virtual machine & the sciAudio application. The sciAudio application constantly polls all conductor files, looking for changes, & executes them. These files must have an extension of .con.
The default value for this parameter is command.con. This parameter is useful in the event you have many near-simultaneous playback commands issued from within your game. Multiple different conductor files can mitigate the potential file contention of using just a single file. Most game developers will probably not need to use this option.
playXFadeOutMillisecs: Only used for the 'playx' command. Upon terminating any currently playing sounds for a sound class, this value will be used for the fade out.
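Conceptually, the conductor-file mechanism is a simple change-poll loop. A hypothetical Python sketch of the idea (sciAudio itself is a .NET application; this is only an illustration, not its actual code):

```python
import os

def read_new_commands(conductor_path, last_mtime):
    """Return (commands, new_mtime). Commands are returned only when the
    conductor file's modification time has advanced past last_mtime."""
    if not os.path.exists(conductor_path):
        return [], last_mtime
    mtime = os.path.getmtime(conductor_path)
    if mtime <= last_mtime:
        return [], last_mtime  # no change since the last poll
    with open(conductor_path) as f:
        commands = [line.strip() for line in f if line.strip()]
    return commands, mtime

# A player would call this in a loop for every registered .con file,
# executing whatever commands have appeared since the last poll.
```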
Code:
(use "sciAudio")
(local
    snd
)
(instance aud of sciAudio
    (properties)
    (method (init)
        (super:init())
    )
)

Code:
// basic playback
(send snd:
    command("play")
    fileName("effects\\score.sciAudio") // important: note the two backslashes in path name!
    volume("35")
    loopCount("0")
    fadeInMillisecs("3000") // fade in beginning by 3 secs
    fadeOutMillisecs("2000") // fade out ending by 2 secs
    init()
)

Code:
// looping playback example
(send snd:
    command("play")
    fileName("effects\\loopedSound.sciAudio")
    volume("100")
    loopFadeInMillisecs("2000")
    loopFadeOutMillisecs("2000")
    loopCount("-1") // loop forever
    init()
)

Code:
// stop with 5 second fade out
(send snd:
    command("stop")
    fileName("music\\introMusic.sciAudio")
    fadeOutMillisecs("5000")
    loopCount("0")
    init()
)

Code:
// change currently playing volume for a sound class
(send snd:
    command("change")
    soundClass("myMusicSoundClass")
    volume("50") // set playback to 50% volume
    init()
)

Code:
// stop looping a particular sound file
(send snd:
    command("change")
    soundClass("music\\introMusic.sciAudio")
    loopCount("0") // stop looping
    init()
)

Code:
// exclusive playback (starts new playback of sound
// & stops any already playing sounds (with fadeout) for same soundClass
(send snd:
    command("playx")
    soundClass("narration")
    conductorFile("speech\\narrate.con")
    fileName("speech\\newNarration.sciAudio")
    playXFadeOutMillisecs("1500") // will fade currently playing "narration" sound class sounds
    volume("200")
    init()
)
New version attached here and also put all this code into a git repo, apparently it wasn't before: https://bitbucket.org/nellisjp/sciaudio/
Now there is an sciAudio.ini file (needs to be in the sciAudio subdirectory of your game) that is read in and used for monitoring a list of (5) possible applications to monitor to determine when to kill sciAudio. It's a bit hacky, and 5 applications might be overkill but it gives a little flexibility and is backward-compatible with the previous versions which monitored either 'RUN', 'DosBox' or 'ntvdm'. I also improved the logging around all of this so it's a bit easier to troubleshoot.
Here's what the included INI file looks like, modify it to suit your needs:
[sciAudio] GameExecutableName1=RUN GameExecutableName2=DOSBox GameExecutableName3=ntvdm GameExecutableName4= GameExecutableName5=
Version 1.2.1 includes a fix for an issue that was causing the application to not work reliably: it was checking for the game application 'too soon' and would abruptly exit. The fix consists of a 5 second delay before watching currently running processes.
If you decide to re-use the same sound instance for multiple sounds, it's up to you to reset any previously specified properties, otherwise they will be carried over into the next command
Use the sciAudio.log file (located in the same directory as executable) for troubleshooting sound playback.
Download from here (source included):
|
OPCFW_CODE
|
750i Mobo + BFG 9800 GT = fail?
I am quite new to this forum, so please bear with me. I recently purchased an EVGA 750i mobo, 4 GB of OCZ RAM (DDR2), and a wonderful BFG 9800 GT video card. The rig built around that has a 450 W PSU, a 250 GB Maxtor HDD, and an Intel Q6600 CPU. It all started when I installed all of the hardware in the case for the first time, then fired her up. First POST code: 7F (obviously it just wanted user input, i.e. F1). However, there was nothing showing on my monitor. In fact it acted as though it wasn't plugged in. So I reseated the VC, reconnected and restarted; same result. So, me being the type of guy I am, I repeatedly rebooted. Finally, it showed the BIOS, I pressed F1 and continued, and everything SEEMED ok. Performed a fresh Windows XP Service Pack 2 installation just fine. After that install completed I attempted to install the VC drivers via the CD supplied. About half-way through the installation, the monitor flashes to a black screen with a white underscore blinking in the top left, indefinitely. At this point the computer was no longer restartable, so after waiting at least 30 min with no change, I pulled the power and tried again. SAME RESULT. So I decided perhaps the BIOS of the motherboard was out of date as well as the VC driver. :argh: So I downloaded the newest version of both (750i mobo and 9800 GT) drivers and attempted to install the newest VC driver first. Worked! At least I thought so; after the install completed and the computer restarted, STILL NO VIDEO OUTPUT! So I performed the ridiculous reboot ritual and finally got video; then when Windows tried to load, FREEZE. So then I performed the newest BIOS update and retried. It still freezes in Windows, and 90% of the time there is no video. After attempting one last time just FFS, to no avail, I called them. Yes, the BFG tech support hotline. After hours on hold, I finally got through to someone who directed me to repeat the entire above process, "just to be sure". So I did. What do ya know? Still freezes!
So they tell me they're not sure what could be the problem and for me to just send my card in to have them take a look.
Pardon me for the long story; I just wanted to be descriptive so perhaps someone can help.
Also, if you require more information, simply ask and I'll do my best to supply what I can.
So after all that, I'm just about ready to curl into a ball and die.
I would greatly appreciate ANY advice on the two hardware devices.
|
OPCFW_CODE
|
Error { kind: SendRequest, source: Some(hyper::Error(IncompleteMessage))
Hi, I'm facing some issues using the adapter with a Python Lambda function packaged as a Docker image using the FastAPI framework. The Lambda's functionality is implemented mainly with async functions, and when deploying the Lambda and invoking an async endpoint, the following error is shown:
{ "errorType": "&alloc::boxed::Box<dyn core::error::Error + core::marker::Send + core::marker::Sync>", "errorMessage": "client error (SendRequest)" }
This does not happen for endpoints that use only sync operations. I get this response when testing my Lambda from the AWS console, using the AWS_LWA_PASS_THROUGH_PATH variable for non-HTTP triggers.
And enabling RUST_LOG=debug, I just see this in the logs:
ERROR Lambda runtime invoke{requestId="52dc7925-67d8-4682-9e83-f781e93ae4da" xrayTraceId="Root=1-65fa2420-49e792476009db305da67ab2;Parent=7106713d10b989f9;Sampled=0;Lineage=19427698:0"}: lambda_runtime: Error { kind: SendRequest, source: Some(hyper::Error(IncompleteMessage)) }
I'm running it behind a Lambda Function URL and I get the same logs using the URL. When I run this locally using sam local start-api it works just fine. Any idea on how to troubleshoot this correctly? Thanks in advance.
Could you share a minimum code that can reproduce this issue? And what is the trigger for the Lambda function?
Thanks for looking into this...I'm using Nemoguardrails from a Lambda. The function is invoked using a Function URL. Here is a sample code on how it works:
import json
from fastapi import FastAPI, HTTPException, Request
from nemoguardrails import RailsConfig, LLMRails

app = FastAPI()

YAML_CONFIG = """
models:
  - type: main
    engine: openai
    model: gpt-4

rails:
  input:
    flows:
      - self check input
      - allow input

prompts:
  - task: self_check_input
    content: |
      Your task is to check if the user message below complies with safety policies.

      Policy for the user messages:
      - should not contain word TEST

      User message: "{{ user_input }}"

      Question: Should the user message be blocked (Yes or No)?
      Answer:
"""

COLANG_CONTENT = """
define bot allow
  "ALLOW"

define subflow allow input
  bot allow
  stop

define bot refuse to respond
  "DENY"
"""

@app.get("/")
def get_root():
    return {"message": "FastAPI running in a Lambda function"}

@app.post("/run")
async def execute(request: Request):
    raw_body = await request.body()
    body_str = raw_body.decode('utf-8')
    payload = json.loads(body_str)

    config = RailsConfig.from_content(yaml_content=YAML_CONFIG, colang_content=COLANG_CONTENT)
    rails = LLMRails(config, verbose=True)
    message_content = payload["input"]

    # Generate response from guardrails service
    output = await rails.generate_async(messages=[{
        "role": "user",
        "content": message_content
    }])

    if output.get("error"):
        print(f"Error: {output.get('error')}")
        raise HTTPException(status_code=500, detail=output)

    return output
/run endpoint expects a simple JSON object like this:
{"input":"Some test text"}
This is the Dockerfile I'm using to build the image. It is adapted from nemoguardrails docs to use AWS Lambda adapter:
FROM public.ecr.aws/docker/library/python:3.10-slim
# Copy the Lambda Adapter from the public ECR
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.8.1 /lambda-adapter /opt/extensions/lambda-adapter
# Install OS dependencies
RUN apt-get update && apt-get upgrade -y && apt-get install -y gcc g++ make cmake
RUN mkdir -p /myapp
COPY requirements.txt /myapp
RUN pip install --upgrade pip && pip install -r /myapp/requirements.txt --no-cache-dir --target /myapp
COPY sample.py /myapp
WORKDIR /myapp
RUN python -c "from fastembed.embedding import TextEmbedding; TextEmbedding('sentence-transformers/all-MiniLM-L6-v2');"
# Make port 8000 available to the world outside this container
ENV PORT=8000
# Start nemoguardrails service
# FastAPI
CMD exec python -m uvicorn --port=$PORT sample:app --host=<IP_ADDRESS>
Hi, I'm facing the same issue. I have a FastAPI App using the lambda adapter to trigger it via API Gateway.
My POST requests work perfectly fine. The GET route, however, throws an internal server error and the lambda logs show what @sebasjuancho mentioned:
ERROR Lambda runtime invoke{requestId="c39861b5-955a-447c-91e8-319222d9caad" xrayTraceId="Root=1-663756d6-69e1fdbd321ff6c230681174;Parent=09faa06e7ad0673c;Sampled=0;Lineage=5726a561:0"}: lambda_runtime: Error { kind: SendRequest, source: Some(hyper::Error(IncompleteMessage)) }
Did anybody find a workaround or solution for this?
IncompleteMessage errors usually mean your web app drops the connection. Does the app crash?
@bnusunny I am running the same application code on an EC2 instance (with ALB) and with the lambda adapter (with API-Gateway proxy-path). The request on the EC2 instance is successful but the one to the lambda crashes.
@annStein The Lambda execution environment has a few differences from an EC2 instance, such as a read-only root file system (except /tmp), no root permissions, etc. It is better to enable debug logs and see if any error happened in your application.
@bnusunny I figured it out, eventually. The lambda had only 128MB RAM while the EC2 instance had 1024MB. The lambda ran into a timeout during a hash-function. This led to the fact that I couldn't see any other logs or errors. Nevertheless, thank you for looking into this!
|
GITHUB_ARCHIVE
|
Let's better define the [enigmatic-puzzle] tag
As of posting the enigmatic-puzzle tag has this minimal usage guidance:
Puzzles where the genre or solving strategy of the puzzle is not explicitly stated; puzzles where the puzzler must deduce what type of puzzle it is.
But there is no further explanation or information about this tag, and it's intended use.
If you look at this SEDE query, you can see the top 5 tags without a wiki:
| Tag | Posts |
|---|---|
| enigmatic-puzzle | 1809 |
| letter-sequence | 111 |
| time | 100 |
| crossword-clues | 64 |
| progressive-matrix | 62 |
That the enigmatic-puzzle tag is so widely used, without a good usage blurb and no further wiki information, is slightly troubling.
It causes the tag usage to vary wildly, and (anecdotally) in the worst cases encourages making questions obscure for the sake of increasing the difficulty rather than making them better puzzles (evidenced by the fact that I've seen some enigmatic puzzles get highly upvoted, only to nose-dive once the 'trick' of solving is revealed).
Here is a suggestion for the sort of thing I'd like to see in the usage guidance, partly based off of this answer:
Use this tag when the solver's first task is to determine how to solve the puzzle. Merely omitting information on how to solve is not sufficient to justify using this tag, and an enigmatic introduction or pseudo-explanation is favoured.
However I'd still like to see answers discussing how to improve on that block of text, and what to put in the wiki-info for the tag. Preferably we'd list some good examples, and also some more information what makes a bad enigmatic-puzzle, i.e. what to avoid.
I am not following what the concern is, the tag has always seemed useful to me, as if a puzzle lacks info on how to solve, but has the tag, I know it was intentional and can work with that. Maybe some examples of puzzles that omitted information on how to solve, yet the puzzler did not have to deduce what type of puzzle it was? Maybe they were simply mistagged and need to be edited? I have no issue with your proposed new text, it seems very similar to the first, yet a little more difficult for me to understand what it is getting at.
@Amoz well, that's how you understand the tag, but I think the point of this is to develop better usage guidance so that everyone can agree on what the tag means and how to use it.
Let's create the tag info for the enigmatic-puzzle tag!
The usage guidance is fine because it summarizes what an enigmatic puzzle is. However, the tag info could surely use some work (just look at the riddle tag).
This answer is intended to enable community collaboration on generating the tag info for the enigmatic-puzzle tag. Please use our chat feature to communicate with members of the community on sub topics and iron out ideas and nuances prior to adding to this answer. The overall objective is to answer the following questions:
What is an enigmatic puzzle?
What makes a good enigmatic puzzle?
How can an author determine the minimum amount of information required for a solution?
What common techniques exist for solving enigmatic puzzles?
Note: Before migrating the results of the community's efforts over to the tag wiki, be sure to remove the "contributors" statement from each section.
What is an enigmatic puzzle?
An enigmatic puzzle is one where the solving method is not given. Some great examples of enigmatic puzzles are BmyGuest's "hyper-modern art" series, like this puzzle and this one. They are also common in puzzle hunts such as the MIT Mystery Hunt, ΣUMS, and P&A Magazine.
They frequently involve a leap of intuition hinted at through the puzzle's framing, and their solutions are usually a common English word or phrase.
Contributors: @Stevo, @Taco タコス, @Deusovi (through this post)
What makes a good enigmatic puzzle?
Enigmatic puzzles may contain misdirections and deceptions ("red herrings") intended to lead the solver away from the intended solve path. A good enigmatic puzzle involving red herrings will also have good hints leading towards the correct solve path, so that solvers do not waste all their time pursuing fruitless avenues of inquiry.
Contributors: @Avi
How can the bare minimum be determined?
TODO: Describe how to determine the bare minimum amount of information required to solve an enigmatic puzzle.
How can I solve an enigmatic puzzle?
An enigmatic puzzle often, but not always, leaves its first few steps unclear; working out how to begin is a non-trivial puzzle in itself. Such puzzles leave components of either the puzzle's goal or its solving process unspecified, and the solver needs to work out what the puzzle-maker intended as information and what is a red herring.
Contributors: @Stevo
How do I write an answer?
When you post an answer to an enigmatic puzzle, it is usually a good idea to explicitly specify how each element of the puzzle known at present contributes to the next step in solving the puzzle. In this way, readers can clearly understand the elements of the puzzle and how they lead to the solution. Writing an answer with too few details is discouraged for most questions, and is highly problematic for enigmatic puzzles.
This is because enigmatic puzzles are by nature harder to understand, so omissions or poor explanations can make it impossible for readers to follow the answerer's logic.
Contributors: @Avi
Does an "Instructionless Puzzle" (IP) count as an Enigmatic Puzzle? An IP has no given solving method but is usually accompanied by a set of examples from which to deduce the instructions. However, I don't think an IP usually has "red herrings" or "hints" in the way Hunt-style puzzles do.
|
STACK_EXCHANGE
|
re: Apple Core Rot
Does Apple test anything? Stuff that worked for 15 years gets broken and stays that way.
UPDATE, Sept 29: the nitwit engineers have not only NOT fixed this issue, but made it far worse in macOS Ventura 13.6; sending Terminal just once to the background (eg using any other app just once) breaks all the window command key shortcuts. What a bunch of incompetents—engineers and the (non-existent?) Q/A team. It impairs my workflow every day.
Screen captures are broken, command keys in Terminal are broken, and I cannot even see the user interface sometimes on my calibrated NEC PA271Q, due to near-white gray in the UI. Lots of other problems but those affect me daily.
I work with 5 Terminal windows; here is what I know:
- When Terminal is launched, all shortcuts are working and continue working, for a time.
- Seems to involve having two displays.
- Never happened on Monterey, never happened on Intel.
- At some point and for unknown reasons, all shortcuts stop working and all Terminal will do is beep when one is used.
- Only quitting and relaunching Terminal will restore functionality.
For someone who keeps Terminal open all day long and uses it constantly, this is a major workflow-disrupting headache that breaks 20 years of habits. I now have to quit and restart Terminal about 20 times a day, which includes restarting ssh connections, etc. I despise Apple sloppiness in foisting such broken software upon its users.
Sam W writes:
I may have a workaround to the Terminal.app issue you're experiencing, though it'll require a slight workflow change: use the `screen(1)` utility to keep your shells/SSH connections alive, even if you have to restart your Terminal.app. It uses Control for its (many) key mappings, so that should be safe.
0. Create and customize your ~/.screenrc, to make it many times more useful. I recommend copying and pasting the default FreeBSD one (https://reviews.freebsd.org/differential/changeset/?ref=20854) into there, as it'll tell you how many you have open, date, which is active, and so on.
1. Run `screen -t SESSIONAME` in the Terminal, for example: screen -t Lloyd1
2. You can create additional "screens" inside this one Terminal tab/window via screen's shortcuts: ^a ^c
3. You can switch to the next "screen": ^a ^n
4. If Terminal.app craps the bed, you can nuke it and ignore the message about running tasks. Quit/restart Terminal.app, and type: `screen -D -R` to re-attach the running screen session. If you have more than one named session, you have to specify the name, but you get everything back, including SSH sessions, as they're kept alive.
5. I recommend checking its manpage, because it has a ton of shortcuts and customization options, and also because backscroll works way differently than in the usual Terminal.app.
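For step 0, here is a minimal ~/.screenrc sketch. These are illustrative settings of my own choosing, not the FreeBSD default linked above:

```
# Illustrative ~/.screenrc sketch -- not the FreeBSD default mentioned above
startup_message off          # skip the splash screen
defscrollback 10000          # keep a larger scrollback buffer
hardstatus alwayslastline    # reserve the bottom line for a status bar
# status bar: hostname, window list (current window highlighted), clock
hardstatus string '%{= kG}[%H] %{= kw}%-w%{= BW}%n %t%{-}%+w %{= kG}%c'
```

The `hardstatus` line is what gives you the "how many you have open, which is active" display described above.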
I hope this helps!
MPG: I appreciate the help/suggestion... but I’m not really willing to engage in this complexity or the hour or three required to setup and tune such things.
|
OPCFW_CODE
|
import {clearOverlayColor, initBoard, generateBoardVerteces} from './Objects.js';
export default function temp(){ return 1;};
export {Board};
class Position{
constructor(x, y, name){
this.x = x;
this.y = y;
this.name = name;
}
}
class Board{
constructor(size, playerOne, playerTwo, turnTimer = 30){
this.size = size;
this.placed = [];
this.players = [playerOne, playerTwo];
this.currentPlayer = 1;
this.turnTimer = turnTimer;
this.turnTimerStamp = 0;
this.turnTimerCheck = 0;
this.gameFinished = true;
}
setTurnTimer(turnTimer){
if(this.getGameState()){
if(Number.isInteger(turnTimer) && turnTimer > 0){
this.turnTimer = turnTimer;
console.log(`Turn timer set to ${turnTimer} seconds.`);
}else
console.log(`Can't set the turn timer to "${turnTimer}", please enter a whole number above 0.`);
}else
console.log(`Can't change the turn timer while a game is in progress.`);
}
getTurnTimer(){
return this.turnTimer;
}
getTurnTimerCheck(){
return this.turnTimerCheck;
}
setTurnTimerCheck(value){
this.turnTimerCheck = value;
}
setGameState(state){
this.gameFinished = state;
}
changeGameState(){
this.gameFinished = !this.gameFinished;
}
getGameState(){
return this.gameFinished;
}
getTimeStamp(){
return this.turnTimerStamp;
}
doTimeStamp(){
this.turnTimerStamp = new Date().getTime();
}
toString() {
return "board";
}
place(x, y, name){
if(!this.isPlaced(x, y)){
this.placed.push(new Position(x, y, name));
this.players[this.currentPlayer].points++;
return true;
}
return false;
}
isPlaced(x, y){
return this.placed.some(pos => pos.x == x && pos.y == y);
}
isPlacedArea(xFrom, xTo, yFrom, yTo){
for(let x=xFrom; x<xTo; x++){
for(let y=yFrom; y<yTo; y++){
if(this.isPlaced(x, y)){
console.log("Can't place within an area on the board that's been played already.", "black");
return true;
}
}
}
return false;
}
isWithinBoard(xFrom, xTo, yFrom, yTo){
for(let x=xFrom; x<xTo; x++){
for(let y=yFrom; y<yTo; y++){
if(x < 0 || x >= this.size || y < 0 || y >= this.size){
console.log("Can't place outside of the board.", "black");
return false;
}
}
}
return true;
}
canPlace(x, y, name){
//if(x == 0 && y == 0)
for(let i=y-1; i<y+2; i++){
for(let j=x-1; j<x+2; j++){
if( (i == (y - 1) && j == (x)) || (i == (y + 1) && j == (x)) || (i == (y) && j == (x-1)) || (i == (y) && j == (x+1)) ){
let temp = this.getPlaced(j, i);
if(temp.name != "empty" && temp.name === name){
return true;
}
}
}
}
if(x == 0 && name == this.players[0].name || x == this.size-1 && name == this.players[1].name) return true;
console.log(`Must be placed next to one or more of the already played rectangles or from the starting side of your color.`, "black");
return false;
}
getPlaced(x, y){
const found = this.placed.find(pos => pos.x == x && pos.y == y);
return found ?? new Position(0.0, 0.0, "empty");
}
getCurrentPlayerName(){
return this.players[this.currentPlayer].name;
}
getCurrentPlayerPoints(){
return this.players[this.currentPlayer].points;
}
changeCurrentPlayer(light, pre){
if(this.currentPlayer == 0){ // Red
this.currentPlayer = 1;
light.color = [1.0, 0.43, 0.43, 1.0];
pre.color = [0.78, 0.0, 0.0, 1.0];
}else{ // Green
this.currentPlayer = 0;
light.color = [0.63, 1.0, 0.69, 1.0];
pre.color = [0.6142, 0.83, 0.6502, 1.0];
}
console.log(`${this.getCurrentPlayerName()}'s turn.`, `${this.getCurrentPlayerName()}`);
}
playerDone(light, pre){
let currentPlayer = this.players[this.currentPlayer];
let otherPlayer;
if(this.currentPlayer == 0) otherPlayer = this.players[1];
else otherPlayer = this.players[0];
currentPlayer.done = true;
if(!otherPlayer.done){
console.log(`${currentPlayer.name} finished with a score of: ${currentPlayer.points} points.`, `${currentPlayer.name}`);
this.changeCurrentPlayer(light, pre);
}else{
if(currentPlayer.points > otherPlayer.points)
console.log(`${currentPlayer.name} won! Game finished with ${currentPlayer.name}'s ${currentPlayer.points} points to ${otherPlayer.name}'s ${otherPlayer.points} points!.`, `${currentPlayer.name}`);
else if(currentPlayer.points < otherPlayer.points)
console.log(`${otherPlayer.name} won! Game finished with ${otherPlayer.name}'s ${otherPlayer.points} points to ${currentPlayer.name}'s ${currentPlayer.points} points!.`, `${otherPlayer.name}`);
else
console.log(`It's a draw! Game finished with ${otherPlayer.name}'s ${otherPlayer.points} points to ${currentPlayer.name}'s ${currentPlayer.points} points!.`, `black`);
this.endGame();
}
}
isOtherPlayerDone(){
if(this.currentPlayer == 0){
return (this.players[1].done);
}else
return (this.players[0].done);
}
resetBoard(gl, VBO, glConsole){
this.endGame();
clearOverlayColor(gl, VBO.findByName("Red"));
clearOverlayColor(gl, VBO.findByName("Green"));
clearOverlayColor(gl, VBO.findByName("preColor"));
clearOverlayColor(gl, VBO.findByName("lightColor"));
clearOverlayColor(gl, VBO.findByName("pointRectIndicator"));
this.players[0].done = false;
this.players[0].points = 0;
this.players[1].done = false;
this.players[1].points = 0;
this.placed = [];
glConsole.clear();
}
start(gl, VBO, glConsole){
if(this.getGameState()){
this.resetBoard(gl, VBO, glConsole);
this.currentPlayer = Math.floor(Math.random()*2); // Randomize starting player
this.changeCurrentPlayer(VBO.findByName("lightColor"), VBO.findByName("preColor"));
console.log(`${this.getTurnTimer()} seconds left.`, this.getCurrentPlayerName());
this.doTimeStamp(); // Start timer
this.setGameState(false);
}else
console.log(`A game is already in progress.`);
}
endGame(){
this.setGameState(true);
}
setSize(gl, VBO, glConsole, size){
if(size % 2 == 0){
this.endGame();
this.size = size;
generateBoardVerteces(gl, VBO.findByName("board"), this.size);
initBoard(gl, VBO.findByName("preColorTemp"), this);
this.resetBoard(gl, VBO, glConsole);
console.log(`Game board changed size to ${size}*${size}.`);
}else if(size % 2 == 1){
console.log(`The board size must be an even number.`);
}else{
console.log(`"${size}" is not a valid input for the size of the board.`);
}
}
rules(){
console.log(`The goal of the game is to tag as many rectangles on the board as possible. The board is 20*20 rectangles wide (can be changed) and the players start on opposite sides of each other. Each rectangle is worth 1 point and you may tag 2*2, 3*3, 4*4, 5*5 rectangles each turn. The turns alternate and a standard timer of 30 seconds is imposed (can be changed).` +
` If a player fails to tag any rectangles within the time limit the game ends for that player, while the other player may continue until there are no more rectangles to tag (a player can end the turn by pressing ctrl + s). The game ends when both players can't tag any more rectangles.`);
}
}
|
STACK_EDU
|
GMoney is a platform facilitating the loan transactions between the borrowers and the NBFCs/Banks. All loan applications are approved and sanctioned by NBFCs/Banks registered with RBI and communicated upfront during Loan application. The Platform provides access to an online platform bringing together consumers, financial institutions, data partners and other partners willing to abide by their respective Terms & Conditions. GMoney’s lending partners provide various kinds of medical loan products as specifically supplied by the Users through the Platform.
THE CLIENT’S CHALLENGE
The client was looking to expand its reach to consumers, and for that it needed a secure AWS infrastructure that followed all compliance requirements and offered the flexibility to deploy builds easily. Previously, the infrastructure was flat, with all instances, including databases, accessible over public IPs; as the client grew, they wanted a proper 3-tier setup with Web, Application and Database segregation. While setting up the infrastructure, the client was also concerned about cost, and they were looking for help with database optimization, where they lacked the expertise.
The GMoney application development team uses Github as their code repository. Jenkins is used as the CI/CD tool here. The deployment to the production environment is kicked off manually using Jenkins. The backend application services (written in Node.js and Python) are deployed in a blue-green fashion where new sets of auto scaling instances are spun up. Once the new set is healthy, they are replaced in place of the original instances (the original instances are killed thereafter). The frontend application (written in AngularJS) code is placed in AWS S3 and served to end customers using AWS Cloudfront. This CD process is set up in Jenkins and kicked off manually during production deployment.
INSIGHT TO ACTION
The TECHPARTNER team worked with the GMoney management team and tech leads to understand the project needs. Together we chalked out a plan and finalized the architecture. Our focus was on securing the infrastructure, making it highly available, and providing an easy CI/CD process. We also worked with the client's database team to understand the slow queries, identifying missing indexes as well as queries that were not written optimally. We shared the query execution plans and helped them tune the database for better performance. With the optimized queries and schema, the client was able to downgrade the DB instance without compromising performance.
Scalable Architecture: The scalable architecture let GMoney serve users with improved response times, which in turn helped acquire more users
Performance: As the application and deployment are modular, the whole CI/CD process became easy and efficient
Automation: Automation reduced manual deployment time by 90%, freeing developers to concentrate on innovation
Optimisation: Optimized database queries reduced usage by 70%, which in turn allowed downgrading the DB size and cut the monthly DB cost by 50%.
With the scalable architecture, the client was easily able to handle more than a million requests/min on the hybrid infrastructure without any hiccups.
- AWS WAF: Since this is a consumer facing application, WAF is used for better security.
- AWS CloudFront: The frontend of the application is served via CloudFront for quicker response to customers and take advantage of CloudFront edge locations.
- AWS Application Load Balancer (ALB): The application is hosted on private auto scaling group instances and ALB is used to load balance and expose the application to the internet. ACM is used to generate SSL certificates and applications are made to serve only SSL encrypted payload to end customers.
- AWS EC2 Auto Scaling Groups (ASG): The application is set up on auto scaling group EC2 instances, so the ASG handles scaling automatically. Whenever application usage crosses a threshold of 60% CPU, the ASG horizontally scales out to accommodate the inflow of requests, enabling seamless scale and user experience. The ASG scales in whenever the load subsides; this helps keep costs in control.
- AWS Cloudwatch: AWS Cloudwatch is used for monitoring and alerting. Resource usage of the infrastructure is monitored using Cloudwatch dashboards. Whenever any critical threshold is breached (e.g. too many 5xx on ALB), an alert is sent out to the Operations and Engineering team for investigation. Billing alerts have also been set.
- AWS Route53: Route53 is being used for DNS. DNS zones are set for internal as well as public facing records.
- AWS S3: S3 is used for storing frontend application code and static assets (which are served by CloudFront eventually). S3 is also used to store backup data. Long term data to be retained will be moved to Glacier periodically.
- AWS CodeBuild: It is used to build and compile the application.
- AWS CodeDeploy: It is used for deployment and it helps to make deployment in blue/green.
- AWS Lifecycle Manager: It is used to take instance’s image backup.
- AWS RDS (PostgreSQL): This is the backend for the python applications.
- AWS KMS: It is used for encryption of instance’s disks
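The auto-scaling behaviour described in the ASG item above (scale out past 60% CPU, scale in when load subsides) maps naturally onto a target-tracking policy. The CloudFormation fragment below is a hypothetical sketch only; the resource names (CpuTargetPolicy, AppAutoScalingGroup) are invented for illustration:

```yaml
# Hypothetical sketch -- resource names are placeholders
CpuTargetPolicy:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AutoScalingGroupName: !Ref AppAutoScalingGroup
    PolicyType: TargetTrackingScaling
    TargetTrackingConfiguration:
      PredefinedMetricSpecification:
        PredefinedMetricType: ASGAverageCPUUtilization
      TargetValue: 60.0
```

A target-tracking policy handles both scale-out and scale-in around the target value, which matches the "scales in whenever the load subsides" behaviour without needing separate alarms.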
|
OPCFW_CODE
|
substring() returns blank in hive
I want to extract the last 10 digits from the values of a column, so I am using the substring built-in function in Hive. But if the field value is shorter than the defined length (<10 characters), it returns a blank field.
Input :
orig_number
140976526012
140980434512
1740016
1740016
17250460171
I am using this code.
select *,length(orig_number) as leng,substr(orig_number,-10) as subbstring from num_table sort by orig_number;
Output is:
orig_number leng subbstring
140976526012 12<PHONE_NUMBER>
140980434512 12<PHONE_NUMBER>12
1740016 7
1740016 7
17250460171 11 725046017
Retrieve up to 10 characters from the end of the line
select orig_number
,regexp_extract (orig_number,'.{1,10}$',0) as orig_number_suffix
from num_table
;
+--------------+--------------------+
| orig_number | orig_number_suffix |
+--------------+--------------------+
|<PHONE_NUMBER>12 |<PHONE_NUMBER> |
|<PHONE_NUMBER>12 |<PHONE_NUMBER> |
| 1740016 | 1740016 |
| 1740016 | 1740016 |
|<PHONE_NUMBER>1 |<PHONE_NUMBER> |
+--------------+--------------------+
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF
https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html
And here is why you get a blank: when the absolute value of the start position exceeds the input length, makeIndex returns null, and substr then yields an empty result.
Seems to me like a bad design of the substr function.
UDFSubstr.java
private int[] makeIndex(int pos, int len, int inputLen) {
if ((Math.abs(pos) > inputLen)) {
return null;
}
...
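A minimal Python sketch of the two behaviours (my approximation of the semantics, not Hive's actual code), showing why substr goes blank on short values while the regex keeps them:

```python
def hive_substr_tail(s, n=10):
    # Mimics Hive's substr(s, -n): per makeIndex above, a start position
    # whose absolute value exceeds the length yields a blank result.
    if n > len(s):
        return ""
    return s[-n:]

def regexp_tail(s, n=10):
    # Mimics regexp_extract(s, '.{1,10}$', 0): up to n trailing characters.
    return s[-n:]

# hive_substr_tail("1740016") -> "" (blank), regexp_tail("1740016") -> "1740016"
```

For values of 10+ characters the two agree; they only differ on short inputs.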
@TobySpeight - I've edited the answer. That being said - (1) An answer's scope is subject to limitations of time, internet availability, computer availability and the quality/level of interest of the post itself. This specific answer was given from my cellphone while I was waiting for an elevator. (2) Please bear in mind that an SO answer is not intended to function as a tutorial but to solve a specific problem. (3) I invite you to take a look at some of my other answers
|
STACK_EXCHANGE
|
Is it viable to compile my project in java8 and then run it on java 17
I've read a few articles where in jdk 9, .* APIs have been removed. By removed does this mean they are completely removed from the jdk 9 or are they marked as deprecated?
Given the above statement, if in case the project is jdk 8 compiled, would it run without any issues on jdk 17? I am trying this for the first time, and had issues with tomcat not being able to support jdk 8 because of modularity changes.
I am planning to compile in JDK 8 and then run on JDK 17 until the entire project is JDK 17 compliant (by JDK 17 compliant, I mean using the updated APIs rather than the existing deprecated APIs of JDK 8).
Am I headed in the correct direction? or should I follow a different migration approach?
Not sure what you mean by "until the entire project is JDK 17 compliant"; if it isn't, then how will you run it on JDK 17?
@kaya3 By JDK 17 compliant, I meant to use the updated APIs rather than the existing deprecated APIs
Why do you want to “compile in jdk 8 and then run on jdk 17”? If you don’t run with JDK 8, there is no point in using it to compile the code. Compiling with JDK 17 will tell you immediately whether your code uses a removed API (unless reflection is involved), which is much better than finding out at runtime by chance.
"compile in jdk 8 and then run on jdk 17" makes a smooth transition possible, as it is not required to update all instances at once
By removed does this mean they are completely removed from the jdk 9 or are they marked as deprecated?
Some have been just marked as deprecated. Others have been removed entirely, though generally not in Java 9. Most of the removals were done later, and some are still to occur. If you look at elements that are now annotated as @Deprecated, the annotation in some cases will formally indicate that the element is going to be removed.
Given the above statement, if in case the project is jdk 8 compiled, would it run without any issues on jdk 17
Not necessarily. Another thing that has happened is that access to "internal" Java SE APIs has been progressively closed off. So if your application uses these APIs, in Java 9 you got warnings, by Java 11 you got errors by default, and by Java 17 some access has become (I think) impossible.
So ... there can be "issues".
The correct approach to migration is simply to do it. And test, test, test until you have ironed out all of the problems.
Obviously, that means that you need a test server, and you need to use an automated test framework with unit tests, functional tests, UI tests and so on. But those are just good Software Engineering practices. You should already be following them.
I am planning to compile in jdk 8 and then run on jdk 17 until the entire project is jdk 17 compliant (By jdk 17 compliant, I meant to use the updated APIs rather the the existing deprecated APIs of jdk8).
Am I headed in the correct direction? or should I follow a different migration approach?
No. As @Holger points out, you are liable to run into lots of runtime errors ... due to problems that the Java 17 compiler would have identified.
There is only one approach ... really. Compile on Java 17 until you have identified all of the compile time dependency problems; i.e. using APIs that have been removed or closed off. Then run on Java 17 and test, test and test until you have found all of the other problems.
Before you start, it would be advisable to check all of your project's dependencies (libraries, etc) to see if they are supported on Java 17. Then update them to the correct (probably latest) versions.
Java 17 has only been released for a few weeks, so there is a good chance that some of your dependencies haven't caught up yet. If that is the case, it may be a better idea to target Java 11 LTS at this stage.
|
STACK_EXCHANGE
|
[SPIKE] Investigate the utility of an input macro separate from govukInput
Background
Currently, the Date input component directly imports and uses the govukInput macro to provide the date fields; and the Character count component directly imports and uses the govukTextarea macro for the textarea.
In both cases, importing the macro carries with it the entire HTML of that component, including the govuk-form-group wrapper, label, hint, error message, and the input/textarea itself. This HTML is non-permeable to the parent component and cannot be altered, unless we specifically extend those macros with new parameters to do so.
The Character count component
In the case of the Character count, this limitation has led to a few 'hacky' workarounds:
The inability to alter what's inside govukTextarea means that the 'info' hint below the textarea has to live outside of govuk-form-group, which has a large bottom margin that pushes the 'info' hint away from the textarea.
Custom CSS is then used to remove the bottom margin from govuk-form-group and instead apply an identical margin to govuk-character-count, a new element that wraps around the entire form group.
JavaScript is used to move the 'info' hint up to within the govuk-form-group where it's available to do so.
None of these are ideal, and all of them constitute extra code loaded by browsers that doesn't need to exist.
The Password input component
In developing the Password input component, we also found ourselves dealing with similar frustrations when it came to applying a wrapping element around only the input of govukInput and placing a button within that new wrapper.
There was also some trepidation about how govukInput could conflict with or compromise the effectiveness of the Password input; such as how input's prefix/suffix HTML may interfere with the Password input's own wrapper and button, or that some of the accessibility and security features of the password input could be overridden through modification of the inherited input.
These issues led us to initially decide to not use the macro — #4512.
Exploring solutions
Reckoning that this may be a recurring problem in future, we are considering a few ways to modify how these components are composed to avoid these problems.
One option being explored is to add 'injection' points within the components where arbitrary HTML can be added, similar to how we use Nunjucks blocks in templating — #4567.
The idea behind this spike
This spike seeks to minimise the use of directly importing and extending govukInput and govukTextarea in favour of two 'internal' macros.
These macros only include the base input and textarea elements, and do not include the other parts (group wrapper, label, etc.) with the expectation that:
Components become more 'standalone', implementing their own label/hint/error code rather than reusing them from the Text input and Textarea components.
In doing so, we can 'liberalise' the component HTML and allow it to more readily diverge where it needs to, such as for Character count and Password input.
These macros define the system-wide defaults and are extended with whatever parameter options become necessary, instead of having those options unnecessarily bloat the govukInput and govukTextarea components.
Components can choose to only expose the relevant parameters of the underlying macros.
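As a concrete illustration, here is a hypothetical sketch of what such an element-only macro might look like. The macro name matches the provisional govukInputElement from this spike, but the parameter set shown is my assumption:

```njk
{# Hypothetical sketch of an element-only internal macro #}
{% macro govukInputElement(params) %}
<input
  class="govuk-input{% if params.classes %} {{ params.classes }}{% endif %}"
  id="{{ params.id }}"
  name="{{ params.name }}"
  type="{{ params.type | default('text') }}"
  {%- if params.value %} value="{{ params.value }}"{% endif %}>
{% endmacro %}
```

There is no form group wrapper, label, hint or error message here: the importing component supplies those itself and can freely wrap or extend the element.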
Changes
Creates govukInputElement and govukTextareaElement Nunjucks macros, each containing the base input and textarea elements.
Additionally creates a _govukTextCommonAttributes macro for attributes that are common to both input and textarea.
Reworks Character Count, Date input, (Text) Input, Password input and Textarea to use the new macros.
Additionally reworks Character count to:
move the wrapping element to within the form group
move the 'info' hint to within the form group
remove the CSS styles to adjust the bottom margins
remove the JavaScript to programmatically move the 'info' hint
Thoughts
I am not at all tied to the current names for the internal macros. They can probably be a lot clearer.
Not much care put to how the HTML is output either. This can definitely be improved too.
No tests as it's a spike.
@querkmachine I'll give this a proper look tomorrow 👀
Whilst I remember, this previous work in https://github.com/alphagov/govuk-frontend/pull/1281 could be an alternative
It abstracted away the form group wrapper, label, hint, error message into govukFormGroup()
That PR could be made to make govukFormGroup() optional, letting every form field component render "wrapper-less". Similar to how Checkboxes have optional fieldsets
Ah I've still got a separate input macro for the Nunjucks indenting work
I've moved the Text input changes into a separate PR where an _inputElement() macro is used:
https://github.com/alphagov/govuk-frontend/pull/4707
|
GITHUB_ARCHIVE
|
New Farming Rewards (LP Incentives)
This farming rewards is being used since Crescent V3.
- The actual name of this module is lpfarm, but for easier reading, we call it Farming V2 in this document.
Comparison of Farming V1 and V2
Crescent’s farming module provides farming functionality that keeps track of staking and distributes farming rewards to farmers. Each farmer in a pool that has farming plan(s) can receive the farming rewards.
One main difference between Farming V1 and Farming V2 is the period of the rewards distribution.
In Farming V1, the farming rewards are distributed at 00:00 UTC every day to the farmers that have farmed for the entire previous day. In this case, if a farmer changes the pool they provide liquidity for, the farmer would lose the farming rewards for a day. This makes farmers hesitant to change pools.
In Farming V2, the farming rewards are distributed at every block. Thanks to this feature, farmers can dynamically choose the pool for their liquidity to get more rewards.
Another main difference is the farming plan for a given pair.
In Farming V1, the farming plan is given to a pool. Therefore, each pool can have a separate fixed amount of rewards regardless of its effective liquidity. For example, a ranged pool with a price range of [0.99, 1.01] has an amplification factor of 200, i.e., the effective liquidity of the ranged pool is 200 times that of a basic pool with the same TVL. Even in this case, if the farming rewards to the pools are the same, then the farmers of the ranged pool and the basic pool will get the same amount of rewards for the same TVL. This means there is not much motivation to provide liquidity to the ranged pool.
In Farming V2, the farming plan can be given to a pair, which has multiple pools. For example, the bCRE/CRE pair has multiple pools, including a basic pool and ranged pools with different ranges. In this case, a farming plan can be given to the bCRE/CRE pair, not a specific pool. The farming rewards are then automatically distributed according to the EFFECTIVE liquidity of each pool, where the effective liquidity means the actual TVL multiplied by the amplification factor of the ranged pool. The effective liquidity indicates how much liquidity at a tick of the orderbook is provided by the pool. With this feature, a farmer providing liquidity to a ranged pool will get more rewards. This results in less slippage from token swaps on the Crescent orderbook.
- The p-th pool of a given pair having $\mathrm{TotalReward}$ gets the rewards for the farmers of the pool as
$$\mathrm{Reward}_p = \mathrm{TotalReward} \times \frac{E_p}{\sum_{q} E_q},$$
where $E_p$ is the effective liquidity of the p-th pool.
- The i-th farmer of the p-th pool will get the rewards as
$$\mathrm{Reward}_{p,i} = \mathrm{Reward}_p \times \frac{x_{p,i}}{X_p},$$
where $X_p$ is the total farming amount of the p-th pool, including the farming amount of the Liquid Farming module to the p-th pool, and $x_{p,i}$ is the i-th farmer's farming amount.
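The pair-level distribution described above can be sketched in a few lines of Python. This is an illustration of the mechanism (rewards split across pools by effective liquidity, then across farmers by stake share), not Crescent's actual implementation; the data layout is invented:

```python
def distribute(total_reward, pools):
    """pools: list of dicts with 'tvl', 'amp' (amplification factor),
    and 'farmers' mapping farmer name -> farming amount."""
    # Effective liquidity of each pool: TVL times amplification factor.
    eff = [p["tvl"] * p["amp"] for p in pools]
    total_eff = sum(eff)
    payouts = {}
    for pool, e in zip(pools, eff):
        # Pool's share of the pair's rewards, by effective liquidity.
        pool_reward = total_reward * e / total_eff
        staked = sum(pool["farmers"].values())
        # Each farmer's share of the pool's rewards, by farming amount.
        for name, amount in pool["farmers"].items():
            payouts[name] = payouts.get(name, 0) + pool_reward * amount / staked
    return payouts
```

With equal TVL, a pool with a higher amplification factor draws proportionally more of the pair's rewards, which is exactly the incentive for ranged pools discussed above.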
|
OPCFW_CODE
|
View Full Version : Yet another XP Question!
11-05-2002, 10:29 AM
Every time I turn on my computer, when Windows loads it automatically opens the C:\WINDOWS\SYSTEM32 folder on the desktop....why the hell is it doing that? lol...Thanks in advance
11-05-2002, 11:14 AM
RayRay, I apologize in advance for this Hijack.
Another weird thing my XP is doing (although overall I'm thrilled with it and I don't see why a lot of people don't like it, I think it rocks!)
Every once in a while, I'll just be working along, and this weird little window pops up with Spam in it. It says something like Windows Messaging Service or something, and it's always just spam stuff. I think it's tied to Windows Messenger, that AIM-kinda thing, except that I turned it off, it's not loaded, no icon on the taskbar, nothing. Is it one of those things I can't remove all the way? I hate instant messaging and never use it. How do I stop this annoying spam?
Anyone? Anyone? Bueller? Bueller?
11-05-2002, 11:29 AM
/\/\etalhea|), take a look at this link:
Tech TV explains messenger spam (http://www.techtv.com/screensavers/answerstips/story/0,24330,3374542,00.html)
I don't know the solution to the OP's post. Maybe check your startup items, or go to run and type in "msconfig" (w/o the quotes) and see if anything in your start up is causing that folder to open when you start up.
ray, check your Start Up folder, along with your registry (go to start/run and type regedit to enter the registry then check HK Local Machine/Software/Microsoft/Windows/Current Version/ then check Run, Run Once, and Run Services) and the win.ini file to see where it's loading from.
11-05-2002, 01:33 PM
Dude, thanks a ton for the link. That explains a lot!
11-06-2002, 02:39 AM
Do a google for "xsetup". It takes a good OS (XP) and makes it GREAT! It lets you turn off all the annoying crapware MS feels you need, including messenger, the stupid balloon tips, etc. Best of all, xsetup is freeware.
|
OPCFW_CODE
|
Archive for April, 2011
18th April, 2 Comments
By John Watson
After a generally good start in the thirty years since Holmes first appearance on television, the Seventies turned out to be a decade that is probably best forgotten in relation to Holmes on the small screen.
Following the repeats of the series with Peter Cushing as Holmes on BBC2 in 1970, the next appearance of Holmes on the small screen was in the USA, in a 90-minute adaptation of The Hound of the Baskervilles on ABC-TV on the 12th of February 1972. Stewart Granger played an unconvincing Holmes, with Bernard Fox as me.
It was not well received by the critics. Would you be convinced by a Holmes wearing a string tie and living in a Baker Street on top of a hill overlooking St Paul’s Cathedral? Perhaps the most interesting cast member was William Shatner as Stapleton, three years after his role as Captain James T Kirk in the original Star Trek series. There will be a more direct link with Star Trek, as we shall see later.
But worse was to come. Back in Britain, the BBC’s Comedy Playhouse was a series of one-off 30-minute comedies, the idea being to see which ones audiences liked enough to be given their own series. John Cleese had fallen out with the rest of the Monty Python team and was looking for “something completely different”.
So, on the 18th of January 1973, the same day as the last of the current series of Monty Python was being shown on BBC2, Cleese appeared as Holmes with William Rushton as me (all is forgiven, Nigel Bruce) in “The Strange Case of the Dead Solicitors”.
A more serious attempt followed on the BBC late the following year, though this should really be excluded from “Holmes on Television” as he wasn’t in it! “Dr Watson and The Darkwater Hall Mystery: A Singular Adventure”, as its title suggests, leaves everything up to me (played by Edward Fox). Its 73 minutes play like a foretaste of the recent BBC Sherlock series, with many canonical references (including STUD, BLAC, MUSG and SPEC). I appear to have some fun with a Spanish maid, but as the “action” appears to pre-date SIGN, I had not met my future wife at that point.
Nearly three years pass and then, in 1976, “Holmes in New York” appears on NBC-TV with Holmes being played by James Bond, I mean, Roger Moore with John Steed (Patrick Macnee) of the Avengers impersonating Nigel Bruce impersonating me. Nevertheless, the plot has some points of interest. Just what is that statuette on Moriarty’s desk and what might it have to do with the person playing Moriarty in this two-hour (too) long television movie?
The following year, in the series “Classics Dark and Dangerous”, came a 30-minute dramatisation of Silver Blaze with Christopher Plummer as Holmes and Thorley Walters as me. This was one of a series of six adaptations of horror and mystery stories. It was broadcast on ITV in Great Britain on the 27th of November 1977. Christopher Plummer is a cousin of Nigel Bruce; he portrayed Holmes in a dry, distant manner and chose to stress Holmes’s use of cocaine by wearing a pale foundation. Thorley Walters plays me as “an overgrown schoolboy”, according to one review.
This was preceded on the 18th of September by John Cleese, this time with Arthur Lowe (of “Dad’s Army” fame) as me, in “The Strange Case of the End of Civilisation As We Know It”, another parody on ITV lasting 54 minutes. Best forgotten is the general view.
Then in 1978, Peter Cook and Dudley Moore appeared as Holmes and me in “The Hound of the Baskervilles . . . Yet another adventure of Sherlock Holmes and Dr Watson by A Conan Doyle”. This 84 minute parody is also best forgotten!
The BBC TV series “Crime Writers” covered “The Great Detective” later in 1978, with Jeremy Clyde as Holmes and Michael Cochrane as me.
The Seventies was somewhat redeemed at the very end with a series of 24 pastiches, essentially a reworking of some of the same scripts that were used in the 1954-55 Sherlock Holmes series starring Ronald Howard. This time Geoffrey Whitehead played Holmes, with Donald Pickering as me.
Generally speaking, none of what occurred in the Seventies has made it to DVD, which may say something about its quality, so much of what I have written is based on what others have told me about these programmes.
The exception is the last series with Geoffrey Whitehead as Holmes. Some of these episodes have appeared on YouTube and a good search engine should help you locate them.
If anyone can advise on the availability of any of the programmes on video, here or in the USA, I will be happy to pass on the details.
So, the Seventies came to a close with little to recommend it to Holmes fans. But the Eighties would eventually bring us a fresh approach to my original stories and a Holmes who, on the television screen, would rival, and some say surpass, Basil Rathbone’s portrayal on the cinema screen.
Posted in Television
|
OPCFW_CODE
|
As a brand new Audacity user, I am impressed by the knowledge and dedication displayed by the program developers - and users - and hope there is a fairly non-technical answer to my problem. My large cache of vinyl records is begging for digitization, and Audacity seemed the solution. I have been able to make my first recording with Audacity (from a newly purchased Pro-Ject III USB turntable), then produced an audio CD with Nero. However, the CD will play on my computer, but not on either of my in-home stereos, where one produces silence and t’other gives a ‘no disk’ message and ejects the CD. I will be grateful for any suggestions, as this roadblock is holding up the launch of my digitization project. I DID mark ‘audio CD’ whenever prompted to do so, and the disc info shows that it appears to be an audio CD.
Good, that is correct - not doing so is such a common mistake.
It is also important that you use a CDR (Recordable CD) and not a CDRW (re-writable CD).
Sometimes you will get write failures, or poor quality disks for no apparent reason. It is generally recommended to use branded disks, although I often use cheap unbranded ones and only have failures of about 1-2%. Occasional failures can also happen with branded disks, and you can be unlucky and get a whole pack of bad disks, though this is rare these days.
Some versions of Nero provide the option to “verify” the disk after it has been burned - you should select that option to allow Nero to check the data that it has “burned” onto the disk.
If you have the full version of Nero, it includes tools to check the quality of a disk. Poor quality disks may not be recognised by CD players.
Sometimes you will get CD players that will just not play home made CD’s, but since you have tested on 2 CD players, that does not seem to be a likely cause.
The first thing to check is that you are using CDR’s and not CDRW’s.
Thanks for the feedback. Yes, I am using CD-R discs, as I had read that the CD-RW’s are not accepted by some equipment. The discs which I purchased for this project are the Gold Archive quality MAM-A professional grade CD recordable disc (certified for use in ALL high speed CD writers).
Will try other brands (as soon as I can brave the storm & get out), and will use the ‘verify’ option on Nero - really don’t remember seeing it when I
recorded this one, but am just at the beginning of this endeavor and it’s all on the learning curve. Any further suggestions appreciated.
You may want to consider reducing the burn speed - IIRC you can do that in Nero, but it’s been a while since I used it. This advice has been dispensed several times in other postings on the forum, so I’m guessing it may work for you. And do post back if you get it all working, as this helps other readers of the forum.
Even if it takes longer to burn, just go and make yerself a cuppa while Nero does its work for you …
I would also recommend in addition to creating the CDs that you store as backup at least one set of the WAV files that you produce (I have two separate copies on two separate USB disks - separate folder per album). You will do a lot of work on this project - and you don’t really want to lose it, do you?
BTW it would be better in the future if you posted in the appropriate section of the forum - it’s divided up by users of the 1.2 stable releases and the 1.3 beta release - and then subdivided by operating system. That way you are more likely to get a response - this subsection is a far-flung star at the end of a long spiral arm of the Audacity galaxy - and not many folks pass by this section
This does sometimes improve the burn quality, though not with all CD burners and not with all CD’s.
The full version of Nero also has an option to test the writing speed before burning - this will sometimes help to reduce the number of “coasters” (beer mats).
Compatibility is much less of a problem with CD’s these days than it used to be, but you will sometimes still find that a particular CD writer does not “like” a particular type of CD, even if both the writer and the CD are of good reputable quality.
Go buy a short stack of brand-name disks that don’t say “High Quality” on them, spelled in any way: HiQ disks, HiKwality, etc. I’m a fan of Sony DVD-R at home.
Do not let the burner decide what to do. I created several beer mats before I decided that screaming 48X, while entertaining to watch and listen to, just wasn’t producing good disks. I found all the burn errors vanished at forced 16X.
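For anyone doing this from a command line instead of Nero, the same forced-speed idea applies. A sketch using cdrecord/wodim, assuming /dev/sr0 is your burner and the tracks are already exported from Audacity as 44.1 kHz, 16-bit stereo WAVs (a hardware-dependent command, so adjust the device and speed to your setup):

```shell
# burn an audio CD in one session at a forced 16x, with verbose progress
wodim -v dev=/dev/sr0 speed=16 -dao -audio track01.wav track02.wav
```

Disc-at-once (-dao) avoids the two-second gaps that track-at-once writing inserts between tracks, which matters for continuous album sides.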
We found it enormously valuable to own a complete piece of garbage CD (and DVD) player. This is the Quality Control player. If it plays on this thing, it will play anywhere. We have a very old Panasonic DVD player at work that does this job. It won’t play anything that’s not straight up and legal. At home, it’s a very old Panasonic portable that is very particular about what it will play.
Scanning is very unforgiving of bad disks. If you scan forward on your disks and the player gets lost, then you need to change something.
|
OPCFW_CODE
|
If I had unlimited resources to build whatever I wanted…
I am pursuing the Center for Bits and Atoms because of my interest in how we embed intelligence into the things that we make. This breaks into two areas: 1. How do we design things that can improve themselves, as well as enhance our ability to make smarter devices? 2. How do we fabricate things in a way that they can evolve themselves?
The proposals below highlight my research interests that best align with the Center for Bits and Atoms lab and the projects that I would pursue to further my inquiry in these areas.
‘Truly Embedded’ Hardware
OBJECTIVE // Build an integrated CNC fabrication device that allows for the construction of circuitry that is embedded directly in the mechanisms and enclosure that make up a device. This would require the construction of a multi-functional fabrication tool that integrates a fused deposition modeler extruder tip, pick and place suction nozzle and heat stake tip. Additionally, this will require the development of a software tool to assist in the design and fabrication of the 3D circuitry.
APPROACH // I would loosely take the following steps to evaluate feasibility and evolve the potential technology:
Assess the market of various conductive filaments for physical and electrical properties. Grind, blend and extrude a custom carbon-impregnated filament if necessary to achieve the desired properties.
Characterize the process of embedding various electrical components into conductive filament. Evaluate alternative means by which the components and wiring can be fused/attached electrically.
Prototype and explore various structural and electrical combinations: through-hole mushroom caps, SMT components “reflowed” with localized heating techniques.
Design and construct tabletop CNC fabricator utilizing FDM, pick and place nozzles and heat stake for interfacing with through hole components.
Develop a design methodology based on experience prototyping that can be converted into a workflow for a computer tool.
Build computer tool to allow for the routing of traces in plastic and the fabrication with a multitool device.
FUTURE APPLICATION // As rapid additive manufacturing technique become viable as economical and accessible means for producing custom components for consumers at a large scale, there is an opportunity to maximize the functionality of structural elements by embedding electronics and processing capacity into the plastic. In lieu of constraining design to accommodate the requirements of producing a PCB and fitting it inside an enclosure, the electronics will contribute to the structure of the device and be embedded within the wall thickness.
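Step 1 of the approach above (assessing conductive filaments for electrical properties) comes down to a quick resistance budget: a printed trace of resistivity ρ, length L and cross-section A has R = ρL/A. A minimal sketch of that arithmetic, noting that the resistivity value below is a hypothetical placeholder (commercial conductive PLA varies over several orders of magnitude):

```python
def trace_resistance(resistivity_ohm_m, length_m, width_m, thickness_m):
    """Resistance of a rectangular printed trace: R = rho * L / A."""
    area_m2 = width_m * thickness_m
    return resistivity_ohm_m * length_m / area_m2

# Hypothetical conductive-PLA resistivity of 0.1 ohm*m,
# for a 50 mm trace, 2 mm wide, 0.4 mm thick:
r = trace_resistance(0.1, 0.05, 0.002, 0.0004)  # about 6.25 kilohms
```

A result in the kilohm range per trace is why step 1 matters: it rules out power routing and constrains which components can tolerate embedded wiring.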
Physical Voxel Interpreter
OBJECTIVE // Build a Lego type of brick with various resistive properties for each stud that allow you to use measurements of the stacked resistance to interpret what the geometry looks like computationally. This would allow you to physically make a stack of bricks on top of each other creating a 3D model that would then be rendered in real time on a computer. This method would allow me to explore how we take things from physical building blocks and process them into geometric data.
APPROACH // This project consists of two components: an individual, universal brick and a mat that takes measurements at each stud to determine the stacked resistance. In order to produce the number of bricks that I would be interested in making, I would design a custom injection molding tool and a tabletop molding machine to produce the base substrate and a subsequent overmold of conductive elastomeric material. Additionally, I would make a base plate that lets me sample the resistance between any two studs in order to interpret the number of bricks stacked above that position. Using this data, we can construct a geometric model rendering the physical build.
FUTURE APPLICATION // The long term goal of this work would be to allow you to construct physical things in a manner that allows you to read and understand the geometry by interfacing with one element of it. Similar to the way that you can read the DNA of a single cell to understand the composition of the entire being, you could understand the geometry of the entire object by reading a single block. For example, you could build a bridge with these blocks with complex internal structures that are hard to assess their condition over time, but you could monitor and identify cracks in the entire bridge by reading the data from a single node.
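The interpretation step of the voxel project is, at its simplest, series-resistance arithmetic: if each brick adds roughly one brick's worth of resistance per stud, the measured resistance at a stud gives the stack height as round(R_measured / R_brick). A toy sketch of turning a grid of stud measurements into voxels (all resistance values hypothetical):

```python
def stack_height(measured_ohms, brick_ohms):
    """Bricks stacked above a stud, assuming each adds brick_ohms in series."""
    return round(measured_ohms / brick_ohms)

def heights_to_voxels(height_grid):
    """Turn a 2D grid of stack heights into (x, y, z) voxel coordinates."""
    return [(x, y, z)
            for y, row in enumerate(height_grid)
            for x, h in enumerate(row)
            for z in range(h)]

# 2x2 base plate, hypothetical 100-ohm bricks; measured ohms per stud:
grid = [[stack_height(r, 100) for r in row]
        for row in [[210, 95], [0, 310]]]
voxels = heights_to_voxels(grid)  # ready to hand to a real-time renderer
```

Real elastomeric contacts would add noisy contact resistance, so the rounding step is doing the error correction here; tighter tolerances on the overmold directly increase the readable stack height.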
Morphable Node Building Block
OBJECTIVE // To construct an electromechanical building block that snaps together and communicates with each new node that is added. Through sensors in each node, the blocks can understand their particular geometry, and through the mesh network that is created, they can communicate the accumulated geometry of the entire network to an exit point.
APPROACH // Each node would consist of a base element housing the microcontroller that interprets the local geometry and facilitates communication with the other nodes. There are various legs with a single DoF (6 legs shown here), with markings that allow an encoder to be appended to each joint and the length of each leg determined. Additionally, each leg is constructed to contain a wire that allows both RX/TX and power sharing with the adjacent node. The resulting mesh network shares each node’s geometry and its orientation with respect to its neighbors, so the complete geometry of the collective structure can be acquired by requesting the information from a single node. With this data, a holistic model of the entire structure can be reconstructed digitally.
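Computationally, the "query the whole structure from one node" idea is a graph traversal: each node reports its local leg readings plus its neighbor links, and a breadth-first walk from any entry node accumulates the full geometry. A minimal sketch with hypothetical node records (real nodes would report encoder angles and orientations as well as lengths):

```python
from collections import deque

def collect_geometry(nodes, entry):
    """BFS from one entry node, gathering each reachable node's local geometry.

    nodes maps node id -> {"legs": [lengths...], "neighbors": [ids...]}.
    Returns {node_id: leg_lengths}.
    """
    seen, queue, geometry = {entry}, deque([entry]), {}
    while queue:
        nid = queue.popleft()
        geometry[nid] = nodes[nid]["legs"]
        for nb in nodes[nid]["neighbors"]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return geometry

# Three snapped-together nodes; leg lengths are hypothetical encoder readings.
mesh = {
    "a": {"legs": [10, 10, 12], "neighbors": ["b"]},
    "b": {"legs": [10, 11, 10], "neighbors": ["a", "c"]},
    "c": {"legs": [9, 10, 10], "neighbors": ["b"]},
}
geom = collect_geometry(mesh, "a")  # every node is reachable from "a"
```

On the actual hardware this traversal would be a flood-fill of request packets over the leg wires rather than an in-memory walk, but the data returned is the same.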
Very Large Format CNC Lawn Mower
OBJECTIVE // Design and build a lawnmower that can be programmed and controlled remotely to allow you to precisely cut patterns and vectors into large agricultural plots.
APPROACH // The big question here is whether this could be built as an open-loop system, i.e., without GPS location feedback. The design and construction of the base cutting hardware would be straightforward: a four-wheeled robot with a z-axis spindle that holds the blade and determines cut height. There would need to be a preprocessor that converts images to a series of commands (potentially G-code); then, once I have hardware that can interpret this and translate it to motion, I can assess whether tracking the rotation of the wheels has small enough error to render vectors/bitmaps sufficiently in the grass. After building the closed-loop system, I would either use an agricultural GPS to provide position information, or
APPLICATION // Field and Crop marking. Autonomously marking large portions of land for aerial interpretation.
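The open-loop question above boils down to differential-drive dead reckoning: integrate wheel rotations into a pose and watch how the error grows over a plot-sized path. The standard odometry update, sketched in Python (the track width is a placeholder value, and wheel travel would really come from encoder ticks times wheel circumference):

```python
import math

def odometry_step(x, y, heading, d_left, d_right, track_width):
    """Update a differential-drive pose from wheel travel distances (meters).

    Uses the midpoint heading for the translation, a common
    second-order-accurate approximation for short steps.
    """
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / track_width
    x += d_center * math.cos(heading + d_theta / 2.0)
    y += d_center * math.sin(heading + d_theta / 2.0)
    return x, y, heading + d_theta

# Drive straight ahead 1 m with a hypothetical 0.5 m track width:
pose = odometry_step(0.0, 0.0, 0.0, d_left=1.0, d_right=1.0, track_width=0.5)
```

The weakness for a mower is that wheel slip on grass makes d_left/d_right systematically wrong, and heading error integrates into position error linearly with distance, which is exactly why the GPS fallback is in the plan.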
|
OPCFW_CODE
|
Copy overlays are a fun and easy solution for creating a more personalized, unique design. I have been using them for years, but now I have the tools to use them in my own projects.
Copy overlays are just a simple bit of CSS that adds a bit of a border to a page. The idea is that when someone visits a page you want a bit of a border, but it’s not your own. The border then appears on all of the other pages that share the same domain name.
I am working on a few projects that use copy overlays to create unique, personalized, and cool designs. If you want to know more about copy overlays, check out the link below.
At least I hope it’s not my own website, because I always hate it when someone else uses one of my copy overlays on their website.
It’s a common mistake to make in the design of a website. It’s actually a pretty ugly design, and I’m not really sure what I should do with it. It doesn’t make sense to me because it’s the only place to put a copy on a page. I think the only solution would be an overlay instead of a page. This is where the link-building is done correctly. It will be different depending on the browser and the type of browser you’re using.
For those that don’t know, a copy overlay is a link that overlays one page onto another. What this means is that a link on one page will be a link on another. This is a very common practice for most sites and this makes it really easy to make links. The only trouble is that copy overlays have a number of problems.
The first problem is that they can be created from within the page. For instance, if you want to make a link on a page that says “this page” and “this other page”, then you’ll need to include the URL in the link. A “www.google.com” link is very different from a link that says “this page, www.google.com”.
There are a number of ways to create a copy overlay. One of the easiest is to add a URL in the text of the page, then remove the URL from the first link when it is clicked. Another is to have two anchors on the page that have the same text, but one is hidden when clicked, and the other is visible.
To make a link on a page, you need to start with a URL. You can create a URL for a page and then set the image to that URL. A URL can be a little more difficult, because the text on the page has to match exactly. If you have a page with two pages that contain links to each other, then you could use a link to make a link from one page to the other.
|
OPCFW_CODE
|
Configuring MVC for SimpleMembership
I have an ASP.NET MVC application with a number of membership-related pages generated from the project templates.
When I attempt to access one of those pages, I get the following error:
The ASP.NET Simple Membership database could not be initialized. For more information, please see http://go.microsoft.com/fwlink/?LinkId=256588.
After some time spent researching, I determined I was missing the connection string. I have an existing connection string named FreeWebFilesEntities and I created a new connection string named DefaultConnection, and gave it the same value.
<connectionStrings>
<add name="FreeWebFilesEntities" connectionString="metadata=res://*/FreeWebFilesRepository.csdl|res://*/FreeWebFilesRepository.ssdl|res://*/FreeWebFilesRepository.msl;provider=System.Data.SqlClient;provider connection string="Data Source=.;Initial Catalog=FreeWebFiles;Integrated Security=True;MultipleActiveResultSets=True"" providerName="System.Data.EntityClient" />
<add name="DefaultConnection" connectionString="metadata=res://*/FreeWebFilesRepository.csdl|res://*/FreeWebFilesRepository.ssdl|res://*/FreeWebFilesRepository.msl;provider=System.Data.SqlClient;provider connection string="Data Source=.;Initial Catalog=FreeWebFiles;Integrated Security=True;MultipleActiveResultSets=True"" providerName="System.Data.EntityClient" />
</connectionStrings>
But it still doesn't like this. I found some resources that suggested I'm using an EF connection string (and I am). I tried removing the metadata section of it but, no matter what I do, it throws an exception.
Since it appears Microsoft decided not to document this process very carefully (if at all), has anyone figured this out? How can I get the SimpleMembership pages to work?
Well, for starters, you are using an EntityClient connection string, which is incompatible with SimpleMembership. SimpleMembership requires a SqlClient-provider connection string (i.e., it does not have the metadata section, and the provider type is SqlClient).
Second, the connection string that is used is set in your InitializeSimpleMembershipAttribute.cs class, in particular the line that calls WebSecurity.InitializeDatabaseConnection. You can change this to whatever you want. DefaultConnection is the default connection string supplied in the machine.config, and does not by default show up in your web.config.
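To make that concrete, here is a sketch of the two pieces together, reusing the database values from the question. The plain SqlClient connection string has no metadata= section:

```xml
<add name="DefaultConnection"
     connectionString="Data Source=.;Initial Catalog=FreeWebFiles;Integrated Security=True;MultipleActiveResultSets=True"
     providerName="System.Data.SqlClient" />
```

and the initialization call in InitializeSimpleMembershipAttribute.cs points at that name (the UserProfile table and column names below are the MVC4 project-template defaults, so treat them as assumptions if your model differs):

```csharp
WebSecurity.InitializeDatabaseConnection("DefaultConnection",
    "UserProfile", "UserId", "UserName", autoCreateTables: true);
```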
Thanks for responding. But the connection strings have always been a bit of a mystery to me. I tried editing the connection string in several ways, as I described. But I still don't get what it needs in order to work. (Also, I know that the DefaultConnection connection string doesn't show up automatically. That's why I added it.)
@JonathanWood - you misunderstand. Web.config files are hierarchical and machine.config is at the very bottom of that, and it contains the DefaultConnection. You only need to override this if you want to name your connectionstring "DefaultConnection" but do not want to use the DefaultConnection as defined in machine.config. If you don't want to name your connectionstring DefaultConnection, then you need to change the connection string name in the InitializeDatabaseConnection call in InitializeSimpleMembershipAttribute.cs.
@JonathanWood - EntityClient connection strings include the metadata that points to your .edmx file; SqlClient connection strings do not include that information. SimpleMembershipProvider does not understand EntityClient connection strings, so you cannot use the same connection string as your EntityFramework classes with SimpleMembership. If this is a "mystery" to you, you should do some research, because this is an important distinction to understand and not something you can continue to be ignorant of.
Thanks, but the name of the connection string is not where I'm having trouble. With each edit I make, I get a different exception. But I seem to be making progress. I would like to learn about this but there seems to be a lack of good information. And the information on using SimpleMembership is downright pathetic.
Do you have these sections in your web.config file?
<roleManager enabled="true" defaultProvider="simple">
<providers>
<clear/>
<add name="simple" type="WebMatrix.WebData.SimpleRoleProvider, WebMatrix.WebData"/>
</providers>
</roleManager>
<membership defaultProvider="simple">
<providers>
<clear/>
<add name="simple" type="WebMatrix.WebData.SimpleMembershipProvider, WebMatrix.WebData"/>
</providers>
</membership>
SimpleMembership does not use any config file entries for membership
Then you're saying that those sections should be pulled out of a working, code-first, migrating MVC project that uses SimpleMembership? Although, you're right about the provider name needing to be System.Data.SqlClient.
I don't know what you mean by a "migrating" project, but SimpleMembership is configured in code via the WebSecurity.InitializeDatabaseConnection method, and does not use any config files for this. To prove this to yourself, generate a new Internet MVC4 application, you will notice there are no membership entries in the config.
By 'migrating' I was referring to a dev project that uses DbMigration. I tip my hat to you on the fact that SimpleMembership is initialized with InitializeDatabaseConnection, which creates the sections in the web.config file.
To be clear, InitializeDatabaseConnection does not create anything in the web.config, but it does initialize the Membership.Provider dynamically in code, so the web.config entries are not needed (and in fact, are overridden by the InitializedDatabaseConnection method). You can read the source for the WebSecurity class here http://aspnetwebstack.codeplex.com/SourceControl/latest#src/WebMatrix.WebData/ConfigUtil.cs
Ok, you can stop pulling your hair out. I did as you suggested and created a new MVC test project, and checked that the Filters folder was created and the InitializeSimpleMembershipAttribute.cs file was there. There was no membership section in the web.config file. Built and ran it, registered a new user, logged out and closed it. There was still no membership section in the web.config file.
|
STACK_EXCHANGE
|
Junit 5 tag not working with Gradle 6.5.1
Hi All,
I am trying to learn JUnit 5, my Gradle version is Gradle 6.5.1
java version is openjdk version "11.0.7" 2020-04-14
OS ubuntu 18.04.4.
public class FruitCalculator {
public int addFruit(int fruit1, int fruit2) {
return fruit1 + fruit2;
}
public int subFruit(int fruit1, int fruit2) {
return fruit1 - fruit2;
}
}
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
public class FruitCalculatorTest {
@Test
@Tag("add")
void addFruitTestTag1() {
System.out.println("FruitCalculatorTest.addFruitTestTag1");
FruitCalculator fruitCalculator = new FruitCalculator();
assertEquals(2, fruitCalculator.addFruit(1, 1), "1 fruit + 1 fruit is 2 fruit");
}
@Test
@Tag("sub")
void subFruitTestTag1() {
System.out.println("FruitCalculatorTest.subFruitTestTag1");
FruitCalculator fruitCalculator = new FruitCalculator();
assertEquals(1, fruitCalculator.subFruit(2, 1), "2 fruit - 1 fruit is 1 fruit");
}
}
plugins {
id 'java'
}
group 'org.example'
version '1.0-SNAPSHOT'
repositories {
mavenCentral()
}
dependencies {
testImplementation('org.junit.jupiter:junit-jupiter:5.6.2')
testRuntimeOnly('org.junit.jupiter:junit-jupiter-engine:5.6.2')
}
test {
useJUnitPlatform {
includeTags 'add'
excludeTags 'sub'
}
}
1 -> The @Tag filtering is not working. Whether I run using cmd or IntelliJ, it runs all 4 test cases.
2 -> In cmd, gradle clean build test prints no output about whether the tests passed or failed.
~junit5$ gradle clean build test
BUILD SUCCESSFUL in 1s
5 actionable tasks: 5 executed
Even the System.out output is not printed.
If you run the tests using IntelliJ, the gradle test task configuration is irrelevant. You need to run the tests using gradle for the gradle test task configuration to be taken into account. You can use gradle from IntelliJ by selecting the test task in the gradle tools window and executing it, or by configuring IntelliJ to run the tests using Gradle instead of its own test runner (by opening "Preferences", then searching for "gradle", and then selecting "Gradle" in the select box "Run tests using:").
That's expected. The test task generates an HTML report (under build/reports/tests/test), which lists all the tests and their output. The gradle output says "BUILD SUCCESSFUL", so the tests passed. If a test had failed, then the build would have failed and you would have an error in the console signalling it, such as:
> Task :test FAILED
FruitCalculatorTest > addFruitTestTag1() FAILED
org.opentest4j.AssertionFailedError at FruitCalculatorTest.java:13
1 test completed, 1 failed
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':test'.
> There were failing tests. See the report at: file:///Users/jb/tmp/fruit/build/reports/tests/test/index.html
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
In Maven it is easy to exclude tests tagged "slow" on the command line with:
mvn clean install '-Dgroups=!slow'
What is the equivalent in Gradle? Currently I put it into the build.gradle.kts file, which is not so flexible (Kotlin, not Groovy):
tasks.test {
useJUnitPlatform {
// gradle-5.6.1 does not yet allow passing this as a parameter, so exclude it here
excludeTags("slow")
includeEngines("junit-jupiter")
includeEngines("junit-vintage")
}
}
tasks.register<Test>("slowTests") {
useJUnitPlatform {
includeTags("slow")
}
}
Thank you for your interest in Gradle!
Please ask such usage questions on discuss.gradle.org or stackoverflow.com, where you can get answers from the community of Gradle users.
You can pass any property you want from the CLI to your build scripts, and you can exclude specific tasks from running with the -x flag: ./gradlew check -x slowTests
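To get the Maven-style command-line flexibility the question asks for, one common pattern is to read a project property inside the test task; a sketch for build.gradle.kts, where the property name excludeTags is an arbitrary choice:

```kotlin
tasks.test {
    useJUnitPlatform {
        // invoke as: ./gradlew test -PexcludeTags=slow,flaky
        (project.findProperty("excludeTags") as String?)
            ?.split(",")
            ?.let { excludeTags(*it.toTypedArray()) }
    }
}
```

With no -PexcludeTags on the command line, findProperty returns null and no filter is applied, so the default `gradle test` still runs everything.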
|
GITHUB_ARCHIVE
|
Playing bass over a guitar solo
How does one create a walking bassline over a guitar solo? Certain song tabs tell you what key the solo is in at certain moments, so I can follow along there, but when I try to improvise over a solo, I end up making an out-of-tune droning bassline.
Playing bass over a guitar solo? Sounds like the band is upside-down...
@leftaroundabout I might have used the wrong terminology; "under" might be a better word. If you want an example -- the bass under the third solo of Pink Floyd's "Coming Back to Life". There's a nice bass line in there. I can't figure out how to recreate it.
Stick to stuff in one key at a time. Know the appropriate scale notes for that key. A walking bass, in simple terms, is four notes to a bar, moving up and down in a sort of scalar manner. Think about it: playing four notes in a bar in C, at least one, more likely two, of the notes you play consecutively (up or down) from a C scale will be a C, E and/or G. The best plan is to make sure one of these notes falls on the more important parts of the bar: beats 1 and 3.
Example - in C, for 2 bars - play C D E F G A B C. It works fine, because in the 1st bar C=1 and E=3, and in the 2nd bar G=1. If the next bar is G, you can play the first note as the B below C, and go B A G F. If the next bar is F, then you may play F E D C, ending on the top C previously played.
All this is very basic walking bass, and you can sometimes merely get away with walking up and down the scale, with occasional jumps. However, it is probably a nice start point. Give it a try. There's a lot more, like chromatics, extra jump notes to stop the timing becoming staid, etc. etc.
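The scalar walk described above is mechanical enough to write down: pick four stepwise scale notes per bar and land a chord tone (1, 3 or 5) on the strong beats. A toy sketch in Python, restricted to C major purely for illustration:

```python
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def chord_tones(root):
    """The 1, 3 and 5 of a triad built on root, within the C major scale."""
    i = C_MAJOR.index(root)
    return [C_MAJOR[i % 7], C_MAJOR[(i + 2) % 7], C_MAJOR[(i + 4) % 7]]

def walk_up(start, beats=4):
    """Four quarter notes walking stepwise up the scale from start."""
    i = C_MAJOR.index(start)
    return [C_MAJOR[(i + k) % 7] for k in range(beats)]

# The two bars from the answer above: C D E F | G A B C.
# Bar 1 puts chord tones of C on beats 1 and 3 (C and E),
# and bar 2 starts on another chord tone (G).
bars = walk_up("C") + walk_up("G")
```

Octave information is deliberately ignored here; on the instrument you would pick the register that keeps the line moving smoothly toward the next root.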
I really hate to ask too many questions, and I hope I'm not bothering you -- but a guitar solo in the key of A minor ranges over a bunch of notes, and despite it being in A minor, playing A throughout does not make a good bassline. How do I tell what note I should play based on the solo?
It may be in the key of Am, but it won't stay on Am all the way through. Go with the chords - probably Dm and E(m) feature too. It would be very unusual to stay on Am throughout. Walk up and down from the root note on beat 1, as suggested, and it'll be a start. Use the 1, 3 and 5 of each individual chord on the main beats (1 and 3) of each bar, again, as suggested above. Take my whole answer, and it will be several practice sessions before it all drops into place, assuming it is a walking bassline you're after.
It seems that you haven't figured out the key the guitar solo is in. The most important thing is to find what key you should be playing in; from there you should figure out the scales, the notes and the chords you could use in your bassline.
Knowing the theory behind the music you're trying to transcribe is crucial, since it makes your job much easier. Good luck!
I think I might be missing something here -- the bassline for the third solo of Pink Floyd's "Coming Back to Life" is in the key of A minor. During the solo, the bass goes from playing C, to F, to A, to G, to F, to A, to G, to F, etc... How are all of those related to A minor, and how do you know which is currently being played, to back it up and bring the notes "to life", so to speak?
EDIT: Didn't know enter submitted the comment.
Starting around 4:55 on the original album version? That progression is in C major, I hear the chords as I-IV-vi-V-IV (C-G-Am-G-F) twice and then vi-IV-vi-bVII-vi-V-I (Am-F-Am-Bb-Am-G-C). The bass is playing the root of each chord except the one little riff on the I. Listen carefully to the rhythm and articulation, I think they're really important to that line. Nice song!
|
STACK_EXCHANGE
|
I am a web designer with 8 years of experience in both design and development.
Custom applications, WordPress theme development, UI design, plug-in development and customization; I also have prior experienc…
I am an expert e-commerce website designer & developer. I have knowledge of many e-commerce builders like WordPress, OpenCart, Wix, Shopify, WooCommerce, Magento and PrestaShop. I am a certified expert in Shop…
Hi, I'm a Web Designer/Developer . My name is Suraj. Your project description sounds interesting to me and I do have skills & experience that are required to complete this project.
Relevant Skills and Experience
✅ I will Design your website professionally with 100% satisfaction and Money back guarantee and will show you samples according to your work.
✅ I have 8+ years' experience in web development; I can show you my previou…
To showcase and sell wristwatches online professionally,
we will design and develop an eye-catching and elegant e-commerce website for you.
Responsive: it will work perfectly on all devices,
with a modern look and functi…
My understanding of the project is that you want an e-commerce website designed & built.
I have extensive expertise with custom e-commerce development & can help you in this regard.
1. I will start developi…
I have read your requirement; I can design your website as per your requirement. I have a few questions for you that we can discuss in our chat. The website will be fully responsive for all devices, Mob…
Greetings!! We have a few queries; kindly ping us.
Thanks for giving us the opportunity to bid on the requirement. As per your requirement, you would like to develop an e-commerce store; developing e-commerce…
I am interested and can provide you an e-commerce website for wrist watches. I have over 9 years of experience in this field and can easily fulfil your requirements.
Please send me a private message to discus…
I believe in the quality of work. Without quality no payment. I can start right now.
I have sound skill in English and hope it will help us for a good communication.
I am a web developer and I have more than …
I recently created a watch website. It's in WordPress, but I can do that in e-commerce; it's really simple work. Or if you have any design, I can work on that. I am sending you my recent work on a watches website; please ha…
How are you?
I am really interested in your project.
I am a senior full stack developer.
So I am sure I can bring you the perfect result you want within a short time.
I'm ready to start your project immediately…
After reviewing your post, I am very interested in it due to my experience.
I have expertise in PHP, core PHP, CMS, HTML5, Bootstrap, WordPress and many other technologies. I have 8+ years of experience in de…
Hi dear, I have read your requirement. I am a website developer and designer, and I will complete your task with quality, on time. I hope you will like my work and give me good feedback, which is very important f…
My name is Dhirendra. I am a professional Web developer. I have total 6+ years of work experience in software development.
Actually, we are a team of software experts, and we have worked on many projects.
We would love to design and develop an awesome e-commerce website. I saw your requirements: you want an e-commerce website for wristwatches. Please get back to me for further discussion.
We are a highly skilled lit…
I am working with a company and we specialize in E-commerce development and other verticals.
Our work typically involves development of specialized e-commerce applications as per clients' needs, implemen…
>>>Similar work I have already done<<<
As per my understanding, you need a website, and I can do this job very well.
I am glad to say that I have developed more than 150 websites using WordPress.
The website you're looking for is totally doable! However, I would need a few more details. Let's talk over chat or a call (whatever suits you) to discuss this. I've been creating successful websites for the past decade…
$80 USD in 2 days
|
OPCFW_CODE
|
One cannot assume that an aphorism is a statement promoting a tall tale with extraordinary events. Rather, it is a witty, truthful statement that can be used in or out of context. Its extensive historical background shows how past writers have used aphorisms. Today, aphorisms are used boundlessly in many areas, such as the entertainment industry and politics. As aphorisms carry whimsical truths, they are limited to carrying truthful insights. An aphorism must catch the audience with awe, and may even express despair as something funny.
Aphorisms have two definitions in the American Heritage College Dictionary. First, it defines an aphorism as “a terse statement of a truth or opinion; an adage.” Second, “a brief statement of a principle” (Aphorism). However, other references, authors, and historians would refer to an aphorism as a maxim, but most have concluded that maxim is better used as a synonym (J.). The word aphorism is a noun, pronounced a-fə-ri-zəm. The word passed through three different languages: first the French and Old French aphorisme; then the Late Latin aphorismus, itself derived from the Greek aphorismos, from aphorizein, meaning to delimit, define (Mifflin). This kind of statement, despite its complex definition, is a key component of the writing world. It promotes an active voice, stating truths and facts, often with a surprising twist, all in one brief statement. Famous writers have used the aphoristic technique to capture the attention of an audience.
Aphorisms have had a very long history. In brief, their origin goes back as early as the Greek era. Although the word aphorism was never used in context in the early years, ea...
... middle of paper ...
...Heritage College Dictionary. 4th ed. 2002. Print.
"Inaugural address of John F. Kennedy: Friday, January 20, 1961." Current 510. (2009): 19+. Gale Opposing Viewpoints In Context. Web. 21 Feb 2011.
J., John. The Oxford book of aphorisms. Oxford University Press, USA, 1983. Print.
Mifflin, Houghton. The American Heritage college dictionary. Houghton Mifflin Harcourt, 2002. Print.
Nordquist, Richard. "aphorism." About.com. The New York Times Company, 2011. Web. 21 Feb 2011.
"paradox, n. and adj.". OED Online. November 2010. Oxford University Press. 3 March 2011 .
Powskey, Etta. Personal interview by Rochelle Kennedy. 15 Feb 2011.
Wolf, Manfred. "The Aphorism." ETC: A Review of General Semantics 51.4 (1994): 432-439. Web. 21 Feb 2011.
|
OPCFW_CODE
|
What is the point of an employer adjusting the salary of an existing employee based on actual remote work location?
These two articles mention the fact that Facebook is comfortable with remote work, but might adjust salaries based on the actual location used for doing the work:
Facebook CEO Mark Zuckerberg said Thursday that most Facebook
employees can work from home wherever they want. But they should not
expect to get Silicon Valley salary levels if they relocate to
less-expensive areas.
"We'll adjust salary to your location at that point," Zuckerberg said
during a livestream, according to CNBC. "There'll be severe
ramifications for people who are not honest about this."
The pandemic also created an opportunity for some of those who can work 100% remotely where I live (Eastern Europe), but I haven't heard of anyone adjusting salaries due to working outside of a major city (metropolitan area).
I am wondering what the rationale behind this is. I am asking because this involves some costs (tracking your employees by location, possible demotivation due to salary reduction).
Question: What is the point of an employer adjusting the salary of an existing employee based on actual remote work location?
Note: This is very similar to this question, but the important difference is that here I am asking about existing employees.
@PhilipKendall While this is true, saying to an employee: "starting next month you will do the same work for me, but for less money, because you are working from another location" is going to incur a motivation cost.
Sure, but your question is "what's the point", not "is it a good idea".
@PhilipKendall - I thought this type of question would receive the "primarily opinion-based" stigma, but I noticed that quite a few have managed to survive. Anyway, Matthew Gaiser provided an interesting perspective that I agree with, and it shows that it is more than an immediate reduction in salary costs.
Do consider that, depending on where they go, the employee could take a substantial cut and still have more money that is actually theirs to keep and not going directly to a landlord and California tax rates. (re: being demotivated doing the same work for "less money")
It should also be noted that some federal jobs come with BAH (basic allowance for housing) that is paid based on where people live, so it isn't unprecedented to pay based on location.
Federal jobs have a locality adjustment. If it's good enough for the federal government, why shouldn't Facebook do it? In reality the answer is simple: Facebook can adjust your pay based on your location because they have been hiring remotely for years, and have been adjusting those job offers based on the applicant's location for years. Facebook has many types of remote jobs beyond their highly paid developers.
Let's hope they don't use their own scheme to find out location salary levels and use this as an excuse to pay far too little.
@shoover - yes. Fixed that. Thanks.
A huge portion of any market salary is the cost of living for that market. It's no different than the living wage being different in different countries. Clothes and shoes are made in specific countries because the salary costs in those countries are lower. Software and other "remotable" jobs are no different. Facebook and other companies pay a premium to individuals based on the location. Individuals in Seattle and San Francisco have significantly higher living costs and so make a higher pay rate. It's absurd to pay a Bay Area salary to someone living in Wichita.
One might attempt to argue that salary is the value of the work, but when you examine actual salary it's the value of the work weighed against the market the worker is in. The people who need to begin worrying are the people in those high pay rate areas. If these jobs are capable of being fully remote without significant degradation to quality or timeliness, why would companies continue to pay a premium for those areas when they could save millions in payroll?
As a note for the very specific question of existing employees: salaries are reviewed and adjusted to market on a regular basis. Many companies provide an annual "cost of living" adjustment at the beginning of a year. Promotions and raises in place are regular reviews of salary against the market. If the employee's market changes it's not unreasonable that the market adjustment will go down.
I think you should expand on the idea that cost of living for the market is a key factor in salary. It should be easy to show that people with similar positions in Silicon Valley and the Midwest, for example, have a big difference in pay that can only be attributed to the cost of living in those areas.
Also, regarding "value of the work", there are many aspects making a person-hour of an employee located in Silicon Valley actually more valuable than a person-hour in <rural state, not necessarily Kansas>. If that weren't true, companies would not have offices in Silicon Valley.
@TooTea Companies have offices in Silicon Valley because people really want to live there, close to ocean and mountains, vs. Kansas, which has neither.
The reality of the world is that some areas are more expensive to live than others. I'm not saying that's right or wrong, only that it is true. Without a global economy and shared global currency, it would be impossible to pay everyone the same.
If a company said to someone working in Silicon Valley, "I'm only going to pay you $X because that's all someone living in Eastern Europe gets", that person probably wouldn't take the job, and would probably soon be homeless and starving if they did. Conversely, someone working in Eastern Europe on a Silicon Valley salary would probably be able to live a life of luxury far in excess of their colleagues in Silicon Valley on the same money.
Why does the company pay the person in Silicon Valley more? Because they have to, if they want them to work for them. Why doesn't that same company pay someone in Eastern Europe a Silicon Valley salary? Because they don't have to: someone can live a comfortable life in Eastern Europe on a lot less money, so who is going to hold out for a "life of luxury" salary when someone else in the same town would take the same job on a "comfortable" salary?
Personally, I would be quite surprised by someone who moved from an expensive area with high costs of living and high salaries, to a cheap area with low costs of living and low salaries, and who expected to keep the high salary, or whose morale dropped as a result of losing the high salary. It just doesn't work that way.
(Here in the UK, this is frequently specified in employment contracts or even in law - consider for example this page which shows how teachers' pay varies depending on whether they live and work in London.)
I suspect companies are not as sold on remote work as they claim
There was this wave of cheers a couple months into the pandemic that “remote was the future” and “everything was just as efficient.” Anecdotally, those cheers are gone now as teams now have people they have never met due to turnover. There are now people I only know as chat boxes on the screen on my team and learning how to work with them is more difficult. That is a challenge for my team. On a team a friend is on, the bug count has increased as they are used to having requirements discussions orally and are now learning how to write them down.
To be sure, those challenges can probably still be overcome to allow remote work to continue. But there are challenges and management may want a way to be able to reverse work from home easily in a year or two if they decide those challenges are insurmountable. Over the prior 5 years, many companies were trending towards reducing remote work, not increasing it, even if it had been a staple of the company for decades.
That’s a lot easier to do if 80% of them haven’t left San Francisco for somewhere cheaper.
The thing to remember is that salaries are set by market forces. Specifically the competition between buyers (employers) and sellers (employees).
Compensation for a position ends up as a range between 2 numbers:
A floor, below which nobody would agree to take the job
And a ceiling, above which leaving the position empty costs the company less than the salary would.
For a Junior Developer in the Bay Area, those numbers are probably something like a floor of $80k (because the cost of living is so high), and a ceiling of $300k (Facebook makes $2.4 million of revenue per employee; even for junior developers, the cost of not having an employee to fill a position that needs filling is probably hundreds of thousands in lost revenue).
Imagine a theoretical auction. Usually, there is just one thing being sold and multiple people trying to buy it, so they end up bidding each other up to the highest price any of them is willing to pay (the Ceiling).
Now imagine a different auction where there is just one buyer and multiple people trying to sell them the same thing. The dynamic is reversed and now the buyers are going to keep offering lower and lower prices until they get to the lowest any of them is willing to sell it for (the Floor).
The same dynamics happen in the jobs market.
When there are more jobs than people who can fill them, you get intense competition among companies and they end up paying salaries near their Ceiling.
When there are more people than jobs, you get intense competition among employees and they end up settling for numbers near their Floor.
Taking Facebook again, the current salary for a junior developer in California is $110k which means salaries are only slightly higher than the cost of living.
Ergo, if an employee moves somewhere where the cost of living is cheaper, their Floor is going to be lower (because the Cost of Living is lower) and Facebook can negotiate them down to their new Floor, knowing that the employee will accept it.
If you were earning Cost of Living + $20k in California, then you'll probably accept earning Cost of Living + $20k somewhere else as well.
Facebook knows this, and negotiates accordingly.
If it were the other way around, and companies were desperate for workers, then there would be no reduction because salaries would be being driven by companies' ceilings which have nothing to do with the Cost of Living.
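To make the "Cost of Living + premium" framing concrete, here is a toy calculation in shell; all figures are invented for illustration (the $20k premium echoes the answer above, the cost-of-living numbers are assumptions):

```shell
# All figures are made up for illustration; the point is that the premium
# over cost of living stays fixed while the total offer tracks the location.
premium=20000          # what the employer pays above local cost of living
col_bay_area=90000     # assumed annual cost of living, Bay Area
col_remote=40000       # assumed annual cost of living, cheaper region
echo "Bay Area offer: $((col_bay_area + premium))"   # 110000
echo "Remote offer:   $((col_remote + premium))"     # 60000
```

Same premium in both cases; only the cost-of-living component changes.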
|
STACK_EXCHANGE
|
Microsoft Teams encapsulates a lot of different features: chat messages, video calls, document collaboration, file sharing, polls, forms, schedules, to-do lists, and more. It still manages to use only a nominal amount of memory when it's running.
Microsoft Teams will use more memory when you’re in a meeting or when it’s syncing or uploading files but the memory usage is still reasonable. If you find that Microsoft Teams is using too much memory, you should know it isn’t normal and there are ways to minimize it.
Manage Microsoft Teams memory usage
Microsoft Teams will sometimes use more memory but consistently using a significant amount of RAM isn’t normal for the app. It isn’t meant to slow the system down or tax the performance of other apps.
Here’s how you can fix excessive memory usage by Microsoft Teams and manage it long-term.
1. Quit & Restart Microsoft Teams
If you do not restart your system often, some apps may eventually start to use more and more RAM. This memory usage can be fixed by restarting the app, and as an added measure, restarting the system.
If Microsoft Teams continues to use excessive memory after being restarted, move on to the fixes further below.
2. Update Microsoft Teams
Microsoft Teams updates fairly often and some updates may result in memory usage problems. Microsoft tends to optimize the app so if you’ve noticed Microsoft Teams using too much memory, check for and install any updates that are available.
- Open Microsoft Teams.
- Click your profile picture at the top right and select Check for updates.
- Microsoft Teams will check for and download updates.
- Restart Microsoft Teams and the update will be installed.
3. Clear Microsoft Teams cache
Clearing the Microsoft Teams cache can solve quite a number of problems with the app, not least of which is high memory usage.
- Sign out of Microsoft Teams.
- Quit Microsoft Teams.
- Open File Explorer.
- Navigate to this location:
- Delete the following folders;
- Local Storage
- Restart the system.
- Sign in to Microsoft Teams.
4. Toggle GPU acceleration
Microsoft Teams can use GPU acceleration; however, it may or may not contribute to higher memory usage. If it's enabled, try disabling it and monitor the memory usage of Microsoft Teams. If it is disabled, try enabling it and check how much memory it is using.
- Open Microsoft Teams.
- Click your profile picture at the top and select Settings from the menu.
- Go to the General tab.
- Change the selected state for Disable GPU hardware acceleration.
- Restart Microsoft Teams after making the change.
5. Disable Microsoft Teams Outlook add-in
Microsoft Teams installs an add-in to Outlook if you have Outlook installed on your desktop. This add-in makes it easier to schedule meetings in Microsoft Teams from Outlook but it may also cause high memory usage. Try disabling it and check if the memory usage returns to normal.
- Open Outlook.
- Go to File>Options.
- Select the Add-ins tab.
- Look for the Microsoft Teams add-in and select it.
- Click the Go button at the very bottom next to the Manage dropdown.
- Uncheck the box next to the Microsoft Teams add-in.
- Click OK.
- Restart Outlook.
Microsoft Teams will consume memory like any other app. If you're worried that it's consuming too much memory, you should know that consuming up to 1GB during a meeting is normal. It can consume up to 500-600MB outside a meeting. Unless the app itself has slowed down, is slowing other apps down, or is hogging all the RAM, there isn't much to worry about.
|
OPCFW_CODE
|
Ubuntu Server or Client
Looking to install FOG on Ubuntu 12.
I typically use Ubuntu Server instead of the client
Is that OK? Since the server is lighter (less baggage), will there be missing components that FOG requires from the client install?
I had FOG running on Ubuntu and it was very slooow: 12-16 hours to take an image.
I made a couple of changes and everything started to fail
Attempting to do a virus load failed,
doing a memory test failed,
doing another image failed…
So I figured let me start over - I am using dnsmasq to do the PXE boot
Will attempt using the following steps (using the latest version of fog)
- Get the tar file
wget did not work, so I downloaded the current version to an FTP server
logon to fog server
ftp to ftpServer
- Extract the tar file
tar -xvzf fog_1.1.0.tar.gz
- Start the Install
The following commands are for the installfog.sh installer information. Change the relevant values for your particular system.
Type 2 and press Enter for Ubuntu installation.
Type N and press Enter for Normal installation
Supply the IP address; it SHOULD be the static IP address you set up earlier. If it is not, please go back to step 5 and try again.
Type N and press Enter to set up a router address for the DHCP server.
Type N and press Enter to set up DNS.
Type N and press Enter to leave the default Network Card the same.
Type N to not use FOG for the DHCP service (we are using dnsmasq instead).
Type N to not install Additional Languages.
Type N to donate and press Enter.
- Do the install
Type Y to continue
Next it will begin the install
Press ENTER to acknowledge the MySQL server message.
NOTE: I set up a MySQL password.
Answer N when asked if you left the MySQL password blank.
Enter the password you used for MySQL and press enter
Enter the Password again and press enter.
- Do the Database Schema Installer
Now we need to set up the web GUI for FOG. You will be presented with a URL: http://(serveripaddress)/fog/management
Copy and paste that into your browser and then click on the Install/Upgrade Now button.
Press [Enter] key when database is updated/installed.
Type N to send your install information to the Project, and it will take some time to complete.
- Script done; the install is logged to /var/log/foginstall.log
sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade
How ProxyDHCP works
- When a PXE client boots up, it sends a DHCP Discover broadcast on the network, which includes a list of information the client would like from the DHCP server, and some information identifying itself as a PXE capable device.
- A regular DHCP server responds with a DHCP Offer, which contains possible values for the network settings requested by the client: usually a possible IP address, subnet mask, router (gateway) address, DNS domain name, etc.
- Because the client identified itself as a PXEClient, the proxyDHCP server also responds with a DHCP Offer with additional information, but no IP address info; it leaves IP address assignment to the regular DHCP server. The proxyDHCP server provides the next-server name and boot file name values, which are used by the client during the upcoming TFTP transaction.
- The PXE Client responds to the DHCP Offer with a DHCP Request, where it officially requests the IP configuration information from the regular DHCP server.
- The regular DHCP server responds back with an ACK (acknowledgement), letting the client know it can use the IP configuration information it requested.
- The client now has its IP configuration information, TFTP server name, and boot file name, and it initiates a TFTP transaction to download the boot file.
Tested working with:
OS Version FOG Version
Ubuntu 10.04 LTS x64 Fog 0.29
Ubuntu 10.04 LTS x32,x64 Fog 0.32, Fog 1.0.1, Fog 1.1.0
Ubuntu 11.04 x32, x64 Fog 0.32, Fog 1.0.1, Fog 1.1.0
Ubuntu 12.04, 12.10 LTS x32, x64 Fog 0.32, Fog 1.0.1, Fog 1.1.0
Ubuntu 13.04, 13.10 x32, x64 Fog 0.32, Fog 1.0.1, Fog 1.1.0
LTSP Server, further documentation at Ubuntu LTSP/ProxyDHCP.
Edit /etc/exports to look like this:
sudo apt-get install dnsmasq
Add the following:
Sample configuration for dnsmasq to function as a proxyDHCP server,
enabling LTSP clients to boot when an external, unmodifiable DHCP
server is present.
The main dnsmasq configuration is in /etc/dnsmasq.conf;
the contents of this script are added to the main configuration.
You may modify the file to suit your needs.
Don’t function as a DNS server:
Log lots of extra information about DHCP transactions.
Dnsmasq can also function as a TFTP server. You may uninstall
tftpd-hpa if you like, and uncomment the next line:
Set the root directory for files available via TFTP.
The boot filename.
rootpath option, for NFS
Disable re-use of the DHCP servername and filename fields as extra
option space. That’s to avoid confusing some old or broken DHCP clients.
PXE menu. The first part is the text displayed to the user. The second is the timeout, in seconds.
pxe-prompt="Press F8 for boot menu", 10
The known types are x86PC, PC98, IA64_EFI, Alpha, Arc_x86,
Intel_Lean_Client, IA32_EFI, BC_EFI, Xscale_EFI and X86-64_EFI
This option is first and will be the default if there is no input from the user.
#pxe-service=X86PC, "Boot from network", pxelinux
pxe-service=X86PC, "Boot from network", undionly
A boot service type of 0 is special, and will abort the
net boot procedure and continue booting from local media.
pxe-service=X86PC, "Boot from local hard disk", 0
If an integer boot service type, rather than a basename is given, then the
PXE client will search for a suitable boot service for that type on the
network. This search may be done by multicast or broadcast, or direct to a
server if its IP address is provided.
pxe-service=x86PC, "Install windows from RIS server", 1
This range(s) is for the public interface, where dnsmasq functions
as a proxy DHCP server providing boot information but no IP leases.
Any IP in the subnet will do, so you may just put your server's NIC IP here.
Since dnsmasq is not providing true DHCP services, you do not want it
handing out IP addresses. Just put your server's IP address for the interface
that is connected to the network on which the FOG clients exist.
If this setting is incorrect, dnsmasq may not start, rendering
your proxyDHCP ineffective.
This range(s) is for the private network on 2-NIC servers,
where dnsmasq functions as a normal DHCP server, providing IP leases.
For static client IPs, and only for the private subnets,
you may put entries like this:
restart your dnsmasq service
sudo service dnsmasq restart
Make a symlink for the undionly.kpxe file so dnsmasq can find it.
sudo ln -s undionly.kpxe undionly.0
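Putting the comments above together, a minimal proxyDHCP-only dnsmasq configuration (e.g. /etc/dnsmasq.d/ltsp.conf) for FOG might look something like the sketch below. The server IP (192.168.1.10) and TFTP root are assumptions for illustration; substitute your own FOG server's address and paths:

```
# Sketch of a proxyDHCP-only dnsmasq config for FOG (values are examples)
port=0                                   # don't function as a DNS server
log-dhcp                                 # log extra DHCP transaction details
enable-tftp                              # let dnsmasq serve the boot file
tftp-root=/tftpboot                      # root directory for TFTP files
dhcp-boot=undionly.kpxe,,192.168.1.10    # boot file and next-server (your FOG server IP)
dhcp-no-override                         # don't reuse servername/filename option space
pxe-prompt="Press F8 for boot menu", 10  # menu text and timeout in seconds
pxe-service=X86PC, "Boot from network", undionly
pxe-service=X86PC, "Boot from local hard disk", 0
dhcp-range=192.168.1.10,proxy            # proxy mode: provide boot info, no IP leases
```

With dhcp-range in proxy mode, the existing DHCP server keeps handing out leases while dnsmasq only supplies the PXE boot information.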
If you run into issues with Ubuntu, I recommend Debian, like VincentJ; it's the base from which Ubuntu is built. The terminal commands are the same as in Ubuntu, and it's just an all-around stable build.
Thanks all for the replies; 12.04.4 64-bit LTS it is.
FOG does not require a desktop version of the Linux OS; in fact an LTS version is recommended, as long as you are comfortable working with a CLI and not a GUI.
I use Ubuntu 12.04 LTS server myself.
The installer uses apt to install packages.
I run on Debian and I’ve not had major problems.
|
OPCFW_CODE
|
Lesson 1: Going through directory tree and listing files and folders
First line that you will get in terminal is: alexander@pc:~$
This means that the current user is “alexander” on a machine named “pc”. The ~ means that the current directory is the home directory, which is /home/alexander. The “$” at the end marks a regular user, while the root user has “#” instead of “$”. To switch to the root user use: sudo su root; to return from the root user just type exit; and to use root permissions for a single command only, type sudo and then the command you wish to execute as root.
man – command that shows what other commands do and how to use them.
Example that shows how to use the “ls” command:
man ls
If you do not know which command to use for the task you need, you can use the command “apropos”. Example that finds commands related to the keyword “zip”:
apropos zip
ls – command that is the same as “dir” in Windows and DOS. It lists files and directories. Its most important options include the one that lists all the properties:
ls -l, and the one that lists hidden files and directories as well:
ls -a
cd command is the same as in Windows. However, instead of backslashes “\”, Linux uses slashes “/”. To go to the home directory enter the command: cd /home/alexander/, or just:
cd
To go to the root:
cd /
Go one directory up:
cd ..
Go two directories up:
cd ../..
To make a directory use the mkdir command in the form:
mkdir [path to directory]
To delete an empty directory use the command:
rmdir [path to directory]
If the directory is not empty, use this command, but with caution:
rm -rf [path to directory or file]
Lesson 2: Copying, moving and renaming files and directories, and writing into text files
cp copies directories and files. Syntax:
cp [path to what we want to copy] [path where we want it copied]. We can use wildcards if we want to copy lots of files: * or ?. To copy set of files or directories we can enter all those paths to the files we want to copy, and at the end path to directory:
cp [file1] [file2] [file3] [path where we want it copied]
mv moves files or directories; it is also used to rename them:
mv [file or dir] [path where it will be moved]. In the path where we want it moved, we can enter some other name and have it moved to another place with another name. If the path is the same and the name is different, we will rename the file.
To write some text into a file thus replacing the whole text inside:
echo [text we want to write] > [file]
If we want to append text we will use >>. Example:
echo Marko Polo >> listofexplorers.txt
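A quick way to see the difference between > (overwrite) and >> (append), run in a throwaway directory:

```shell
cd "$(mktemp -d)"                            # work in a scratch directory
echo 'Marko Polo' > listofexplorers.txt      # > replaces the file's contents
echo 'Ibn Battuta' >> listofexplorers.txt    # >> appends a new line
cat listofexplorers.txt
```

This prints both lines, with Marko Polo first.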
Lesson 3: Searching and displaying results
To see what is inside a file we can use the cat command. Example:
cat listofexplorers.txt
Searching text inside list of files can be done with grep. For example search for word netbook inside all the files in the current directory:
grep netbook *
The -i option is used when we do not want the search to be case sensitive:
grep -i netbook * will search for the words “netbook” and “Netbook”, but also “NetBook”.
-r option means recursive for almost all commands. It searches within all subdirectories.
To search for more than one word, we must enclose the search term in quotes:
grep 'netbook eee' *
If we want to write our search result into some text file:
grep netbook * > searchResult.txt
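For a self-contained try-out of the options above, the following creates two small files in a scratch directory and searches them case-insensitively (-l prints only the names of matching files):

```shell
cd "$(mktemp -d)"                          # scratch directory
echo 'my NetBook eee is tiny' > a.txt
echo 'my desktop pc is huge'  > b.txt
grep -il netbook *.txt                     # -i ignore case, -l list matching files
```

Only a.txt is printed, since -i also matches “NetBook”.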
Searching for files in Linux can be done with the locate and find commands. locate goes through an index, so it is faster, but find searches the given path directly, without using an index. The -i option applies here as well (-r makes no sense for locate, which always searches the whole index): locate -i netbook
Example: list all the files named netbook that contain the word eee:
locate -i netbook | grep eee
What we did here is we used pipe “|” to pass the results to another command.
Example: find in the home directory all the files that contain eee in their name, searching only for files, not directories (option -type f); the pattern is quoted so the shell does not expand it first:
find ~ -name '*eee*' -type f
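A self-contained version of the same search against an invented scratch tree; note that the eee_dir directory matches by name but is filtered out by -type f:

```shell
mkdir -p /tmp/find_demo/sub /tmp/find_demo/eee_dir
touch /tmp/find_demo/eee_notes.txt /tmp/find_demo/sub/my_eee.log

# Prints the two regular files only; the quotes keep the shell from
# expanding *eee* before find sees it.
find /tmp/find_demo -name '*eee*' -type f
```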
Lesson 4: Conditions and Variables
Variables are recalled with a dollar sign. The $ sign is not used when assigning a value, only when we want to recall it. Example: assign the text "netbook" to the variable "eee": eee=netbook
Example: write out variable in a sentence:
echo I love my $eee eee. This will write out: "I love my netbook eee".
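The assignment and the recall together:

```shell
# No $ and no spaces around = when assigning; $ when recalling.
eee=netbook
echo I love my $eee eee
```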
The condition of an if command is evaluated with test (usually written in square brackets). If the condition is true, the next command is executed. The available test operators are:
| Numeric | Meaning | String | Meaning | File | Meaning |
|---|---|---|---|---|---|
| -ne | not equal | != | not identical | -f | regular file exists |
| -lt | less than | variable | text in variable is defined | -r | file is readable |
| -le | less than or equal | -n variable | text length in variable is greater than zero | -s | file length is greater than zero |
| -gt | greater than | -z variable | variable is empty (length zero) | -w | file is writable |
| -ge | greater than or equal | | | -x | file is executable |
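A few of the operators from the table in action (the file name is invented for the demo):

```shell
n=5
[ "$n" -lt 10 ] && echo "n is less than 10"

touch /tmp/test_demo.txt                      # creates an empty file
[ -f /tmp/test_demo.txt ] && echo "regular file exists"
[ -s /tmp/test_demo.txt ] || echo "file length is zero"
```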
An if command must be ended with fi:
if [ 42 -gt 1 ]; then echo super; else echo 'the universe has collapsed!'; fi
Example of a for command that downloads 50 jpg files from a certain web address (stored in the variable url):
for i in $(seq 1 50); do wget ${url}${i}.jpg; done
The same example with the while command (the counter must be initialized and incremented by hand):
i=1; while [ $i -le 50 ]; do wget ${url}${i}.jpg; i=$((i+1)); done
And with until: i=1; until [ $i -gt 50 ]; do wget ${url}${i}.jpg; i=$((i+1)); done
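The counter pattern is easier to check without a real download, so this sketch swaps wget for echo and shortens the range to 5:

```shell
i=1
while [ $i -le 5 ]; do
  echo "photo${i}.jpg"   # wget ${url}${i}.jpg would go here
  i=$((i+1))
done
```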
Lesson 5: Bash
In leafpad, gedit or any other text editor we can create a .sh file. To make our script executable, type in the terminal:
chmod +x script.sh
To make it executable for all users:
chmod a+x script.sh
To give read, write and execute permissions to all users:
chmod 777 script.sh
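What the permission bits look like after chmod 777 (the scratch file name is invented for the demo):

```shell
touch /tmp/perm_demo.sh
chmod 777 /tmp/perm_demo.sh
# The mode column now reads -rwxrwxrwx: read, write and execute for
# the owner, the group, and everyone else.
ls -l /tmp/perm_demo.sh
```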
To change the owner of a file we can use the chown command.
The first line of our script must be:
#!/bin/bash, which calls out bash. There are others, like sh, zsh etc., but the most widely used is bash.
You can enter the commands from the previous lessons. The script executes the second line, then the third, then the fourth, and so on until the last line.
To execute a script, type in the terminal: ./script.sh
If the script is not executable, we must call sh: sh script.sh
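The whole lesson in one sketch: create a minimal script, make it executable, and run it both ways (path and message invented for the demo):

```shell
# Write a two-line script to a scratch location.
cat > /tmp/hello.sh <<'EOF'
#!/bin/bash
echo Hello from bash
EOF

chmod +x /tmp/hello.sh
/tmp/hello.sh      # runs directly thanks to the execute bit and the #! line
sh /tmp/hello.sh   # works even without the execute bit
```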
|
OPCFW_CODE
|
Fix issue with TextCommandBarFlyout not opening when using custom button
Description
Fixes issue with TextCommandBarFlyout closing immediately even when there are buttons that were added manually.
Motivation and Context
Fixes #2950
How Has This Been Tested?
Added new interaction test.
Screenshots (if appropriate):
/azp run
This looks good to me! Just one thing: could you verify that you don't see any UI when we call Hide() in opened rather than opening? I believe I put it in the latter location because I was concerned we might see UI show and then disappear briefly in that case, since it's called after we add the popup to the visual tree.
It looks like the added test failed on VMs earlier than RS5 because TextBox.SelectionFlyout, which is added in the XAML for ExtraCommandBarFlyoutPage, didn't exist prior to that version of the OS. Could you maybe remove those from the XAML while you're at it? It looks like we're already conditionally adding them based on whether the property exists in ExtraCommandBarFlyoutPage.xaml.cs, so I think that everything should be fine without that in the XAML.
I've tried what happens if there aren't any buttons to show, and it seems that you don't get a "glitch" where you can see the UI briefly.
Good idea, I've removed the setters from the XAML of the page.
/azp run
@llongley It seems that the test still crashed the app in RS2/3/4. Should we add a check to run the test only if we are in RS5 and above?
@chingucoding Hmm, it looks like the problem here is another type that didn't exist until RS5, this time StandardUICommand. If we can get the test to run downlevel, that'd certainly be better, since that'd ensure we have maximal test coverage - I think you should be able to just remove the usage of that type and instead set the AppBarButton's label and icon manually. If we still have an issue after that, I'm fine with just adding a check to make the test not run pre-RS5, since I also understand that having random test failures prevent you from checking in is frustrating.
Oh I see, guess I just copied the UICommand from the sample repo, I've updated it now. Hopefully there isn't anything else now.
Given that we removed the code that would crash the page on pre-RS5, should we reevaluate the tests that use the page and require >=RS5? Maybe we can increase the coverage a bit there.
/azp run
Sweet, looks like the test is passing now. Thanks!
Regarding the other tests, they rely on RS5 features and bug fixes, so I believe they'll fail if we run them on OS versions earlier than RS5.
Right, that makes sense. Great that we got the test in this PR working on all versions.
|
GITHUB_ARCHIVE
|
Add callbacks option to PathSampling
I have thought (a long) while about how to best implement the on the fly shooting range optimization and I think I finally came to a result that may be worth your time :)
The idea is to add an optional callbacks argument to the PathSampling, which is expected to be a list of PathSamplingCallback (callable objects with an interval attribute), which would then be called by the PathSampling, with the PathSampling as argument, every interval steps. This way one can call custom code every interval PathSampling steps. This enables advanced users to write their own code to select shooting points based on the information from all previous shooting attempts and thereby iteratively improving the efficiency of their simulation.
I already went ahead and wrote a preliminary implementation which defines a PathSamplingCallback abstract baseclass and added the necessary code to the PathSampling at https://github.com/hejung/openpathsampling/tree/pathsamp_callback.
See also the rewritten ShootingRangeOptimizer initially developed at Leiden and now using the callbacks option to be more flexible and lightweight at the same time (https://gitlab.e-cam2020.eu:10443/hejung/sr_shooter/tree/refactor_callback).
Please let me know if this is something that could be accepted into OPS sometime in the future.
What you're proposing here is very similar to something that's already in development. I was using the term "hook" to describe it, thinking of several places during the process of a simulation step where you might want to add behavior. In addition to use cases like yours, the same approach could be used for things like on-the-fly analysis, or to make some output optional (for example, step number becomes less meaningful when running in parallel). In fact, even storing to a file can be thought of as a "hook" that we add, which can be re-used between different types of simulations. (As you said, this whole approach makes the code both more flexible and more lightweight.)
In other words, I think it is a good idea, and I already started working on it!
The first draft of that is in #755. I think the best approach would be for you to look over that, and see if the structure of hooks implemented there will give you enough flexibility for the ShootingRangeOptimizer -- if not, let me know how to improve it!
In particular, you'll want to look at:
hooks.py
shoot_snapshots.py, especially the .run method
path_simulator.py, methods related to hooks
(Note also that this is all after the refactor of path simulators into multiple files; #754. I would suggest that any work on simulator objects should be based after that change.)
Current status: So far, I've added hooks for "shoot-from-snapshot" (i.e., committor and related) simulations. That's still being tested, and will be followed by some work parallelizing the committor simulation (handling these hooks while parallelizing takes some careful attention). That work is not yet in a PR, but you can see it in the dask_committor branch of my fork, which is still extremely experimental (the file task_schedulers.py is the main one to look at; we're planning to use dask.distributed to handle the parallelization; I would recommend YouTube videos of talks about dask by Matt Rocklin and/or Jim Crist as a great way to get introduced to how dask works).
I haven't added support for hooks to PathSampling objects, but that is coming (if you want to work on it, feel free, just add lots of tests along the way, and make sure that the default behavior is exactly like it used to be. This change will be part of the 1.x series.)
I don't know enough about the shooting range optimization algorithm to know whether the trajectories you launch can be run in parallel or not; if so, you'll need to pay attention to ensuring the task graph has the right dependencies. (That's the tricky part with this parallelization approach.)
But overall, yes, an approach similar to this is exactly what I was planning for the near future.
Great to hear that!
From a first glimpse it looks like the hooks can do everything I need (and probably more).
I will start working on adding them to the PathSampling and see how it goes.
Should I close this Issue and ask everything else that comes up in #755?
Closing this now, the hooks for PathSampling are in the main branch and also in v1.5. @dwhswenson Feel free to let me know/reopen if you think we should not close it yet.
|
GITHUB_ARCHIVE
|
Fancy Feast Gravy Lovers Cat Food At Walmart
Fancy Feast Gravy Lovers Cat Food At Walmart - Cat Meme Stock Pictures and Photos
With real ingredients and savory combinations, each gourmet wet cat food option is destined to delight your cat in a whole new way.
Fancy feast gravy lovers cat food at walmart. Fancy feast petites single serve wet cat food in gravy collection variety 24 pack. More pet owners as a result of the pandemic. In fact, many fancy feast cat food products contain fish that may carry high levels of mercury.
Gravy lovers turkey feast in roasted turkey flavor gravy: Fancy Feast's dry food is significantly less expensive, at around $0.22 per day.
Show your favorite feline friend just how important she is to you and your family by adding this collection of mouthwatering Fancy Feast Gravy Lovers wet cat food cans to your order on walmart.ca. This product is rated 4.6 out of 5 stars.
Each serving delivers 100 percent complete and balanced nutrition for kittens and adult cats so your feline friend gets the nourishment she needs through every stage of life. Offer your cat a sophisticated entree when you serve purina fancy feast gravy lovers ocean whitefish & tuna feast in sautéed seafood flavor gravy wet cat food. Fancy feast has doubled in price in certain places and climbed to a higher price—but not quite doubled in others—for many of the reasons listed above.
Containing 34% protein, 17% fat, and 3% fiber, this cat food is a nutritionally complete option for adult cats. Fancy feast assorted pate variety pack, wet cat food 24 x 85g. One common effect of mercury poisoning is kidney damage.
These products can cause the effects of mercury poisoning over a long period of time, and it can result in severe health problems. Fancy feast petites single serve wet cat food in gravy comes in just the right. Fancy feast wet cat food is crafted with delicious details, served daily.
|
OPCFW_CODE
|
If you're selling stuff, I would say "no".
What value does it add for the customer? As a consumer, if I want to buy something I've seen online, I'm going to look for other places online that I can buy that item for a cheaper price. If you can be sure that you have the lowest price....maybe link to your competitors to make a point. But that seems like more trouble than it's worth.
As a consumer, I'm also interested in finding out if I can buy the item at a brick-and-mortar location if I need it immediately. If you're linking to those retailers (and it doesn't negatively impact your business, say if you're the sole manufacturer) then again it may be worth doing.
Last but not least, as a consumer I'm interested in what other people have to say about your product and/or your services. Linking to positive reviews again may be useful, but shrewd consumers are going to be skeptical of the reviews you link to and will also seek out reviews from neutral parties.
Now, if you're selling (for example) brass widgets with inlaid mother-of-pearl doohickeys, and plan on simply linking to:
- Wikipedia articles on brass, widgets,
- A page about the town in which your company is located
- A blog post or small town newspaper article about your company, or
- A feel-good story about how somebody used your product to change their life
...then in my opinion a links page is completely worthless.
Maybe you've got something entirely different in mind...but in considering all of the above, I'm having trouble justifying the (admittedly small) effort.
If there's content you feel that your customers (and prospective customers) should have access to, put it on your site. Make your site into the be-all, end-all site for your particular product by educating your visitors (note: education != sales pitch). The additional related content won't hurt your standing with search engines, either.
EDIT: pnichols, in a comment above you link to a travel site as an example. In such a case I think there might be a little more justification to build a links page, but still not a lot....because when your visitors leave your site (conceptually, even if the page is still loaded in a browser tab) they're more likely to run across a competitor site or simply forget about your site if they see something compelling or shiny. So I think my advice above still stands: make your site the be-all, end-all place for what you're trying to accomplish. For a travel site, that would be including a weather widget (so your visitors see how nice the climate at the destination is), photos of the location (maybe you use some Creative Commons/commercial-ok photos or hook up with a photographer on Flickr) and some internal textual content about what to do and see.
Sticky, sticky, sticky. Obviously you don't want to try and force users to stay (as I've seen with some incredibly annoying sites that prompt you to bookmark them if you try to close the window, for instance), but give them plenty of reason to want to stay due to rich, meaty content.
|
OPCFW_CODE
|
make.coverpg - create a fax coverpg on stdout
make.coverpg [options] <pages> <sender-ID> <sender-NAME> <receiver-ID> <receiver-NAME> <date> <time>
make.coverpg is called from faxspool(1) to generate a cover page for the just-processed fax. It has to create a proper G3 file (e.g. via pbm2g3(1) or hp2hig3(1) or ghostscript(1)) and output that on stdout. If the program doesn't exist, or can't be executed, the fax simply won't get a coverpage (so, if you don't want a fax coverpage, do not install it...)
make.coverpg can put anything it wants on the page, but note that there are certain legal requirements in certain countries about the contents that *have* to be on the cover page, for example, the fax phone number of the sender and the recipient, the number of pages, or similar things.
make.coverpg gets the information about the fax to be sent from the command line, in the order listed above.
If the environment variable normal_res is set to something non-empty, faxspool requests that make.coverpg creates a cover page in normal resolution (98 lpi). Default is fine resolution (196 lpi).
NO make.coverpg program is installed by default, since everyone's needs differ too wildly.
Some sample coverpage programs are provided in the mgetty source tree, in the samples/ subdirectory (coverpg.pbm shows how to do it with "pbmtext|pbm2g3", coverpg.ps shows how I do it with ghostscript).
In this directory, you can also find two shell scripts (fax and faxmemo) that will take advantage of one more esoteric feature of my coverpage programs: if called with the option "-m <memo-file>", the sample programs will put a text file "<memo-file>" on the cover page (used for short notes or such). To make use of it, faxspool is called with the option '-C "make.coverpg -m <memo-file>"' (the double quotes are needed!).
A five-page fax sent from me to my second number could result in a call like this:
make.coverpg 5 "+49-89-3243328" "Gert Doering" "3244814" "myself" "Sep 15 94" "22:10:00"
- the program itself
The idea behind make.coverpg is Copyright (C) 1993 by Gert Doering, <firstname.lastname@example.org>, the implementation will most likely have yours...
27 Oct 93    greenie
|
OPCFW_CODE
|
Although Thot is not a word processing system, it contains the basic functions of a text editor. Most functions can be applied to a character string as well as to elements of the document logical structure. However, this section only considers the processing of character strings. The logical structure editing is dealt with in section 4.
The scroll bar located on the right hand side of the window provides a simple way to move through a document. If you click with the left mouse button on the arrow placed at the bottom of the scroll bar, the document moves a single line up; the arrow located at the top of the scroll bar produces the opposite effect. Hold down on these buttons to scroll the document continuously.
To move to the beginning or end of the document, click on these arrows while depressing the Control key on the keyboard.
The position of the slider in the scroll bar corresponds to the position of the part displayed in the whole document. The height of the slider compared to that of the scroll bar represents the size of the part displayed compared to the full size of the document.
To move forward or back by the height of a window, just click in the scroll bar with the left mouse button above or below the slider.
You can also click on the slider with the left mouse button and hold it down to move it vertically. When you release the button, the document is positioned in the window according to the new position of the slider. The same result can be obtained by clicking with the middle mouse button on the desired position in the scroll bar.
It is also possible to move through the document view just using keys on the keyboard:
For scrolling horizontally, you can use the scroll bar along the bottom of the window: it works according to the same principles.
There are other ways to move throughout the document, as shown in sections 3.8 or 9.2, but scroll bars constitute an easy and direct way.
All word processing operations are based on selection: first you have to select the portion of text on which you want to work before modifying it.
The most direct way of selecting text is by highlighting it with the mouse. You select one character by clicking on it with the left mouse button: it appears in a colored background. Then, by clicking with the middle mouse button on another character, you extend the selection to this other character. As soon as you click on a character with the left button, you delete the previous selection and a new one is created.
To select a character string, drag the mouse pointer across the string while holding down the left mouse button.
When one or several characters are selected within a paragraph, you can modify the selection by using the following combination of keys:
When at least one character is selected, the text typed on the keyboard is automatically inserted before the first selected character. To do this, the mouse cursor must be located in a window controlled by Thot.
To add text at the end of a paragraph, click with the left mouse button just after the last character. A small vertical bar indicates the position in which the characters typed on the keyboard will be placed. You can also use the Control E command to place the text at the end of the line (see 3.3).
While typing the text or later, you can remove the character preceding the cursor with the Back Space key on the keyboard. The preceding characters are removed by pressing this key several times.
Certain characters are not represented on the keyboard, in particular the accented letters, Greek characters or mathematical symbols. However, these characters can be entered by using the ``Palettes'' command at the top of the Thot window. This command displays a menu whose two last entries (``Latin alphabet'' and ``Greek alphabet'') display a window presenting additional characters. To enter these characters within the document, click on the desired character or use the keyboard as indicated under the character image. For example, to obtain a `é' first type a `e' while holding the Compose key and then a '. This principle also applies to capital letters: a `É' is obtained by typing a `e' with both Compose and Shift pressed, and then pressing the ' key.
To enter these compound characters, it is not necessary to display the corresponding character set.
These typing conventions apply to the text of documents as well as to the dialogue boxes through which the command parameters are entered. They can be modified by editing a specific file (see section 23.4).
There are two particular types of characters in Thot:
The mode for entering and displaying these characters is determined by the ``Spaces'' command from the ``Environment'' menu. This command displays a menu which allows you to indicate whether the spaces (the three special spaces listed above and the normal word space) represented in the text must be displayed as spaces or under the form of characters that allow you to differentiate them:
Line break: ¶
Standard word space: ·
Non breaking word space: 1
Hair space: `
When a character string is selected (see 3.2), it can be deleted in several ways:
In all such cases, the deleted text is definitively lost (note that there is no ``Undo'' command in this version). If the selected text is to be deleted and used again, the ``Cut'' command should be used (see 3.7).
When a character string (or a single character) is selected, the preceding character can be removed by pressing the Back Space key on the keyboard.
Thot manages two clipboards. The internal clipboard enables you to transfer text (and more complex elements, as shown in 4.5) between Thot documents. The X clipboard allows you to transfer text, and only text, between the documents processed by Thot and the other applications working with the same X server.
The ``Text and structure'' entry from the ``Search'' sub-menu from the ``Edit'' menu (in the menu bar, at the top of each document window) displays a form which allows the user to move throughout a document according to the content and/or the structure. The right-hand half of this form deals with the search operation carried out according to the structure and is described later (see section 4.2). The left-hand half can be used as follows:
To initiate the search operation, click on the ``Confirm'' button at the bottom of the form. If the string searched for is found, it is selected and the document is positioned so as to make this string visible; you can then search for the next instance of this string by selecting the ``Confirm'' button again. If the string is not found, the ``Not found'' message appears in the bottom right-hand corner of the form.
Strings can also be replaced using the same operation. The replacement text is entered in the ``Replace by'' input area and a replacement mode is selected from the ``Replace'' menu. The replacement modes are as follows:
Search or replace can be abandoned at any moment by clicking on the ``Done'' button.
Note: the replacement operations cannot be undone (there is no ``Undo'' command in this version).
[Section 4] [Table of contents]
|
OPCFW_CODE
|
The 2012 Vladimir Ivanovich Vernadsky Medal is awarded to Jean-Pierre Gattuso for creative and scholarly contributions to biogeosciences at the interface between microbial ecology, coral ecology, biogeochemistry and chemical oceanography.
Jean-Pierre Gattuso is a CNRS Senior Research Scientist (Directeur de recherche de 1ère classe) at the Laboratoire d’Océanographie de Villefranche-Sur-Mer (France). Jean-Pierre Gattuso started his studies in Biological Oceanography at the University of Aix-Marseille II, where he received his BSc, MSc, and PhD degrees. After a reader position at the University of Nice he spent three years as a postdoctoral research scientist at the Australian Institute for Marine Science. In 1990, he took a position as a CNRS research scientist at the Laboratory EPHE/Marine Biology, in Perpignan. Only three years later, Gattuso moved to Monaco to become program leader at the Observatoire Océanologique Européen. In 1994 he finished his habilitation in Biological Oceanography at the University of Nice. From 1998 until 2004 he led the research group ‘Diversity, Biogeochemistry and Microbial Ecology’ at the French Laboratoire d’Océanographie in Villefranche-sur-mer, and since 2005 he has worked there as a CNRS Senior Research Scientist. In the years 2004 and 2005, Gattuso was a visiting scientist at Rutgers University and the National Center for Atmospheric Research, and between 2006 and 2011 he served as a research professor at the Marine Biology Institute at Shantou University in China. Among his several scientific society affiliations and numerous community services, Gattuso is a member of the European Geosciences Union and one of the two founding editors of the EGU journal Biogeosciences.
The research performed by Gattuso is very interdisciplinary. His early work on coral functioning and physiology and coral community metabolism had a major impact on the research field. He systematically unravelled carbon flows within coral communities; i.e. he documented how and to what extent the organic and inorganic carbon cycles were linked. This work basically settled the discussion whether pristine corals are carbon dioxide sources or sinks and how this may change upon eutrophication. Gattuso has also made major contributions to document and identify the factors governing ecosystem metabolism in coastal areas. He provided one of the first global assessments of coastal ecosystem metabolism and published the first paper on benthic primary production in the global ocean, an underappreciated and understudied topic. His most important contributions to scientific progress are related to ocean acidification and its effect on organisms, communities and ecosystems. His scientific approaches combine laboratory work with field work in natural settings ranging from the Red Sea to the Arctic Ocean and Alpine lakes, and include even software development. He has trained numerous master's and PhD students and postdocs. The research of Gattuso and his co-workers has resulted in numerous publications in key journals and has led to a better understanding in these interdisciplinary scientific fields. The interdisciplinarity and internationality of his research and the combination of multiple approaches to solve the important open questions in the carbon cycle are in the spirit of Vladimir I. Vernadsky's scientific work.
Besides other community services, Gattuso was Founding President of the Biogeosciences division of the European Geosciences Union, is co-ordinator of the EU project on Ocean Acidification (EPOCA), and he serves on many international panels and boards that co-ordinate and direct global research on the marine carbon cycle. All these powerful efforts were driven by his strong dedication to foster biogeosciences. The Vladimir Ivanovich Vernadsky medal committee has nominated Gattuso for his important contributions to biogeosciences, in particular at the interface between microbial ecology, coral and coastal ecology, biogeochemistry and chemical oceanography. He has been and is an enormously active driver in the biogeoscience world, serving through scholar- and mentorship, innovation, and community outreach.
|
OPCFW_CODE
|
The SQL Server Agent plays a vital role in the day-to-day tasks of a SQL Server administrator. The agent's purpose is to run tasks easily with the scheduler engine, which allows jobs to run at a scheduled date and time. A multiple-column subquery is one that returns multiple columns to the main query.
Open the properties window of the target SQL Server instance by accessing its SQL Server service running in the configuration manager. You can leverage segregating indexes into another filegroup on a different disk or you can consider appropriate partitioning to improve its performance. CDC is termed as “Change Data Capture.” It captures the recent activity of INSERT, DELETE, and UPDATE, which are applied to the SQL Server table. It records the changes made in the SQL server table in a compatible format.
Q. Which key provides the strongest encryption in SQL Server DBA?
Write a query to fetch the number of employees working in the department 'HR'. Practicing questions like these will help you prepare for a Database Administrator role and ace your interviews. Being the Porsche of job candidates matters: you can get a decent job by being a decent candidate, but to get the best jobs, you have to be the best candidate.
- The possibility of inconsistency in the database gives rise to the need for Master Data Management.
- However, the first three types are only frequently used, where “NF” stands for normal form.
- You can answer by identifying a skill from the job description, such as SQL programming or database management, and explaining why it’s important.
- Write a query to find the third-highest salary from the EmpPosition table.
- Temporary tables are the tables that ephemeral in nature.
- CHECK constraint is applied to any column to limit the values that can be placed in it.
The execution history of the job is displayed and you may choose an execution time. It shows information such as how long that job took to execute and details about any error that occurred. The simplest answer to this is "Clustered and Non-Clustered Indexes". There are other types of indexes that can be mentioned, such as Unique, XML, Spatial and Filtered Indexes.
FAQs on SQL Server DBA Interview Questions
This prevents data changes from occurring if a query reads dirty data. This requires more code to determine that an update or delete did not occur and then to take the appropriate action to recover. Comparing all original values of every column in a row on every UPDATE or DELETE may be performed instead of verifying the TIMESTAMP column has not changed.
- Check out this MySQL DBA Certification Training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe.
- Glassdoor has 1 interview questions and reports from Sql dba interviews.
- Gather all details about dependencies of this database like linked server, Jobs, logins, reports, replication, auditing, etc.
- NOLOCK allows the query to access both committed and uncommitted data and may return dirty data.
- DML triggers are powerful objects for maintaining database integrity and consistency.
- Jacob makes a dry subject like XML worth reading and learning.
|
OPCFW_CODE
|
OpenSSH Server – problems trying to connect two laptops on home network
This question is relating to OpenSSH client and server (which became bundled with the Windows OS rather recently).
I am hoping to get some help from someone who is more knowledgeable about SSH.
I have two laptops which both have the client OpenSSH (which now comes default with Windows 10).
One also has the Windows 10 default OpenSSH Server installed.
However, I'm having trouble ssh'ing from the one with just the client into the one with the server.
While I can ping my "server" laptop from my "client" laptop, as both devices are at home using my home network, when I try to ssh to my "server" laptop's IP, I get an error:
connect to host xxxxx port 22: Connection timed out.
Since I'm just starting to use these features on these machines (and haven't made successful ssh connections with either of them before), I'm not sure whether the client or the server is having a problem. But I'm looking at the "server" laptop first.
One thing I'd like to know is how to find out what port my OpenSSH Server service is listening on... just in case it's not listening on port 22 for some reason. There is a firewall rule listed for OpenSSH Server in "Control Panel" → "System and Security" → "Windows Defender Firewall" → "Allowed Apps", but it doesn't mention the port. Also, my OpenSSH service IS running in services.msc.
I've also noted that I have a sshd_config_default file, but there was no actual sshd_config file created. The default file was all commented out.
So I wanted to ask
Where to verify if my OpenSSH Server service is listening for connections on port 22.
If anyone has a recommendation for which settings to use in an sshd_config file on the server laptop, which should only be connected to by another laptop on the same network. I assume one of the settings should specify port 22, but I'm not sure whether these commented-out entries in the default file are the best ones to use for my setup:
#Port 22
#AddressFamily any
#ListenAddress <IP_ADDRESS>
#ListenAddress ::
Any other suggestions anyone has for why the connection times out.
Good question. What I would do is use the nmap scanner to verify that the server is indeed listening on port 22. From the client, I would connect to a known good SSH server to see if that works. On the server, I would also connect from the server to itself (via localhost) to see if it is working.
Can you ssh to localhost:22 on each machine?
Also, run resmon, then go to the Network tab, Listening Ports. This will show you all the ports that the machine is listening on, along with the executable that is listening.
ETA: Thanks for your assistance. Resmon told me my server's ssh was listening on port 22. I can also log in to localhost on the server laptop. Just wanted to ask another question, in case it is related. Is it at all relevant to this connection timeout issue that I can ping my server laptop's public IPv4, but not its local IP, from the client laptop?
Welcome to Super User! You can freely edit your own posts but for your protection, this must be done under the original user account. It looks like you have created a second account, which will also interfere with your ability to comment within your thread and to accept an answer. See Merge my accounts to get your accounts merged, which will solve the problem.
Deleted my last comment, because it is something for me to bring up with support, not you :-), I'm sorry. Thank you, I took your advice and emailed support about merging the accounts!
Here are my answers:
Where to verify if my OpenSSH Server service is listening for connections on port 22.
Run this command in Windows, assuming your SSH server is running on port 22 (the default):
netstat -an | find "LISTEN" | find ":22"
Any suggestions for how to configure it.
As you said, the Port 22 parameter in the sshd_config file indicates the port the SSH daemon should open when it is started. I believe 22 is the default if you specify none. Since this is OpenSSH, check the OpenSSH documentation for more info about the parameters you can use.
Any other suggestions anyone has for why the connection times out.
If you can ping the nodes from each other, the most common reason for the timeout is a firewall in between.
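One way to tell these cases apart from the client side is a plain TCP probe: a refused connection means the host answered but nothing is listening, while a timeout usually means packets are being silently dropped by a firewall. A small Python sketch (host and port are placeholders):

```python
import socket

def probe(host, port, timeout=3.0):
    """Return 'open', 'refused', or 'timeout' for a TCP port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"          # something is listening (e.g. sshd)
    except ConnectionRefusedError:
        return "refused"       # host answered, port closed: not a firewall drop
    except socket.timeout:
        return "timeout"       # packets silently dropped: typically a firewall
    finally:
        s.close()
```

A "timeout" result from the client, combined with resmon showing sshd listening on the server, points at the firewall (or the network profile) rather than the SSH configuration.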
Okay, this time this IS a real answer. (And it is going to be a long-form version of part 3 of Manuel Florian's answer, since it WAS a firewall issue - thanks Manuel!)
As I mentioned before, boxes couldn't ping each other's private IPs, and then I found out both machines had the same PUBLIC IP.
Two things I did to help with this (I performed both tasks on both machines):
Step 1) I changed my machines to treat my home network as private.
These PowerShell instructions from NiklasErtlhm at
https://answers.microsoft.com/en-us/windows/forum/windows_10-networking/change-my-network-to-private-in-windows-10/45659a7b-89ee-42c4-910f-6ffbdd31ee0a?page=2
were quick and easy to follow:
Open Windows PowerShell in admin mode. Start -> PowerShell -> Right-click -> open as administrator
Get current profiles. Make sure you are logged on to the network you want to change.
Get-NetConnectionProfile
Change the network in your list to be private
Set-NetConnectionProfile -Name "MYWIFINETWORK" -NetworkCategory Private
Check that everything went fine
Get-NetConnectionProfile
I also changed the Firewall settings using these steps from https://kb.iu.edu/d/aopy:
Search for Windows Firewall, and click to open it.
Click Advanced Settings on the left.
From the left pane of the resulting window, click Inbound Rules.
In the right pane, find the rules titled File and Printer Sharing (Echo Request - ICMPv4-In).
Right-click each rule and choose Enable Rule.
…with the added step of changing one of the rules, which had a Profile of both Public and Private, to apply only to Private profiles, since I don't want anyone on an actual public network to ssh to me.
Right click on the rule, select Properties, Advanced Tab, under Profiles section, uncheck Public.
After making these changes, I verified that I could ping the server from the client, and I am now able to ssh from my "client" to my "server" device.
[SIDE NOTE on security: Of course, if anyone is reading this and wants to try it, do so at your own risk, since calling a network "Private" means "I trust the other devices on this network - and that no one has hacked into my network". But if you only do step 2 (without limiting the rule to private networks), then when you're on a public network, other devices can ping you. I'm thinking of undoing these changes when I don't need them, and I may also disable SSH on both machines while it's not needed.]
I'll have to wait until my profile is merged with user987957 before I can mark anything as the answer. However, thank you all for your clues that helped me research this problem!
|
STACK_EXCHANGE
|
Guided Randomness in Optimization
Il arrive souvent de ne rien obtenir parce que l ' on ne tente rien (Often, nothing is gained because nothing is attempted)
In using chance to solve a problem, there is always a risk of failure, unless an unlimited number of attempts are permitted: this is rarely possible. The basic idea involved in stochastic optimization is that this risk is necessary, for the simple reason that no other solution is available; however, it may be reduced by carefully controlling the use of random elements. This is generally true, in that a correctly-defined optimizer will produce better results than a purely random search for most test cases. However, this is not always the case, and the ability to identify these "anomalous" situations is valuable.
1.1. No better than random search
Let us take a set of permutation tests. A precise definition is given in the Appendices ( section 7.1 ). Here, note simply that based on one discrete finite function, all of the other functions can be generated by permutations of possible values at each point.
The definition space is E = {0, 1, 2, 3} and the value space is V = {1, 2, 3}. A function is therefore defined by its values at the points of E, for example f1 = (1, 3, 2, 2). One possible permutation of this function is f2 = (1, 2, 3, 2); there are 12 such functions in total, each of which is a permutation of the others, shown in the first column of Table 1.1. Each function has a minimum value of 1 (to simplify our discussion, optimization in this case will always be taken to mean minimization). Now, let us consider three iterative algorithms, and calculate the probability that they will find the minimum of each function. These algorithms are all without repetition, and conserve the best position obtained along with the associated value (the ratchet effect). A brief, informal description of these algorithms is given below. For each, the result is given as a pair (x, f(x)), where x is the proposed solution.
1.1.1. Uniform random search
This algorithm, like those which follow, includes an initialization phase, followed by an iteration phase (see section 1.1). Let us calculate the probability p(t) of finding the solution after t position draws. As there is only one solution, p(1) = 1/4, and the probability of not obtaining the solution on the first try is therefore 1 - p(1). In that case, as three unsampled positions remain, the probability of obtaining the solution on the second try is 1/3. Thus, the probability of obtaining the solution on the first or second try is p(2) = p(1) + (1 - p(1)) × 1/3 = 1/4 + (3/4)(1/3) = 1/2. Similarly, the probability of obtaining the solution on the first, second or third try is p(3) = p(2) + (1 - p(2)) × 1/2 = 1/2 + (1/2)(1/2) = 3/4. Evidently, as the algorithm is without repetition, the probability of having found the solution by the fourth try is 1, as an exhaustive search will have been carried out.
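The recurrence above, p(t) = p(t-1) + (1 - p(t-1)) / (n - t + 1) with n = 4 positions and a unique optimum, can be checked mechanically; a short sketch:

```python
from fractions import Fraction

def success_probability(n):
    """p(t) for t = 1..n: probability that a uniform random search without
    repetition over n positions has found the unique optimum within t draws."""
    p = [Fraction(1, n)]                      # p(1) = 1/n
    for t in range(2, n + 1):
        remaining = n - (t - 1)               # unsampled positions before draw t
        p.append(p[-1] + (1 - p[-1]) * Fraction(1, remaining))
    return p

p = success_probability(4)  # 1/4, 1/2, 3/4, 1 -- i.e. p(t) = t/4
```

The closed form p(t) = t/n is what makes uniform random search without repetition a natural baseline for the comparisons that follow.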
Algorithm 1.1. Random search without repetition
- Draw a position x at random, following a uniform distribution (each position has the same selection probability).
As long as the STOP criterion (for example a maximum number of iterations) has not been reached:
- draw a position x at random from the unsampled population;
- if f(x) is better than the best value found so far, replace the stored best position by x.

1.1.2. Sequential search
This method consists of drawing positions one by one, not at random (without repetition), but in a predefined order, for example position 1, position 2, etc. To calculate p
|
OPCFW_CODE
|
'''
Created on Jan 9, 2014
@author: akittredge
'''
import unittest
import pandas as pd
import datetime
import vector_cache.mongo_driver as vc_mongo_driver
import pymongo
from pandas.util.testing import assert_frame_equal
import operator
import abc
TEST_HOST, TEST_PORT = 'localhost', 27017
class DataStoreDriverTest(object):
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def build_data_store(self):
return self.data_store, self.collection
data_points = [
{'identifier' : 'a',
'date' : datetime.datetime(2012, 12, 1),
'price' : 1},
{'identifier' : 'a',
'date' : datetime.datetime(2012, 12, 2),
'price' : 2},
{'identifier' : 'a',
'date' : datetime.datetime(2012, 12, 3),
'price' : 3},
{'identifier' : 'b',
'date' : datetime.datetime(2012, 12, 1),
'price' : 100},
{'identifier' : 'b',
'date' : datetime.datetime(2012, 12, 4),
'price' : 110},
]
@abc.abstractmethod
def _populate_collection(self):
'''Put data_points into the data_store.'''
self.test_df = pd.DataFrame(self.data_points).pivot(index='date',
columns='identifier',
values='price')
self.index = self.test_df.index
def setUp(self):
self.data_store, self.collection = self.build_data_store()
self._populate_collection()
def test_get(self):
'''Get previously cached values.'''
empty_df = pd.DataFrame(columns=['a', 'b'], index=self.index)
empty_df.columns.name = 'identifier'
df_from_datastore = self.data_store.get(metric='price',
df=empty_df)
assert_frame_equal(df_from_datastore, self.test_df, check_dtype=False)
def test_set(self):
self.data_store.set(metric='price', df=self.test_df)
class MongoDriverTestCase(DataStoreDriverTest, unittest.TestCase):
def _populate_collection(self):
super(MongoDriverTestCase, self)._populate_collection()
for data_point in self.data_points:
self.collection.insert(data_point)
def test_read_frame(self):
'''read a Dataframe from mongo.'''
df = vc_mongo_driver.read_frame(qry={},
index='date',
values='price',
collection=self.collection,
)
assert_frame_equal(df, self.test_df)
def test_no_cached_values(self):
'''when no cache values are found an empty DataFrame should be returned.'''
df = vc_mongo_driver.read_frame(qry={'this is not in the cache' : True},
index='date',
values='price',
collection=self.collection,
)
self.assertTrue(df.empty)
def test_write_frame(self):
'''write a DataFrame to mongo.'''
metric = 'metric'
collection = self.collection
collection.drop()
vc_mongo_driver.write_frame(metric=metric,
df=self.test_df,
collection=collection)
records = list(collection.find())
self.assertEqual(len(records), operator.mul(*self.test_df.shape))
@classmethod
def build_data_store(cls, host=TEST_HOST, port=TEST_PORT, collection_name='test'):
client = pymongo.MongoClient(host, port)
db = client.test
collection = db[collection_name]
collection.drop()
data_store = vc_mongo_driver.MongoDataStore(collection=collection)
return data_store, collection
from vector_cache.sql_driver import SQLDataStore, CachedValue
class SQLiteDriverTestCase(DataStoreDriverTest, unittest.TestCase):
metric = 'price'
def build_data_store(self):
self.data_store, self.session = memory_sql_store()
return self.data_store, self.session
def _populate_collection(self):
super(SQLiteDriverTestCase, self)._populate_collection()
for data_point in self.data_points:
model = CachedValue(metric=self.metric,
date=data_point['date'],
identifier=data_point['identifier'],
value=data_point['price'])
self.session.add(model)
self.session.commit()
def memory_sql_store():
data_store = SQLDataStore('sqlite:///:memory:')
return data_store, data_store._Session()
|
STACK_EDU
|
oss_midiloop(7) OSS Devices oss_midiloop(7)

NAME
oss_midiloop - loopback MIDI driver
The loopback MIDI driver makes it possible to create special-purpose virtual MIDI devices based on userland server processes.
MIDI loopback devices are like named pipes or pseudo terminals. They are grouped in client and server device pairs. The server side device
must be open before the client side device can be opened.
SERVER SIDE DEVICE
The server side device is used by some special application (such as a software-based MIDI synthesizer) to receive MIDI events from the
applications that want to play MIDI.
CLIENT SIDE DEVICE
Client applications such as MIDI players open the client side device when they need to play some MIDI stream (file). The client side device
behaves like any "ordinary" MIDI device. However it cannot be opened when there is no program connected to the server side.
MIDI loopback devices differ from "normal" MIDI devices in that an application is needed at both ends of the loop. When the other end of
the loop is closed, the loop device will return a "Connection reset by peer" (ECONNRESET) error. Applications designed to be used as
loopback based server applications can/should use this error (returned by read or write) as an end-of-stream indication.
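By way of illustration (not part of the man page itself), a server-side read loop that treats ECONNRESET as end-of-stream might look like the following Python sketch; `read_chunk` stands in for whatever reads from the loopback device:

```python
import errno

def drain_midi(read_chunk):
    """Read MIDI bytes until the peer disappears.

    `read_chunk` is any callable returning bytes (e.g. a wrapper around
    os.read on the loopback device). ECONNRESET from the driver is treated
    as a normal end-of-stream rather than a fatal error.
    """
    received = bytearray()
    while True:
        try:
            data = read_chunk()
        except OSError as e:
            if e.errno == errno.ECONNRESET:
                break          # peer closed: normal end of the MIDI stream
            raise              # anything else is a real error
        received.extend(data)
    return bytes(received)
```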
Specifies how many loopback client/server MIDI device pairs to create.
/etc/oss4/conf/oss_midiloop.conf Device configuration file
16 December 2012 oss_midiloop(7)
Check Out this Related Man Page
MIDIPLAY(1) BSD General Commands Manual MIDIPLAY(1)

NAME
midiplay -- play MIDI and RMID files
midiplay [-d devno] [-f file] [-l] [-m] [-p pgm] [-q] [-t tempo] [-v] [-x] [file ...]
The midiplay command plays MIDI and RMID files using the sequencer device. If no file name is given it will play from standard input, other-
wise it will play the named files.
RMID files are Standard MIDI Files embedded in a RIFF container and can usually be found with the 'rmi' extension. They contain some addi-
tional information in other chunks which are not parsed by midiplay yet.
The program accepts the following options:
-d devno specifies the number of the MIDI device used for output (as listed by the -l flag). There is no way at present to have midiplay
map playback to more than one device. The default device is given by the environment variable MIDIUNIT.
-f file specifies the name of the sequencer device.
-l list the possible devices without playing anything.
-m show MIDI file meta events (copyright, lyrics, etc).
-p pgm force all channels to play with the single specified program (or instrument patch, range 1-128). Program change events in the
file will be suppressed. There is no way at present to have midiplay selectively map channels or instruments.
-q specifies that the MIDI file should not be played, just parsed.
-t tempo specifies an adjustment (in percent) to the tempi recorded in the file. The default of 100 plays as specified in the file, 50
halves every tempo, and so on.
-v be verbose. If the flag is repeated the verbosity increases.
-x play a small sample sound instead of a file.
A file containing no tempo indication will be played as if it specified 150 beats per minute. You have been warned.
MIDIUNIT the default number of the MIDI device used for output. The default is 0.
/dev/music MIDI sequencer device
SEE ALSO
midi(4)

HISTORY
The midiplay command first appeared in NetBSD 1.4.
It may take a long while before playing stops when midiplay is interrupted, as the data already buffered in the sequencer will contain timing information.
BSD January 16, 2010 BSD
|
OPCFW_CODE
|
If you create multiple images via a packer/terraform pipeline (ami,openstack, vmware) is it possible to scan the template file for potential os level vulns associated with the images that will be built? The alternative now is actually spinning up the images and scanning them dynamically with something like Nessus. Is there any tool that can shift this process to the left and scan the code files?
Hi @skalamaras ! Shifting left is an admirable goal, but there are as always many ways to skin that cat.
Your goal is to decide, based on a declaration of state, whether that state exposes potential vulnerabilities or misconfigurations. What you really want is not to deploy a vulnerable state, and you don't want to find out after the fact, hence the shift left. However, let's remember that it's the state we are interested in, not the declaration.
The state is a product of two things:
- Upstream state (i.e., the source image)
- The changes applied by provisioners
Can we, just by looking at a Packer template, decide whether the upstream state is vulnerable? I don't think anyone keeps a 100% up-to-date database of vulnerable images; this would be impossible, since images can be modified in arbitrary ways. The best you can do is to check the state for known vulnerabilities by comparing it to relevant standards: for the installed packages, check them against the vulnerability database, compare SHAs, etc.; for the filesystem, compare with a hardening compliance profile.
In short: No, you cannot tell from the declaration of the upstream image whether it is vulnerable. You have to look inside the image.
If the provisioner is declarative itself, such as the Ansible provisioner, then you can look at the statement and decide whether vulnerabilities may be introduced. This is inspecting a component though, not the Packer template, and it should indeed be done upstream, or even better, completely out of band. I.e., you have a whole other pipeline which assures the quality of the components (Ansible playbooks, roles) rather than the product (the image made by Packer).
Provisioners change state
Remember that the provisioners change the state. So, if you want to shift left by looking at the upstream image and deciding whether to break the build, how can you be sure that your subsequent application of the provisioners won’t fix the problem? You now have a paradox:
- If the image is vulnerable, and the provisioners would fix it, but you shift left and fail on image inspection, then:
- the provisioners never get executed;
- you have wasted effort on writing something that never gets used.
However if you don’t make assertions on the upstream image, and the provisioner declares non vulnerable state then you can inspect the actual product (your final image that Packer has built) before you commit the image.
Use compliance as code as provisioners
So, we come to the conclusion: Yes, shift left, but in the packer template itself.
You cannot inspect only the Packer template and decide whether the final image will be defective, based on the arguments I've outlined. For one thing, there is nothing to compare upstream images to; for another, you will end up wasting engineering time.
Pipeline execution time is much less expensive than engineering time, so I typically add these compliance statements as provisioners in the Packer template itself. Typically I add one or all of:
- A Trivy scan of the final image before it is committed, via a shell provisioner
- An Inspec profile via the Inspec provisioner (although this is apparently not currently maintained)
- A TestInfra compliance profile via the shell provisioner
There are plenty of options !
The point however is you cannot tell just from looking at the declaration, you must inspect the final product, because provisioners change state.
Your pipeline must ensure the quality of both the product as well as the components.
Thanks for the suggestions. Can you provide example code for Trivy and how it can be used via a shell provisioner?
This is the simplest example I have at hand:
It runs the scanner on the image before committing it. In this case, it runs inside the container or image that is being built, and as such might leave a trace if not cleaned up properly – TrivyDB files, etc.
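A sketch of what such a provisioner can look like. The source name and the exact Trivy invocation are illustrative, not a definitive recipe, and Trivy is assumed to have been installed in the image by an earlier step:

```hcl
# Hypothetical sketch: scan the image's own root filesystem from inside the
# build, and fail the build on HIGH/CRITICAL findings via the exit code.
build {
  sources = ["source.docker.base"]   # placeholder source name

  provisioner "shell" {
    inline = [
      "trivy rootfs --exit-code 1 --severity HIGH,CRITICAL /",
      # clean up the Trivy cache so it does not ship inside the image
      "rm -rf /root/.cache/trivy",
    ]
  }
}
```

Because the shell provisioner runs inside the image being built, a non-zero exit code from Trivy aborts the build before the image is ever committed.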
You can also decide to fail the build later, in post-processing, using a shell-local post-processor.
In this case, it runs outside the built image, on the machine that is actually running packer.
|
OPCFW_CODE
|
Can I filter the task to see only the recurring tasks that I've made? So I'll have a quick glance of all future task and edit them if needed.
Our recurring tasks only appear for the next/current occurrence. So filtering for this is not possible. You can filter for any amount of days and see the upcoming tasks though. (7 days) or (20 days) etc.
Can I suggest to include a function to filter all recurring tasks?
I think this would be helpful to forecast tasks ahead. Thanks.
You're welcome to vote for this request in our Votebox: http://todoist.com/Vote/showProposal/1126/
Any Update on filter by recurring ? or Any Work Around ?
Unfortunately not at this time; we will consider adding such an option in the future, though.
Definitely +1 on the ability to filter based on recurring tasks
Thanks guys amazing product!
In the meantime I've identified recurring tasks manually and labeled them as 'recurring'... just need to remember to label new tasks going forward.
+1 on filtering recurring tasks
Even I am in need of the same. It would be great, if can be added.
Thanks & Regards,
+1 to filtering recurring tasks. thanks
+1 to filtering OUT recurring tasks.
Is there another way to see only the non-recurring tasks of the day?
Thanks and A Happy and Productive New Year to all !
+1 to filter by recurring. I also have created a label for now. I would also like the ability to filter by "not", as in "not @recurring". Thanks!
+1 to filter all recurring tasks. With lots of recurring tasks, it's not easy to see when would be best to insert a new task that you need to do routinely.
Meanwhile I adopted the same solution as ppeugh. Labeled recurring tasks with @rec and made a filter: today & !@rec which means all today's tasks without recurring ones.
Yes, what Florian suggested is exactly what I need. It's a shame I have to remember to label them, as they are already "labeled" internally by the system.
+1 on the feature.
Another +1 on filtering for recurring tasks
Yet another +1 the ability to filter for recurring tasks, and also to
filter OUT recurring tasks from other filters.
Somewhat related, it would also be great to be able to create a filter for
"tasks with notifications."
Side note: I'm submitting this reply via e-mail because, when I try to
reply via the web form (in Chrome version 43.0.2357.130 for Windows 7), I
get this error message:
CSRF verification failed. Request aborted.
More information is available with DEBUG=True.
I haven't gotten that error previously when replying to other threads from
the same computer/browser setup, FWIW.
Regarding the error message, please go to Chrome Settings -> Advanced Settings -> Content Settings -> All cookies and site data, search for "Todoist" and delete all found entries.
Also, please make sure your browser is accepting third-party cookies.
+1 as well.
I would like to be able to get an overview of my tasks that are not just routine chores
|
OPCFW_CODE
|
Glue - the new mapping framework
I've spent this summer implementing a new mapping framework for the .Net platform: Glue.
Glue is a general-purpose, bidirectional, automatic mapping framework for the .Net platform, with strong verification and testing tools.
I've seen quite a lot of less-than-optimal handling of mapping issues in quite a few projects over the years. Again and again, mapping seems to be just a task we have to do, and a bit too small to automate. It has become clearer and clearer to me that mapping is both an important task in many projects and an invariably repetitive and boring one. In addition, far too often subtle, annoying bugs tend to sneak into mapping code. This motivated me to create Glue. This is what I need from a mapping framework:
- a way to automate mapping, mostly because it is boring, and that makes it error prone,
- a way to automatically test the mapping, I've found that manual written test code for mapping is seldom done in a good way,
- a way to prevent future changes from making my mappings obsolete. When people make changes, they rarely focus on mapping (and honestly, they should not either). They should be told when they have to update the mapping.
Now, you could argue that there is already a mapping framework available on the .Net platform. And when I started this work, I actually began with that framework, thinking I could just make a few extensions to fulfil my needs. I spent some time in its source code trying to implement them, but sadly I realized that our needs differed too much. So I created Glue, and these were the driving forces:
Glue is a general purpose mapper. We realize that in the real world there are a lot of different solutions, and not all of them follow "the one true pattern". In fact, mapping is often used to map to and from subsystems that are far from well designed: subsystems we try to hide in a layer because we do not want them to leak into the other layers. Thus, we believe a mapper must support quite a few different scenarios. The goal is to promote good coding practices, but not to ignore the fact that there is a lot of legacy code out there that forces us to work a bit differently at times.
I would say that in many mapping scenarios we need to map in both directions. First we get data from an object in one layer and map it to an object in the layer above. When that layer is done manipulating the object, we often want to map it back down to the layer it came from. I have noticed that not all mapping frameworks see it this way, and this was one of the reasons why I started working on Glue instead of trying to extend existing frameworks.
Strong verification and testing tools
I want to be absolutely certain that my mapping works. I also want to make sure that it is very hard to break it later. Manually writing tests for mapping is even more tedious than writing manual mapping, so Glue automates this. Future changes have a sad reputation of breaking mapping code; Glue helps you detect this. Tools supporting the mapping are very important to me, and more tools will be available in future releases.
Mapping should be simple. Glue tries to simplify both the mapping process and the verification and testing. You should not have to state the obvious, and Glue supports relating properties automatically based on their names.
Although Glue enables you to automate much of the mapping process, it also gives you the opportunity to be explicit about the mapping. So if you want to describe every relation in detail, you can. When it comes to understandable code this can be a good thing: taking implicit mappings that are difficult to understand and stating them explicitly can sometimes make things much easier to follow.
Current version and the future
The current version is 0.2.0 Alpha. It is still in Alpha because, if we find good ways to improve the API, we do not want to have locked ourselves to it just yet, before we get more feedback. I'm guessing the next release will be Beta.
I am currently using Glue on the project I am working on, and in about a month it will reach production. This more or less guarantees that it will continue to evolve as our needs expand, and that bugs will be found and fixed sooner.
Looking to the future, we have some exciting ideas for tools to help with the mapping, and we are also working hard to make Glue as easy as possible to use, so expect simplifications. In addition, we want Glue to serve a broad set of needs, so feedback is highly appreciated; if you explain your special needs, we might just implement them.
- Tore Vestues
(This post has been migrated from my old blog, so sadly the old comments are gone.)
|
OPCFW_CODE
|
- What kind of math is used in machine learning?
- Why is linear algebra important for data science?
- Is linear algebra a supervised machine learning algorithm?
- Is linear algebra needed for coding?
- Should I learn linear algebra for programming?
- How is linear algebra used in quantum computing?
- How is linear algebra used in real life?
- Is linear algebra hard?
- What math do data scientists use?
- Is linear algebra important for software engineers?
- How is linear algebra used in engineering?
- What math is used in quantum mechanics?
- What maths do you need for quantum computing?
- What is the use of linear vector space in quantum mechanics?
- Does AI require coding?
- Is trigonometry used in machine learning?
- Is linear algebra used in computer science?
- Which fields use linear algebra?
- Is linear algebra the same as linear programming?
- How hard is machine learning?
- Is machine learning just maths?
- Is calculus 3 harder than linear algebra?
Similarly, How linear algebra helps in machine learning?
Linear algebra is a branch of mathematics that studies vectors, matrices, and linear transformations. From the notations used to describe the operation of algorithms through the implementation of algorithms in code, it provides a critical basis for the discipline of machine learning.
Also, it is asked, Does machine learning require linear algebra?
You don’t need to know linear algebra to get started with machine learning, although you may want to study it later. In fact, if there is one subject of mathematics that I would recommend improving first, it is linear algebra.
Secondly, Why is linear algebra needed for AI?
Machine learning and deep learning are built on the foundation of linear algebra. Knowing these concepts at the vector and matrix levels broadens and enhances your understanding of a given machine learning topic. For example, a computation that would otherwise require a for-loop with 100 iterations can often be expressed as a single vector or matrix operation.
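The remark about a 100-iteration for-loop is really about vectorization: linear algebra lets a single matrix or vector operation replace explicit element-by-element loops. A small NumPy illustration:

```python
import numpy as np

x = np.arange(100.0)                 # 100 feature values
w = np.linspace(0.0, 1.0, 100)       # 100 weights

# Element-by-element: a for-loop with 100 iterations
total = 0.0
for i in range(100):
    total += w[i] * x[i]

# The same computation as a single linear-algebra operation (dot product)
vectorized = w @ x
```

Beyond brevity, the vectorized form is what optimized BLAS routines and GPUs actually accelerate, which is why the linear-algebra view matters in practice.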
Also, What is linear algebra used for in programming?
Many fields of computer science, including graphics, image processing, cryptography, machine learning, computer vision, optimization, graph algorithms, quantum computing, computational biology, information retrieval, and online search, rely on linear algebra ideas.
People also ask, Is linear algebra more important than calculus for machine learning?
Linear algebra is as important to machine learning (ML) as a firm foundation is to a building. It is also a prerequisite for areas of mathematics such as statistics and calculus, which will further aid your understanding of ML.
Related Questions and Answers
What kind of math is used in machine learning?
Statistics, Linear Algebra, Probability, and Calculus are the four key ideas that drive machine learning. While statistical ideas lie at the heart of all models, calculus aids in the learning and optimization of such models.
Why is linear algebra important for data science?
Another reason why linear algebra is so significant to data scientists is that it may be used to reduce dimensionality using Principle Component Analysis (PCA). Dimensionality reduction is a crucial step in the preparation of data sets that will be utilized for machine learning.
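To make the PCA connection concrete, here is a hedged pure-Python sketch (the data points are invented): centre the data, build the 2x2 covariance matrix, and extract its leading eigenvector, the first principal component, by power iteration.

```python
import math

# Sketch of PCA's core linear algebra: the first principal component
# is the leading eigenvector of the data's covariance matrix.

def principal_component(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centred = [(x - mx, y - my) for x, y in points]
    # Entries of the 2x2 covariance matrix.
    cxx = sum(x * x for x, _ in centred) / n
    cyy = sum(y * y for _, y in centred) / n
    cxy = sum(x * y for x, y in centred) / n
    # Power iteration: repeatedly apply the matrix and renormalise.
    vx, vy = 1.0, 0.0
    for _ in range(100):
        wx = cxx * vx + cxy * vy
        wy = cxy * vx + cyy * vy
        norm = math.hypot(wx, wy)
        vx, vy = wx / norm, wy / norm
    return vx, vy

# Points lying close to the line y = x: the first component
# should point roughly along (1, 1) / sqrt(2).
pts = [(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.8)]
vx, vy = principal_component(pts)
print(round(vx, 2), round(vy, 2))
```

Projecting each point onto that eigenvector reduces the data from two dimensions to one while keeping most of its variance, which is the dimensionality reduction the answer above describes.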
Is linear algebra a supervised machine learning algorithm?
You can utilize your linear algebra knowledge to improve both supervised and unsupervised machine learning methods. With the use of linear algebra, you may develop supervised learning algorithms such as logistic regression, linear regression, decision trees, and support vector machines (SVM).
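For example, fitting a linear regression reduces to solving a small linear system, the normal equations. A minimal sketch (invented data, plain Python) for the one-feature case y = a*x + b:

```python
# Least-squares line fit by solving the 2x2 normal equations directly
# (Cramer's rule), the linear-algebra core of linear regression.

def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Normal equations:  [sxx sx] [a]   [sxy]
    #                    [sx  n ] [b] = [sy ]
    det = sxx * n - sx * sx
    a = (sxy * n - sx * sy) / det
    b = (sxx * sy - sx * sxy) / det
    return a, b

xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]          # exactly y = 2x + 1
print(fit_line(xs, ys))    # (2.0, 1.0)
```

With more features the same idea becomes a matrix equation, which is why libraries solve regression with linear-algebra routines rather than loops.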
Is linear algebra needed for coding?
Many programmers will not need to be familiar with linear algebra. Even while some applications will use linear algebra, a programmer may not need to know it since just a few of the lower-level functions/methods will really use it.
Should I learn linear algebra for programming?
2. It is required for statistical programming. If we wish to do statistical programming, we need to know linear algebra. It also underlies topics in probability, operations research, mathematical statistics, and stochastics, particularly regression analysis.
How is linear algebra used in quantum computing?
Quantum computing is written in linear algebra. Although it is not required to construct or write quantum programs, it is often used to describe qubit states, quantum operations, and to forecast what a quantum computer will do in response to a series of instructions.
How is linear algebra used in real life?
Linear algebra is used in the real world to construct ranking algorithms in search engines like Google. Used to analyze digital signals and encode or decode them, which may be audio or video signals. In the discipline of linear programming, it is used to optimize.
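The ranking idea can be illustrated with a toy version of PageRank, which amounts to finding the leading eigenvector of a link matrix by power iteration. A hedged sketch on an invented three-page web:

```python
# Toy PageRank: each page's score is redistributed along its outgoing
# links; iterating converges to the leading eigenvector of the link
# matrix. Assumes every page has at least one outgoing link.

def pagerank(links, damping=0.85, iters=100):
    pages = sorted(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {}
        for p in pages:
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new
    return rank

# A -> B, A -> C, B -> C, C -> A
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(links)
print({p: round(r, 3) for p, r in sorted(ranks.items())})
```

Page C receives links from both A and B, so it ends up with the highest score; real search engines apply the same linear-algebra idea at vastly larger scale.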
Is linear algebra hard?
Linear algebra is a difficult subject. Most STEM majors will take linear algebra in university, and it is one of the most challenging subjects they will take. Linear algebra is a difficult class to master since it is a highly complex subject that requires strong analytical and logical abilities.
What math do data scientists use?
In the field of data science, there are three significant players. Calculus, linear algebra, and statistics are the three disciplines that regularly show up when you Google the math prerequisites for data science. The good news is that statistics is the only kind of math you’ll need to be proficient in for most data science professions.
Is linear algebra important for software engineers?
Linear algebra would be beneficial in the creation of any game, as well as in image processing. There have also been times when knowing that XOR turns the set of binary strings of a given length into a vector space would have been very useful to a software developer.
How is linear algebra used in engineering?
Linear algebra is used by civil engineers to build and assess load-bearing structures like bridges. Linear algebra is used by mechanical engineers to build and evaluate suspension systems, and it is also used by electrical engineers to develop and analyze electrical circuits.
What math is used in quantum mechanics?
In order to learn basic quantum mechanics, you need to be familiar with the following mathematical concepts: complex numbers; ordinary and partial differential equations; and integral calculus I-III.
What maths do you need for quantum computing?
Linear algebra is the fundamental mathematics that permits quantum computing to work its magic. Everything in quantum computing, from the representation of qubits and gates to the functioning of circuits, can be expressed in terms of linear algebra.
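A minimal illustration of this (invented example, plain Python): a qubit state is a two-component complex vector, a gate is a 2x2 matrix, and applying the gate is just matrix-vector multiplication.

```python
import math

# A qubit as a complex 2-vector and the Hadamard gate as a 2x2 matrix.

def apply_gate(gate, state):
    """Matrix-vector product: apply a 2x2 gate to a 2-component state."""
    return [sum(g * s for g, s in zip(row, state)) for row in gate]

ket0 = [1 + 0j, 0 + 0j]                 # |0>
h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]                   # Hadamard gate

plus = apply_gate(H, ket0)              # (|0> + |1>) / sqrt(2)
probs = [abs(a) ** 2 for a in plus]     # Born rule: |amplitude|^2
print([round(p, 3) for p in probs])     # [0.5, 0.5]
```

Applying H a second time returns the state to |0>, exactly as the matrix algebra predicts (H squared is the identity).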
What is the use of linear vector space in quantum mechanics?
We’ve noticed that in quantum physics, the vast majority of operators are linear. This is advantageous because it enables us to describe quantum mechanical operators and wavefunctions as matrices and vectors in some linear vector space.
Does AI require coding?
Programming is the first ability needed to become an AI engineer. Learning computer languages such as Python, R, Java, and C++ to construct and implement models is essential for being well-versed in AI.
Is trigonometry used in machine learning?
A solid calculus curriculum, one that includes analytic geometry, will likely cover all of the trigonometry you'll ever need in ML. Strictly speaking, you don't even need that much: you don't need calculus or linear algebra to get started with ML, although they certainly help.
Is linear algebra used in computer science?
Linear algebra is employed across computer science, including cybersecurity, clustering algorithms, and optimization algorithms, and it is essentially the only kind of mathematics required in quantum computing, but that's a tale for another day.
Which fields use linear algebra?
When used in conjunction with calculus, linear algebra makes it easier to solve linear systems of differential equations. Analytic geometry, engineering, physics, natural sciences, computer science, computer animation, and the social sciences all employ linear algebra techniques (particularly in economics).
Is linear algebra the same as linear programming?
If linear algebra arose from the study of how to solve systems of linear equations, linear programming arose from the study of how to solve systems of linear inequalities, enabling one to optimize linear functions according to restrictions given as inequalities.
How hard is machine learning?
Algorithms that are tough to understand: Machine learning algorithms may be challenging to grasp, particularly for novices. Before you can implement an algorithm, you must first master its many components.
Is machine learning just maths?
Machine learning models are, like all mathematical models, built from mathematics. To predict anything from labeled (supervised) or unlabeled (unsupervised) data, most machine learning models use a mix of linear algebra, calculus, probability theory, and other mathematical principles.
Is calculus 3 harder than linear algebra?
Multivariable calculus (Calculus 3) is generally regarded as the most difficult mathematics course available, and only a small number of students, in high school or elsewhere, complete it. Linear algebra, for its part, is a subset of abstract algebra that works in vector spaces.
This Video Should Help:
Linear algebra is a mathematical tool that is used in machine learning. It has been around for centuries and its applications are vast. Khan Academy has an extensive overview of linear algebra. Reference: linear algebra for machine learning khan academy.
- is linear algebra important for machine learning
- application of linear algebra in artificial intelligence
- linear algebra for machine learning python
- linear algebra for machine learning course
- linear algebra for machine learning coursera
|
OPCFW_CODE
|
import { OrderedWebRequest } from './ordered-webrequest'
/**
* Installs a web request filter to prevent cross domain leaks of auth headers
*
* GitHub Desktop uses the fetch[1] web API for all of our API requests. When fetch
* is used in a browser and it encounters an http redirect to another origin
* domain, CORS policies will apply to prevent submission of credentials[2].
*
* In our case however there's no concept of same-origin (and even if there were
* it'd be problematic because we'd be making cross-origin requests constantly to
* GitHub.com and GHE instances) so the `credentials: same-origin` setting won't
* help us.
*
* This is normally not a problem until http redirects get involved. When making
* an authenticated request to an API endpoint which in turn issues a redirect
* to another domain fetch will happily pass along our token to the second
* domain and there's no way for us to prevent that from happening[3] using
* the vanilla fetch API.
*
* That's the reason why this filter exists. It will look at all initiated
* requests and store their origin along with their request ID. The request id
* will be the same for any subsequent redirect requests but the urls will be
* changing. Upon each request we will check to see if we've seen the request
* id before and if so if the origin matches. If the origin doesn't match we'll
* strip some potentially dangerous headers from the redirect request.
*
* 1. https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
* 2. https://fetch.spec.whatwg.org/#http-network-or-cache-fetch
* 3. https://github.com/whatwg/fetch/issues/763
*
* @param orderedWebRequest
*/
export function installSameOriginFilter(orderedWebRequest: OrderedWebRequest) {
// A map between the request ID and the _initial_ request origin
const requestOrigin = new Map<number, string>()
const safeProtocols = new Set(['devtools:', 'file:', 'chrome-extension:'])
const unsafeHeaders = new Set(['authentication', 'authorization', 'cookie'])
orderedWebRequest.onBeforeRequest.addEventListener(async details => {
const { protocol, origin } = new URL(details.url)
// This is called once for the initial request and then once for each
// "subrequest" thereafter, i.e. a request to https://foo/bar which gets
// redirected to https://foo/baz will trigger this twice and we only
// care about capturing the initial request origin
if (!safeProtocols.has(protocol) && !requestOrigin.has(details.id)) {
requestOrigin.set(details.id, origin)
}
return {}
})
orderedWebRequest.onBeforeSendHeaders.addEventListener(async details => {
const initialOrigin = requestOrigin.get(details.id)
const { origin } = new URL(details.url)
if (initialOrigin === undefined || initialOrigin === origin) {
return { requestHeaders: details.requestHeaders }
}
const sanitizedHeaders: Record<string, string> = {}
for (const [k, v] of Object.entries(details.requestHeaders)) {
if (!unsafeHeaders.has(k.toLowerCase())) {
sanitizedHeaders[k] = v
}
}
log.debug(`Sanitizing cross-origin redirect to ${origin}`)
return { requestHeaders: sanitizedHeaders }
})
orderedWebRequest.onCompleted.addEventListener(details =>
requestOrigin.delete(details.id)
)
}
|
STACK_EDU
|
nri-bundle-3.2.8 - Helm Deployment fails
Bug description
A clear and concise description of what the bug is.
When trying to deploy the Newrelic operator on "Redhat OpenShift v4.8", it fails with the error described below.
Version of Helm and Kubernetes
Helm: v3.6.2+5.el8
Kubernetes: v1.21.1+6438632
Which chart?
Not sure but here is the list of charts in the repo.
$ helm search repo newrelic/
NAME CHART VERSION APP VERSION DESCRIPTION
newrelic/newrelic-infra-operator 0.4.0 0.5.0 A Helm chart to deploy the New Relic Infrastruc...
newrelic/newrelic-infrastructure 2.7.2 2.8.2 A Helm chart to deploy the New Relic Infrastruc...
newrelic/newrelic-k8s-metrics-adapter 0.1.1 0.1.0 A Helm chart to deploy the New Relic Kubernetes...
newrelic/newrelic-logging 1.10.4 1.10.0 A Helm chart to deploy New Relic Kubernetes Log...
newrelic/newrelic-pixie 1.4.2 1.4.2 A Helm chart for the New Relic Pixie integration.
newrelic/nri-bundle 3.2.8 1.0 A Helm chart to deploy New Relic integrations b...
newrelic/nri-kube-events 1.11.0 1.6.0 A Helm chart to deploy the New Relic Kube Events
newrelic/nri-metadata-injection 2.1.1 1.6.0 A Helm chart to deploy the New Relic metadata i...
newrelic/nri-prometheus 1.10.0 2.9.0 A Helm chart to deploy the New Relic Prometheus...
newrelic/nri-statsd 1.0.3 2.0.3 A Helm chart to deploy the New Relic Statsd int...
newrelic/simple-nginx 1.1.1 1.1 A Helm chart for installing a simple nginx
newrelic/synthetics-minion 1.0.42 3.0.53 New Relic Synthetics Containerized Private Mini...
What happened?
Command issued:
helm repo add newrelic https://helm-charts.newrelic.com && helm repo update &&
helm upgrade --install newrelic-bundle newrelic/nri-bundle \
  --set global.licenseKey=XXXXX \
  --set global.cluster=openshift-poc \
  --namespace=newrelic \
  --set newrelic-infrastructure.privileged=true \
  --set global.lowDataMode=true \
  --set ksm.enabled=true \
  --set kubeEvents.enabled=true \
  --set prometheus.enabled=true \
  --set logging.enabled=true \
  --set newrelic-pixie.enabled=true \
  --set newrelic-pixie.apiKey=px-api-XXXXX \
  --set pixie-chart.enabled=true \
  --set pixie-chart.deployKey=px-dep-XXXXX \
  --set pixie-chart.clusterName=openshift-poc
Output:
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "newrelic" chart repository
Update Complete. ⎈Happy Helming!⎈
namespace/newrelic created
Release "newrelic-bundle" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "system:controller:operator-lifecycle-manager" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "newrelic-bundle"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "newrelic"
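For reference, this class of error (a pre-existing resource lacking Helm ownership metadata) can often be worked around by adding the labels and annotations the error names, so that Helm adopts the resource instead of refusing to install. A hedged sketch, with the resource and release names taken from the error message above; verify against your cluster before running:

```shell
kubectl label clusterrole system:controller:operator-lifecycle-manager \
  app.kubernetes.io/managed-by=Helm
kubectl annotate clusterrole system:controller:operator-lifecycle-manager \
  meta.helm.sh/release-name=newrelic-bundle \
  meta.helm.sh/release-namespace=newrelic
```

Whether adopting this particular OLM-managed ClusterRole into the chart release is appropriate depends on the cluster, so treat this as a diagnostic step rather than a recommended fix.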
What you expected to happen?
Deployment should have succeeded.
How to reproduce it?
Run the above command on OpenShift 4.8
Steps to reproduce the problem, as minimally and precisely as possible.
Anything else we need to know?
The following command failed before we ran the chart install command:
kubectl apply -f https://download.newrelic.com/install/kubernetes/pixie/latest/olm_crd.yaml
Output:
$ kubectl apply -f https://download.newrelic.com/install/kubernetes/pixie/latest/olm_crd.yaml
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com unchanged
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com unchanged
Warning: resource customresourcedefinitions/operatorconditions.operators.coreos.com is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com unchanged
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com unchanged
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com unchanged
The CustomResourceDefinition "operatorconditions.operators.coreos.com" is invalid: status.storedVersions[0]: Invalid value: "v2": must appear in spec.versions
We updated "spec.versions" to "v2" from "v1" for the CRD "operatorconditions.operators.coreos.com" in the "olm_crd.yaml" locally and deployed it again. It ran successfully.
Surprise us!
Hi,
The next best step is to visit New Relic Support, where you can engage the New Relic Support Community or open a support ticket depending on your support level. The support team is best positioned to assist with your specific needs.
Please provide a link to this GitHub issue when submitting your community post or support ticket.
Thanks!
|
GITHUB_ARCHIVE
|
My PC acts weird. I don't know how to explain it; I think it's the rarest of the rare. First, the Intel graphics accelerator: I installed the correct driver version, but after installation, when I try to open it, just a loading screen appears, an error shows, and the app crashes. I have tried all versions, but it's just a waste of time. I also tried the Intel app that installs updates directly to the PC, but the app itself doesn't work. I've wasted a lot of time researching this.
The second issue is that the Aero theme is not working in Win 7 Ultimate. (I tried installing Win 10 and then Win 8; after installation it boots up, but after reaching the startup screen the interface goes black, and I have to turn the PC off with the power switch. That's another issue, so I stay on Win 7.) For now I'm using the Classic theme.
The third issue is that I can't play any online games. PUBG Lite and emulators don't work at all: the interface shows letters all mixed up, and the icons and pictures are blurry with stripes of colours, not clear at all. The conclusion is that I can't play any online games on my PC because of this.
The fourth issue is that I can't use any internet-related apps: Spotify, and all browsers except IE11. Their interfaces go the same way as described above; the screen goes weird, fonts turn white with black (or coloured) outlines, and I can't read anything. Take Chrome, for example: the interface flickers, and the letters I type in the search box disappear or mix up like a captcha. RealPlayer too. Should I mention every internet-related app?
The fifth issue is that I can't install Win 10 or Win 8 at all. I don't know the reason; maybe this is also related to the processor.
Right now I don't have any pictures to show the issue, so please bear with me. I've tried to sort it out on my own. I contacted the motherboard manufacturer (they said I'm using an outdated processor, but many people still use it with Win 10 and play games like GTA 5 on this chip). I had a chat with Microsoft, but no use. I contacted Intel, and their response was useless: they told me they can't give any support for a discontinued product, then asked for my full name, and when I gave it they said I was faking it. I have only one name, and I told them, but they wouldn't trust me. At first I thought all this was a virus, so I tried formatting the HDD; no result. I tried updating apps and software; no result. I can't use the PC at all. Right now it's only good for saving files, typing notes, and other offline activities. I don't know whether the problem is the chip or another component, but I have checked the whole thing.
INTEL PENTIUM G2010 2.8GHZ
WINDOWS 7 ULTIMATE 64Bit
NO GRAPHICS CARD
So please help me. Please provide a solution for me to sort out this issue!
You have too many issues to solve here; Microsoft support would be a better place.
Now, I know you do not like W10 and W8.x, but W10 is where you need to be, especially since you only have two months of W7 support left.
Backup your data.
Update the motherboard bios. If you do not know how, contact gigabyte.
Do a clean install of W10. If a key is needed, use your W7 Ultimate key. Use the media you can create here:
Do not install any third party apps or anti-virus.
Your Ivy Bridge processor is still supported, and your graphics driver should be installed automatically by the W10 install.
Now, with this clean install, you should have no problems. If you start adding 3rd party software, check for problems after EACH INSTALL.
|
OPCFW_CODE
|
using FFmpegLite.NET.Enums;
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Globalization;
using System.IO;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
namespace FFmpegLite.NET
{
/// <summary>
/// Extension methods to extend FFmpegConvertTask
/// </summary>
public static class FFmpegConvertTaskExtensions
{
/// <summary>
/// Use H.264 Baseline Profile
/// </summary>
/// <param name="task"></param>
/// <returns></returns>
public static TTask UseBaselineProfile<TTask>(this TTask task) where TTask : FFmpegConvertTask
{
task.AppendCommand(" -profile:v baseline ");
return task;
}
/// <summary>
/// Resize frame size
/// </summary>
/// <param name="task"></param>
/// <param name="width">Leave it null if want auto scale with height</param>
/// <param name="height">Leave it null if want auto scale with width</param>
/// <returns></returns>
public static TTask Resize<TTask>(this TTask task, int? width, int? height) where TTask : FFmpegConvertTask
{
if (width.HasValue || height.HasValue)
{
task.AppendCommand(" -vf \"scale={0}:{1}\" ", width ?? -2, height ?? -2);
}
return task;
}
/// <summary>
/// set video fps
/// </summary>
/// <param name="task"></param>
/// <param name="fps"></param>
/// <returns></returns>
public static TTask Fps<TTask>(this TTask task, int fps) where TTask : FFmpegConvertTask
{
task.AppendCommand($" -r {fps} ");
return task;
}
/// <summary>
/// set audio bit rate
/// </summary>
/// <param name="task"></param>
/// <param name="audioBitRate"></param>
/// <returns></returns>
public static TTask AudioBitRate<TTask>(this TTask task, int audioBitRate) where TTask : FFmpegConvertTask
{
task.AppendCommand($" -ab {audioBitRate}k ");
return task;
}
/// <summary>
/// Use faststart flags for mp4 video for better online play
/// </summary>
/// <typeparam name="TTask"></typeparam>
/// <param name="task"></param>
/// <returns></returns>
public static TTask UseFastStartMode<TTask>(this TTask task) where TTask : FFmpegConvertTask
{
task.AppendCommand(" -movflags +faststart ");
return task;
}
/// <summary>
/// Set Video Aspect Ratio
/// </summary>
/// <typeparam name="TTask"></typeparam>
/// <param name="task"></param>
/// <param name="videoAspectRatio"></param>
/// <returns></returns>
public static TTask VideoAspectRatio<TTask>(this TTask task, VideoAspectRatio videoAspectRatio) where TTask : FFmpegConvertTask
{
var ratio = videoAspectRatio.ToString();
ratio = ratio.Substring(1);
ratio = ratio.Replace("_", ":");
task.AppendCommand(" -aspect {0} ", ratio);
return task;
}
/// <summary>
/// Crop video
/// </summary>
/// <typeparam name="TTask"></typeparam>
/// <param name="task"></param>
/// <param name="videoCrop"></param>
/// <returns></returns>
public static TTask Crop<TTask>(this TTask task, Rectangle videoCrop) where TTask : FFmpegConvertTask
{
task.AppendCommand(" -filter:v \"crop={0}:{1}:{2}:{3}\" ", videoCrop.Width, videoCrop.Height, videoCrop.X, videoCrop.Y);
return task;
}
/// <summary>
/// Set video bit rate
/// </summary>
/// <typeparam name="TTask"></typeparam>
/// <param name="task"></param>
/// <param name="bitRate"></param>
/// <returns></returns>
public static TTask VideoBitRate<TTask>(this TTask task, int bitRate) where TTask : FFmpegConvertTask
{
// Plain "-b" without a stream specifier is ambiguous/deprecated in ffmpeg; be explicit.
task.AppendCommand(" -b:v {0}k ", bitRate);
return task;
}
/// <summary>
/// The frame to begin seeking from
/// </summary>
/// <typeparam name="TTask"></typeparam>
/// <param name="task"></param>
/// <param name="seek"></param>
/// <returns></returns>
public static TTask Seek<TTask>(this TTask task, TimeSpan seek) where TTask : FFmpegConvertTask
{
task.AppendCommand(CultureInfo.InvariantCulture, " -ss {0} ", seek.TotalSeconds);
return task;
}
/// <summary>
/// Set max video duration
/// </summary>
/// <typeparam name="TTask"></typeparam>
/// <param name="task"></param>
/// <param name="maxVideoDuration"></param>
/// <returns></returns>
public static TTask VideoMaxDuration<TTask>(this TTask task, TimeSpan maxVideoDuration) where TTask : FFmpegConvertTask
{
task.AppendCommand(" -t {0} ", maxVideoDuration);
return task;
}
/// <summary>
/// Set audio sample rate
/// </summary>
/// <typeparam name="TTask"></typeparam>
/// <param name="task"></param>
/// <param name="audioSampleRate"></param>
/// <returns></returns>
public static TTask AudioSampleRate<TTask>(this TTask task, AudioSampleRate audioSampleRate) where TTask : FFmpegConvertTask
{
task.AppendCommand(" -ar {0} ", audioSampleRate.ToString().Replace("Hz", ""));
return task;
}
/// <summary>
/// Set video target, for physical media conversion (DVD etc)
/// </summary>
/// <typeparam name="TTask"></typeparam>
/// <param name="task"></param>
/// <param name="target"></param>
/// <param name="targetStandard"></param>
/// <returns></returns>
public static TTask Target<TTask>(this TTask task, Target target, TargetStandard? targetStandard = null) where TTask : FFmpegConvertTask
{
task.AppendCommand(" -target ");
if (targetStandard.HasValue)
{
task.AppendCommand(" {0}-{1}", targetStandard.ToString().ToLowerInvariant(), target.ToString().ToLowerInvariant());
}
else
{
task.AppendCommand("{0} ", target.ToString().ToLowerInvariant());
}
return task;
}
/// <summary>
/// start to convert file
/// </summary>
/// <param name="task"></param>
/// <param name="outputFile"></param>
/// <returns></returns>
public static async Task<FileInfo> ConvertAsync(this FFmpegConvertTask task, string outputFile, CancellationToken cancellationToken = default)
{
return await ConvertAsync(task, outputFile, FFmpegEnviroment.Default, cancellationToken: cancellationToken);
}
/// <summary>
/// start to convert file
/// </summary>
/// <param name="task"></param>
/// <param name="outputFile"></param>
/// <returns></returns>
public static async Task<FileInfo> ConvertAsync(this FFmpegConvertTask task, string outputFile, FFmpegEnviroment enviroment, CancellationToken cancellationToken = default)
{
task.AppendCommand($" \"{outputFile}\" ");
var process = new FFmpegProcess();
await process.ExecuteAsync(task, enviroment, cancellationToken: cancellationToken);
task.OutputFile = new FileInfo(outputFile);
return task.OutputFile;
}
}
}
|
STACK_EDU
|
Coding and decoding questions are designed to judge a candidate's ability to decipher the rule that a given code follows. When approaching a question, first decide the type of question being asked, and then look for the common pattern in it. After decoding each code, arrange all the codes in tabular form so that you can easily find the answer to every question. Remember that this is a scoring topic, but a single mistake can make every answer wrong.
Types of Coding-Decoding
1. Letter coding
Coding by shifting letters:
Example: In a certain code language, "GLIDERS" is written as "ERSDGLI". How is "TOASTER" written in that code language?
Solution: In "GLIDERS" (G-L-I-D-E-R-S), the code puts the last three letters first (ERS), then the fourth letter (D), then the first three (GLI). Applying the same rule to "TOASTER" gives "TERSTOA".
2. Coding in Fictitious Language
In a certain code language
‘rainfall today target’ is written as ‘mn vo na’, ‘strong rises higher’ is written as ‘sa ra ta’, ‘target rises inquiry’ is written as ‘la vo sa’, ‘victory plant rainfall’ is written as ‘mn ha ja’.
What is the code for 'rainfall'?
Solution: 'rainfall' appears only in the first and fourth sentences, and 'mn' is the only code common to both, so 'rainfall' is coded as 'mn'.
3. Coding by substitution
Example: White means black, black means red, red means blue, blue means yellow and yellow means grey, then which of the following represents the colour of clear sky?
Solution: Clearly, we know that, the actual colour of sky is blue, and as given blue means yellow. Hence, the colour of sky is yellow.
As you can observe from recent exams, the pattern of coding-decoding questions has changed. In these questions we cannot find the code for a word by cancelling out the common words in two sentences, as each word is different. There is no single logic that covers all such questions, since each question uses its own; however, there are some commonly used logics with which we can solve most of them. They are as follows:
(i) Reversing the alphabet: For example E is coded as V i.e. V will occupy the same position in the alphabetical series as E, when the whole alphabetical series is written in reverse order.
(ii) Using the numerical value of the rank of the alphabets in the alphabetical series. For example C=3.
(iii) Using the numerical value of the rank of the reverse of alphabets. For example A=26(value of rank of Z).
(iv) Using the vowels of the word in the code as it is.
(v) Using the vowels of the word in the code and reversing it or using its numerical value in the code.
(vi) Using the numerical value of the total number of letters of the word in the code.
(vii) Coding an alphabet in the word as the next or previous letter of that alphabet in the alphabetical series. For example D is coded as E or C.
Most of the questions of changed pattern revolve around the above mentioned logic.
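For practice, logics (i)-(iii) above can be sketched in a few lines of Python (illustration only):

```python
# (i)   Reverse alphabet: A <-> Z, B <-> Y, ... so E -> V.
# (ii)  Rank in the alphabetical series: C -> 3.
# (iii) Rank of the reversed letter: A -> 26 (the rank of Z).

def reverse_letter(c):
    return chr(ord('Z') - (ord(c) - ord('A')))

def rank(c):
    return ord(c) - ord('A') + 1

def reverse_rank(c):
    return 27 - rank(c)

print(reverse_letter('E'), rank('C'), reverse_rank('A'))  # V 3 26
```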
In order to solve such questions quickly, you need to memorize the table of the reverses of the alphabets and the numerical values of their ranks.
Following are some examples to help you understand the use of above-mentioned concepts.
|
OPCFW_CODE
|
There is no relationship between ACT and PyMOL, so one wouldn't necessarily expect them to match in terms of how they name the resulting objects. However, I suspect there may also be a difference of intent:
Based on a quick glance at PyMOL source code, PyMOL appears to convey a relative cell translation (based on centers of geometries), whereas ACT may be returning the computation formula.
In other words, PyMOL attempts to inform the user as to whether the generated symmetry-related atom selection is within the same cell as the query selection or in one of the adjacent cells (in a translational sense). Thus in PyMOL, the (overall) nearest mates (with respect to the center of geometry) will usually have a translation of 00 00 00, and most of the time, nearby mates will vary +1 or -1 along a single translation.
I'm guessing ACT simply provides the symmetry operator and the effective translation applied to generate the nearby mate. That's useful for recomputing the mate later on, but it doesn't tell the user anything about proximity. PyMOL's approach is unhelpful for recomputing a mate, but one can tell from the object name alone which objects are likely to have the most extensive contacts and how they relate in a relative sense (with respect to cell translations away from the query selection).
Perhaps PyMOL could provide ACT-like naming through an optional setting?
From: email@example.com [mailto:firstname.lastname@example.org]
Sent: Thu 9/17/2009 4:52 PM
Subject: [PyMOL] Pymol Symmetry Mates Naming
I am encountering a problem with symmetry mates generated by PyMOL. It seems that the naming systems in PyMOL and the CCP4-supported ACT program are not consistent. I have tested several PDBs in the P4212 space group and attempted to figure out the relationship between the two, but failed. I would appreciate any hints.
Some comparisons are listed below.
ACT: NSYM (the number of the symmetry operation) followed by the number of unit-cell translations along x, y, z.
PyMOL: the first two digits are the symmetry operation. The next six digits correspond to the relative integral unit-cell translation, xxyyzz.
I figured out that the symmetry operation (the first two digits in PyMOL) is off by 1 compared to the first digit in ACT, but I have no idea how the last six digits relate to the ACT numbers.
PyMOL-users mailing list (PyMOLemail@example.com)
Info Page: https://lists.sourceforge.net/lists/listinfo/pymol-users
|
OPCFW_CODE
|
Best way to enforce this data-integrity constraint?
I have 3 tables (let's call them Foo, Bar, and Baz).
Tables:
Foo
FooId
Bar
BarId
FooId
Baz
BazId
BarId
AnotherValue
Obviously the foreign keys make it so that each Baz is associated with a Bar, and hence, associated with a Foo.
Now I want to ensure that, for each set of Baz rows with the same "AnotherValue", all the associated Foos are unique.
For instance, if I had
Foos (1, 2, 3)
Bars ((10, 1), (11, 1), (12, 1), (13, 2))
Bazs ((100, 10, "a"), (101, 10, "b"), (102, 13, "a"), (104, 11, "b"))
this should be blocked because Baz 104 and Baz 101 both have AnotherValue "b" and Foo 1.
Options I have thought of (in order of my current preference)
Indexed View
I could create a view over these three tables and put a unique index on the two columns
Computed Column
Add FooId as a computed column to Baz. Then add an index on AnotherValue and FooId.
Check Constraints
I'm pretty sure this can be added and will work. I haven't used check constraints much, and I'm not sure if it is the best way to do this.
Trigger
This just seems ugly to me.
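A sketch of the indexed-view option in T-SQL (table and column names taken from the question; this assumes SQL Server, where the unique clustered index on a schema-bound view enforces the constraint at insert time):

```sql
CREATE VIEW dbo.BazFoo
WITH SCHEMABINDING
AS
SELECT z.BazId, b.FooId, z.AnotherValue
FROM dbo.Baz AS z
JOIN dbo.Bar AS b ON b.BarId = z.BarId;
GO

-- Any insert/update that would produce a duplicate (FooId, AnotherValue)
-- pair now fails with a unique-index violation.
CREATE UNIQUE CLUSTERED INDEX UX_BazFoo
ON dbo.BazFoo (FooId, AnotherValue);
```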
I'm not sure I completely understand the question, but it seems like you want to carry the FooId down to the Baz table and add an Alternate Key (unique constraint) on FooId, AnotherValue.
Baz
•BazId
•BarId
•FooId
•AnotherValue
This would effectively be the view which is indexed. The "carry down" is handled by the view and then a unique index constraint is applied to the view.
Just restating using different names:
Parent (ParentID)
(1, 2)
Child (ChildID, ParentID)
((10, 1), (11, 1), (13, 2))
GrandChild (GCID, ChildID, GCAndParentVal)
((100, 10, "a"), (101, 10, "b"), (102, 13, "a"), (104, 11, "b"))
Could you have that field in another table with the parentID and baz.anotherValue instead of including it in the table with the childID?
like this:
Parent (ParentID)
(1, 2)
Child (ChildID, ParentID)
((10, 1), (11, 1), (13, 2))
GrandChild (GCID, ChildID, ...)
((100, 10, ...), (101, 10, ...), (102, 13, ...), (104, 11, ...))
AnotherChildValue (ParentID, AnotherVal)
((1, "a"), (1, "b"), (2, "a"), (1, "b"))
I don't really understand your question, but no, the Parent ID needs to come from Child.
This would cause a referential integrity issue between the ParentID on Child and the ParentID on AnotherChildValue.
Why? The PK on the AnotherChildValue table would keep your invalid record (1,"b") from being accepted. The Parent table would have FK relationships with both Child and AnotherChildValue.
|
STACK_EXCHANGE
|
All standard workshops can be adapted to your specific needs - just reach out to clarify the details
Half-day introduction to the basics of AI, machine learning and use cases.
Primarily targeting managers with tight schedules who need a quick grasp of the topic.
Full day introduction to AI. The goal of this day is to enable participants to define requirements for AI projects, find the right data-sources and challenge/steer internal or external implementation providers.
AI for Developers
Multi-day deep dive for IT, developers and data science staff. Based on detailed Python examples and working assignments, you will learn all the basics, from loading and cleaning your data to training sophisticated machine learning models.
We are happy to offer you a tailor-made selection of the content modules or prepare special requests for you.
All content modules can be combined to individual workshops - we will be happy to include specific content and use-cases for your needs
Is it artificial intelligence or machine learning? What is intelligence in the first place? And what is deep learning?
How does everything connect to big data, Industry 4.0 and robotics?
In this module we clear up some of the most common buzz words and set the context for the whole workshop
You know it when you see it, which is why we discuss some well-known use cases, from AlphaGo to self-driving cars, at the beginning of the workshop to give you a feeling for what the frontier of AI research looks like.
While we are currently experiencing a great hype around AI, the field itself has quite a history. We explain some of the key moments and discuss what was regarded as AI in the '80s and '90s.
This module in our management introduction focuses entirely on how to align your AI efforts with your existing company strategy. Do you need an AI strategy, or rather a strategy incorporating AI? Do you need a CAIO? Should you start quickly with external help or focus more on educating existing teams?
This module is one of the core elements of all of our workshops. You will learn what a machine learning model is on an abstract level. Then we will explore some specific models and (except in the intro workshop) find our way to neural networks. You will understand the relationship between available data and feasible model complexity.
Of course, every great technology trend comes with risks attached. What are adversarial examples and how can we attack an AI with them? What can go wrong when integrating AI models into production environments? How does bias in your training data affect the model results, and what ethical aspects should you pay attention to when applying artificial intelligence in your company?
One key skill in an AI-driven world will be the ability to translate between increasingly complex business use-cases and state-of-the-art machine learning approaches. In this hands-on workshop section we will focus on how to define crisp and clear input and output formats and how to articulate our requirements for the artificial intelligence outcome.
Deep neural networks are at the core of the current exponential developments in artificial intelligence. But how do they work? What are artificial neurons? And what is the link between the very basic machine learning models we saw previously in the workshop and those highly complex models?
Cloud services are easy to use at a low cost. But do you know what data was used to train the service? Is there any bias in the cloud-based machine learning model? How can you make sure that the service satisfies your project requirements?
In this section we will explore several cloud offerings and discuss what potential down-sides of cloud based AI solutions are and how to mitigate them.
This module is meant for developers and IT professionals who already know other programming languages but need a quick start in Python.
We will also introduce the interactive Jupyter notebooks we use in the workshop and guide you through some core principles you need to know for the upcoming interactive training.
There are many exceptionally good libraries and frameworks to help you with machine learning in Python. Three of the most important ones to start with are Pandas, NumPy and scikit-learn. Together we will walk through the core principles of all three packages, and then you will practice all steps hands-on with classic machine learning datasets like Iris and MNIST.
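As a taste of the hands-on part, here is a minimal sketch using this stack (the dataset and model choice are illustrative, not a fixed workshop assignment):

```python
# Load the classic Iris dataset, split it, and fit a simple classifier.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

iris = load_iris(as_frame=True)          # features arrive as a pandas DataFrame
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```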
We will explore the two most basic forms of machine learning (in this case, rather basic maths) and lay the foundation for understanding the concept of training a model. Furthermore, we will learn how data can be fed into a model and how classification works, i.e. when the output we want from the AI is a class (dog/cat/mouse).
What can go wrong when you train a highly complex model with very little data? How can we detect this effect, and what can we do to mitigate this so-called overfitting?
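A quick sketch of how the effect shows up in practice (synthetic data chosen by us; the gap between training and test accuracy is the tell-tale sign):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Few samples, many features, noisy labels: easy to memorize, hard to generalize.
X, y = make_classification(n_samples=60, n_features=50, n_informative=5,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train:", tree.score(X_train, y_train))  # 1.0 -- the data is memorized
print("test: ", tree.score(X_test, y_test))    # typically noticeably lower
```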
Now that we have understood all the basic concepts, from training linear and logistic regression to overfitting and regularization, we are ready to explore a set of classic machine learning models.
Although this type of classifier is very different from the approaches we have learned so far, you have to understand the unreasonable effectiveness of naive Bayes classifiers and how they are applied to email texts in order to have a full overview of the classic machine learning landscape.
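A minimal sketch of the idea on a tiny, invented corpus (real email classifiers train on far more data):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["win money now", "cheap money offer",
         "meeting at noon", "lunch with the team"]
labels = [1, 1, 0, 0]                      # 1 = spam, 0 = ham

vec = CountVectorizer()
X = vec.fit_transform(texts)               # bag-of-words counts
clf = MultinomialNB().fit(X, labels)

print(clf.predict(vec.transform(["win cheap money"]))[0])  # 1 (spam)
```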
In this demo we will walk through a neural-network-based anomaly detection method using autoencoders. You will understand the power of the method and what the prerequisites are for applying it in your use-cases.
Clustering algorithms are among the most common unsupervised methods in machine learning. We will introduce you to several concepts, from k-means clustering to DBSCAN, and walk you through the pros and cons of the different approaches depending on your dataset.
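The core of a clustering workflow fits in a few lines (synthetic blobs used here as stand-in data):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Three well-separated blobs of points in 2D.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# Fit k-means and assign each point to one of 3 clusters.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("clusters found:", len(set(labels)))  # 3
```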
This module will greatly help you in data exploration and visualization.
Sparse datasets with a very high dimensionality have several problems. The most obvious one is that we cannot visually inspect the data anymore. Further problems include that concepts like nearest neighbors based on distance break down and it can become incredibly hard to train machine learning models with the data.
Luckily there are several methods to reduce the dimensions of a given dataset - in this section we will explore three: PCA, t-SNE and UMAP
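For example, PCA reduces the 64-dimensional digits images down to two coordinates that can be plotted directly (a sketch; t-SNE and UMAP follow the same fit/transform pattern):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)        # 1797 samples x 64 pixel features
X2 = PCA(n_components=2).fit_transform(X)  # project onto the top two components
print(X.shape, "->", X2.shape)             # (1797, 64) -> (1797, 2)
```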
We have had the pleasure of working with teams from the following companies. Of course, we cannot share any project details, but in case you need a reference we will be happy to make an introduction.
We have helped several hundred participants with diverse backgrounds - from HR/Sales/Marketing to IT/Developers/System architects - to master the first steps in AI. We would be happy to welcome you among our clients
We are looking forward to working with you
|
OPCFW_CODE
|
In Photoshop, how can I replicate the color tone (sepia) from one photo to another?
I scan old original photos from the 30's and 40's and for some of them I get some blueish metallic reflections that are not visible "in person" (and the photo paper is not glossy). I think this phenomenon is called silvering (or mirroring?).
Here is an example with one of my photos (look on the person and in the dark areas of the photo you'll see the blueish reflection - look at the photo at 100% size otherwise it can be hard to see):
I use a color deconvolution plug-in in Photoshop to remove these reflections with amazing success:
But the repair process changes the color tone of the image (the sepia). In the example above, the original is a bit more brown/yellow.
How, with Photoshop, can I restore the original color tone after removing the reflections?
Before editing the photo, use the eyedropper tool and pick the color of a middle-toned, well saturated area. This will be stored in the foreground color swatch.
Next do the needed adjustments.
Finally, create a new adjustment layer and choose 'Solid Color'.
This will create a layer that is uniformly filled with the color you picked at the beginning (the foreground swatch). Click OK, and then change the newly created layer's blending mode to 'Color'.
In the last step flatten the image to apply the color adjustment (Layer -> Flatten image).
Tip: save (or write down) the sepia tone's color hex value (#rrggbb) so you can enter it when creating the color adjustment for another photo. This is useful when the foreground color changes during image processing, or when editing another image in the future.
Thanks! Do you have a good trick to find the best middle-toned area of the image? I can't seem to get the right color to get at least a similar sepia.
I made another screenshot. See the eyedropper cursor on the image: https://imgur.com/a/PABLvu8
All you need to do is scan the images black and white and then apply Gradient Map to them.
Gradient Map is below the layers panel (the contrast icon). There you can first add a black-and-white adjustment layer, then add a Gradient Map and choose the colors that you like. You can group these adjustment layers (select and Ctrl+G); then you can paste any picture below them and you have your perfect custom sepia effect for any picture. You can also save your gradient preset for a faster workflow.
I think what the plugin has done is a bit heavy handed to be honest. It has not only removed the bluishness in the shadows, but has kind of stretched out the highlights and resulted in a subtle colour shift over the entire photo. You can see this if you check the histograms for each image.
Maybe try a different technique to fix the photo, something a bit more targeted to the specific areas that need fixing.
Sample a light but saturated area of sepia colour with the eyedropper, for example the lighter line between the darker clouds and horizon.
Choose the Paint Brush tool, and set the brush blending mode to Hue, 100% opacity and flow.
With a large soft edged brush, paint over only the darkest areas where the bluishness is visible.
Alternatively, perhaps make a selection of only the shadows before applying your plugin.
|
STACK_EXCHANGE
|
Curvature refers to any of a number of loosely related concepts in different areas of geometry. Intuitively, curvature is the amount by which a geometric object deviates from being flat,
but this is defined in different ways depending on the context. There is a key distinction between extrinsic curvature
, which is defined for objects embedded in another space (usually a Euclidean space
) in a way that relates to the radius of curvature of circles that touch the object, and intrinsic curvature
, which is defined at each point of a differentiable manifold. This article deals primarily with the first concept.
The primordial example of extrinsic curvature is that of a circle, which has curvature equal to the inverse of its radius everywhere. Smaller circles bend more sharply, and hence have higher curvature. The curvature of a smooth curve is defined as the curvature of its osculating circle at each point.
In a plane, this is a scalar quantity, but in three or more dimensions it is described by a curvature vector that takes into account the direction of the bend as well as its sharpness. The curvature of more complex objects (such as surfaces or even curved n-dimensional spaces) is described by more complex objects from linear algebra, such as the general Riemann curvature tensor.
The remainder of this article discusses, from a mathematical perspective, some geometric examples of curvature: the curvature of a curve embedded in a plane and the curvature of a surface in Euclidean space.
See the links below for further reading.
One dimension in two dimensions: Curvature of plane curves
For a plane curve C, the mathematical definition of curvature uses a parametric representation of C with respect to the arc length parametrization. It can be computed given any regular parametrization by a more complicated formula given below. Let γ(s) be a regular parametric curve, where s is the arc length, or natural parameter. This determines, at each point, the unit tangent vector T = γ′(s), the unit normal vector N = T′/|T′|, the curvature κ(s) = |γ″(s)|, the oriented or signed curvature k(s) (with |k| = κ), and the radius of curvature R(s) = 1/κ(s).
The curvature of a straight line is identically zero. The curvature of a circle of radius R is constant, i.e. it does not depend on the point and is equal to the reciprocal of the radius: κ = 1/R.
Thus for a circle, the radius of curvature is simply its radius. Straight lines and circles are the only plane curves whose curvature is constant. Given any curve C and a point P on it where the curvature is non-zero, there is a unique circle which most closely approximates the curve near P, the osculating circle at P. The radius of the osculating circle is the radius of curvature of C at this point.
The meaning of curvature
Suppose that a particle moves on the plane with unit speed. Then the trajectory of the particle will trace out a curve C in the plane. Moreover, taking the time as the parameter, this provides a natural parametrization for C. The instantaneous direction of motion is given by the unit tangent vector T, and the curvature measures how fast this vector rotates. If a curve keeps close to the same direction, the unit tangent vector changes very little and the curvature is small; where the curve undergoes a tight turn, the curvature is large.
The magnitude of curvature at points on physical curves can be measured in diopters (also spelled dioptre), which is the convention in optics. A diopter has the dimension of reciprocal length (1/m).
The sign of the signed curvature k
indicates the direction in which the unit tangent vector rotates as a function of the parameter along the curve. If the unit tangent rotates counterclockwise, then k
> 0. If it rotates clockwise, then k < 0.
The signed curvature depends on the particular parametrization chosen for a curve. For example, the unit circle can be parametrised by γ(t) = (cos t, sin t) (counterclockwise, with k > 0), or by γ(t) = (cos t, −sin t) (clockwise, with k < 0). More precisely, the signed curvature depends only on the choice of orientation of an immersed curve. Every immersed curve in the plane admits two possible orientations.
For a plane curve given parametrically as γ(t) = (x(t), y(t)),
the curvature is
κ = |x′y″ − y′x″| / (x′² + y′²)^(3/2),
and the signed curvature k is
k = (x′y″ − y′x″) / (x′² + y′²)^(3/2).
For the less general case of a plane curve given explicitly as y = f(x), the curvature is
κ = |f″(x)| / (1 + f′(x)²)^(3/2).
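As a numerical sanity check of the parametric formula, one can evaluate it for a circle of radius R and recover κ = 1/R (a small sketch, not part of the article):

```python
import math

def curvature(x1, y1, x2, y2):
    """kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2) for a parametric curve,
    given the first and second derivatives at a point."""
    return abs(x1 * y2 - y1 * x2) / (x1**2 + y1**2) ** 1.5

# Circle of radius 2: gamma(t) = (2 cos t, 2 sin t), evaluated at t = 0.7.
R, t = 2.0, 0.7
x1, y1 = -R * math.sin(t), R * math.cos(t)   # first derivatives
x2, y2 = -R * math.cos(t), -R * math.sin(t)  # second derivatives
print(curvature(x1, y1, x2, y2))  # 0.5, i.e. 1/R
```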
|
OPCFW_CODE
|
As technology advances, it’s becoming more and more important to stay up-to-date with the latest skills and tools. In 2023, mastering a few essential tech skills can give you a huge edge over the competition. From coding to machine learning, it’s important to understand these skills in order to stay ahead of the curve.
Here are six essential tech skills to master in 2023.
Enhancing Your Coding Skills
Coding is one of the most essential tech skills to master in 2023. Coding is the process of writing instructions for computers to execute. It is the foundation of all software development and is an invaluable skill for anyone interested in the tech industry.
There are many coding languages to learn, and many online courses and tutorials available that teach them. For example, you could take an intro to Java course. Just make sure that you practice coding regularly in order to become proficient.
Strengthening Your Knowledge of Artificial Intelligence
Another essential tech skill to master in 2023 is artificial intelligence (AI). AI refers to computer systems that are able to perform tasks that usually require human intelligence, such as voice recognition and image recognition. AI is becoming increasingly important in the tech industry, and it’s used in many applications and products.
Understanding AI requires knowledge of several topics, such as machine learning, natural language processing, and computer vision. There are many online courses and tutorials available to help you learn AI.
Having a strong knowledge of AI can open up many opportunities in the tech industry. From developing AI-powered applications to creating assistive technologies, understanding AI can help you stand out from the competition.
Understanding Cloud Computing
Cloud computing is another essential tech skill to master in 2023. Cloud computing is the process of storing and accessing data and applications over the Internet, instead of a local server or computer.
In order to understand cloud computing, it’s important to have a basic understanding of computer networking, data storage, and software development.
Learning About Cybersecurity
Cybersecurity is another essential tech skill to master in 2023. Cybersecurity is the practice of protecting networks, systems, and programs from digital attacks. Having a strong knowledge of cybersecurity can open up many opportunities in the tech industry. From developing secure applications to managing security infrastructure, cybersecurity is definitely a tech skill you’ll want to understand and master this year.
Incorporating Data Analysis Into Your Work
Data analysis is another essential tech skill to master. Data analysis is the process of collecting, organizing, and analyzing data in order to draw conclusions and make decisions.
In order to understand data analysis, it’s important to have a basic understanding of database management, statistics, and machine learning.
Utilizing Machine Learning
Lastly, machine learning is a type of artificial intelligence that enables computers to learn from data without being explicitly programmed. Machine learning is becoming increasingly important in the tech industry, and is used by many companies and organizations.
An understanding of machine learning can open up many opportunities in the tech industry. From developing machine learning models to analyzing data for insights, machine learning can provide you with numerous career opportunities.
In 2023, mastering a few essential tech skills can give you a huge edge over the competition.
So, get started with learning a tech skill today!
|
OPCFW_CODE
|
The next 50 seats are JUST $17! (19 left)
"Knowing we can deploy what's effectively an AI librarian & document analyst rolled into one is likely to be a game changer for us on a number of fronts"
If - like us - you have a huge array of different content (videos and documents), knowing that you can analyse meta data, add tags, potentially re-route and update only certain parts of your library without having to manually check or build complex solutions, could be a game changer.
One thing we didn't want to do, however, was buy into a 'one-trick pony' that could only catalogue our document metadata.
We wanted a tool that could pull in data, arrange, add to and complement our SharePoint information with sources outside of SharePoint. For example using OCR to digest our invoices and more.
Microsoft Syntex allows us to do this and unlock a whole lot MORE besides.
If you are keen to know more about "how to solve a problem like content" in your organisation, this workshop is a must see.
What You’ll learn from this Workshop:
Microsoft Syntex brings the power of AI to help categorise, understand and action your content without the need for manual analysis of thousands of data items or documents. You'll learn how in this in-depth workshop.
Introduction to Microsoft Syntex
What are the key features of Microsoft Syntex and how can you put them to use.
The End-Users Experience
Where to look inside Microsoft Syntex and how to get to grips with what it's telling you. What are the important moving parts of Syntex that you will use most frequently, and where can you find the hidden gems?
Setting up Syntex
How to setup a trial in your environment and configure it to get going.
What costs do you need to be aware of and how can you efficiently solve the problem of value being locked inside your documents without spending big on licences.
How to unlock the powerful Content/ Document analytics and processing capabilities to solve your business problems.
Some big announcements were made at Microsoft Ignite. We will discuss some of the most important ones such as backup, restore, translation and archiving.
This Masterclass will be held over 5 hours and covers:
Part 1 - Getting Started With Microsoft Syntex
- Introduction to Microsoft Syntex (as it is today).
- Get started with setup - how to setup the trial of Syntex in your environment and configuring the environment.
- Build out Content Processing - using Syntex with some sample documents (Note: This uses PnP PowerShell to help get started but it isn't needed to use the same skills with your own content).
Part 2 - The End-User Experience & Licensing
- In this part, we will take a tour around Syntex to fully understand it from an end-users point of view.
- We will also discuss the dreaded "L word". We will learn about the licensing needs for model builders, administrators and end users.
- The exciting future - an overview of the announcements for Microsoft Syntex from Ignite 2022 including translation, backup/restore and archival
Meet your Workshop Host...
From dev to PM, SharePoint farm owner to analyst, Kevin has done it all and has the hair (or lack of) to prove it. Having started in the world of Financial Services with delivering solutions on Microsoft technologies to help staff collaborate and be more productive, Kevin jumped the fence a few years ago to the consultancy side. He has always loved to share and is a Microsoft MVP in Microsoft 365 Apps and Services, Viva Explorer and co-hosts the weekly modern workplace podcast GreyHatBeardPrincess in between his day job as a Practice Lead for Modern Workplace at CPS.
Kevin McDonnell, Microsoft Most Valuable Professional (MVP)
100% Satisfaction Guaranteed
Firstly, like thousands of others, we're confident you will love this content! However, if you decide it's not for you, let us know and we will refund your money within 30 days.
Frequently Asked Questions
As soon as you purchase your pass you will be emailed a receipt. If you'd also like a formal invoice, please send your request to email@example.com.
We offer a full 7-Day free Trial with nothing held back. If you choose to cancel before the 7-days then you pay nothing. For this reason we don't offer refunds.
At the moment, only individual passes are available. Please contact us at firstname.lastname@example.org if you'd like to discuss the purchase of more than 5 passes.
No, only the person who purchases the pass will be able to watch the videos and will receive the ebooks. We ask that you do not share them.
|
OPCFW_CODE
|
The crypto asset industry has created a new vocabulary for discussing events that happen in and around blockchains. As part of Amun’s mission to make crypto assets as easy to invest in as stocks, we need to find a common language to describe the various ways to interpret relevant data and actions. In our experience, the terms “forks” and “airdrops” in particular, lead to questions and uncertainty from investors regarding their impact on the value of a given asset and how best to think of them conceptually. This article deals firstly with forks then finishes with a brief explanation of airdrops.
Forks are the essential corporate governance action within the crypto asset world and they can be thought of in a couple of different ways depending on the specifics of the situation. The following categories are not necessarily mutually exclusive but offer an almost exhaustive categorization of the different sorts of forks possible:
A Soft Fork is a system upgrade wherein newly created blocks made under the updated rules of the soft fork are still accepted by nodes running the older version of the software – therefore maintaining their forwards-compatibility. As such, the set of possible valid blocks under a new soft fork ruleset is a proper subset of the group of possible valid blocks on the pre-forked chain. An example is the SegWit fork on the Bitcoin network.
A Hard Fork is similarly a system upgrade but, in this case, the newly created blocks made under the updated rules of the hard fork are not accepted by nodes running the older version of the software – therefore rendering pre-fork clients no-longer forwards-compatible. An example of a Hard Fork is the Constantinople fork on the Ethereum Network.
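The compatibility relationship between the two can be illustrated with a toy validity rule (invented numbers, not real Bitcoin or Ethereum consensus logic): a soft fork tightens the rules, so its blocks stay valid for old nodes, while a hard fork loosens them, so its blocks may be rejected by old nodes.

```python
def old_rules(block):       return block["size"] <= 2   # pre-fork validity
def soft_fork_rules(block): return block["size"] <= 1   # subset of old rules
def hard_fork_rules(block): return block["size"] <= 4   # superset of old rules

soft_block = {"size": 1}    # produced under the soft fork
hard_block = {"size": 4}    # produced under the hard fork

print(old_rules(soft_block))  # True:  old nodes still accept it
print(old_rules(hard_block))  # False: old nodes reject it
```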
A Contested Fork is a fork that is not universally accepted by all network participants. If the upgrade isn't accepted by all of the participants, this can result in a new chain being created, as was the case with the creation of Ethereum Classic following the DAO hard fork. Much like a stock split, these actions result in the creation of a new asset. This new asset is generally distributed to the holders of the original asset and may be very different from the original product. This operational process is virtually identical to a stock split (including an ex-date announced in advance and a process for claiming those assets on behalf of holders). However, there are a couple of key differences between a contested fork and a stock split: based on historical evidence, a hard fork does not change the value of the legacy asset, and you would not expect to see the corresponding drop in the per-unit value of the parent asset that you see in the case of a stock split.
The below charts help illustrate this point by comparing Walmart (WMT) historical 2-for-1 stock splits to the Bitcoin-Bitcoin Cash hard fork. The vertical lines represent the days of the Walmart stock splits in the first chart, and the single vertical line represents the Bitcoin Cash hard fork date in the second chart:
The second chart helps demonstrate how a hard fork does not intrinsically transfer any value away from the legacy blockchain – despite the high profile Bitcoin Cash fork, little value was diverted from Bitcoin. The parent asset still has all the same intrinsic properties (number of nodes, speed of transaction, number of users etc.) and is generally viewed by the market as unchanged by this event. The newly created asset is just a copy of the client code of the parent network (usually with a few edits) that runs on a parallel network. Any potential value which could be transferred away would be a function of the network effects diverted due to users choosing to switch to the new blockchain. In fact, Bitcoin has forked (including chain & software forks) over 70 times yet none of the forks have had measurable effects on the value of BTC. However, even if these fork assets do not affect the value of the legacy asset, they can have significant value in their own right – as is the case for Bitcoin Cash (and then Bitcoin Cash SV & ABC). In the case of stock splits the case is slightly more interesting, as the price-per-share following a 2-to-1 stock split would halve in theory; this fact, however, is not accurately shown in the data as most historical data providers adjust stock prices for stock splits.
An Uncontested Fork, such as Ethereum’s recent Constantinople Fork, is a client upgrade that all of the participants (predominantly miners) in the network agree with. This is similar to a planned change of a member of the management team at a company – for example, the CEO of Goldman announcing retirement with a planned, capable successor – where the market reaction is likely to be neutral. While investors may have opinions on this shift affecting the valuation of the stock (or crypto asset), this isn’t typically a disruptive event. Similarly, in the blockchain world, an uncontested fork can often be a very positive development that improves some feature of the given network.
A Chain Split relates to cases, similar to Contested Forks, where mutually incompatible chains come into existence for various reasons. For example, they can be the result of Hard Forks, Sybil attacks, or two miners discovering new blocks at similar times. However, a chain split is often not the result of deliberate action but instead due to bugs in client code which cause different versions of a given client (or differing clients of a given single chain) to have conflicting states, such as was the case with BIP-50.
Disambiguation between chain fork and software fork
One important point to mention is the distinction between a chain fork and a software fork. A chain fork can be defined as when a given blockchain splits into two unique chains either intentionally (as with the BCH fork) or unintentionally (due to a bug). A software fork is when a developer forks the Git repository where the code for the given blockchain client is hosted and creates a new crypto asset and blockchain that way.
Governance is just as important within the crypto asset world as it is in the corporate world. For example, if forks are like corporate actions and can have a range of impacts on a given crypto asset, who are the key stakeholders who make the difference between the fork being contested or uncontested? Analogous to corporate governance where the various shareholders have the ultimate say in matters related to governance, in the case of crypto assets it is those who have a stake in the network – whether this is the person or company that owns the tokens (for systems with on-chain governance like MakerDAO or Tezos) or the wide range of other mission-critical actors like developers, miners, & community members (for systems with off-chain governance like Bitcoin).
In the case of on-chain governance, similar to shareholders, the larger the token holdings a given address has, the more voting power one has. Over the last few years, several crypto assets have developed complex governance processes which share many similarities to the typical corporate governance process. For example, consider MakerDAO – a project which has launched a stablecoin backed by a decentralized credit facility. Similar to AGMs, MakerDAO hosts two weekly calls where stakeholders can discuss risk and governance topics. Moreover, like a shareholder voting process, token holders can vote on particular issues.
The chart below shows the breakdown of votes (measured in MKR, the project's crypto asset) for a MakerDAO governance initiative.
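To make the token-weighted mechanism concrete, here is a minimal sketch of how such a tally works; the addresses, MKR stakes, and vote choices below are entirely hypothetical, illustrative values.

```python
# Hypothetical token-weighted vote: each address's voting power equals its stake.
votes = {
    "0xA1": (120.0, "yes"),  # (MKR held, vote cast) - illustrative values only
    "0xB2": (45.5, "no"),
    "0xC3": (300.0, "yes"),
}

# Sum stake per choice rather than counting voters.
tally = {}
for address, (stake, choice) in votes.items():
    tally[choice] = tally.get(choice, 0.0) + stake

winner = max(tally, key=tally.get)  # "yes" carries 420.0 MKR against 45.5 MKR
```

As with shareholder voting, the outcome is decided by stake, not headcount: the two "yes" voters prevail only because they hold more tokens in aggregate.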
On the other hand, off-chain governance is often much more nuanced than traditional corporate governance. It can be argued, for example, that miners are the most important stakeholders, as they are ultimately the ones who maintain the economic security and integrity of a given crypto asset network’s ledger. As a result, miner sentiment is often an especially important metric when developers are considering whether to initiate a fork. Voting power and control over system upgrades are some of the more practical reasons why there is such heated debate about ostensibly more centralized networks such as XRP versus more decentralized networks such as Bitcoin. While there are a number of ideological reasons why people in the crypto asset space prefer one setup over another, from a practical perspective, the more centralized a network is, the more concentrated the voting power will be in key governance events, and the more vulnerable the network is to failure. It is similar to having a large activist shareholder in a standard corporate security situation. If you agree with their policies, there is no problem, but if you disagree, they nevertheless have a significant amount of power to influence the evolution of the company and its commercial policies – even in ways diametrically opposed to your own interests. Moreover, there remains the possibility that a bad actor could sway the opinion of that activist, leading them to appoint someone unsuitable to the board or attempt an ill-advised acquisition, resulting in significant loss of value for other shareholders.
Airdrops are another important (corporate) governance action within the crypto asset world and they can also be thought of in a couple of different ways depending on the specifics of the situation. For example, Airdrops can be seen as the dividends of the blockchain space. They are essentially free assets given to incentivize certain types of behaviors on a network (e.g. signing up for a wallet, performing tasks to maintain the network, etc.). Generally, the recipient does not control when or how these are received once the action is completed. Furthermore, this type of action usually only occurs on much smaller networks looking to grow their user base. Unlike dividends, however, you would not necessarily expect to see the same decrease in value at the ex-date.
Another analogy for airdrops would be the bonus miles you are awarded for signing up for a new credit card or bank account. The company wants to incentivize you to do a certain thing (such as opening a wallet or a credit line) and is willing to provide a crypto asset with some monetary value to incentivize that behavior whether it be free crypto assets or credit card points.
The aim of this article has been to serve as a bridge between terminology in the crypto asset world and that of the corporate world. As has been shown, there is a great deal of overlap between the mechanisms for governance across the two and this will continue to be the case as engineers of crypto assets continue to experiment with how best to affect stakeholder politics.
The information provided does not constitute a prospectus or other offering material and does not contain or constitute an offer to sell or a solicitation of any offer to buy securities in any jurisdiction.
Some of the information published herein may contain forward-looking statements. Readers are cautioned that any such forward-looking statements are not guarantees of future performance and involve risks and uncertainties and that actual results may differ materially from those in the forward-looking statements as a result of various factors.
The information contained herein may not be considered as economic, legal, tax or other advice, and users are cautioned not to base investment decisions or other decisions solely on the content hereof.
Both fork types can also be seen as backward-compatible as any given node which wants to verify a blockchain from scratch will have to verify blocks which ran older software than its own (either due to hard- or soft-forks).
A hard fork can either be uncontested or contested, whilst a soft fork, under our definition, can only be uncontested. This does not mean that a proposed soft fork may not cause some governance conflict, although the outcome of that conflict wouldn’t necessarily lead to a new blockchain being created at the moment of the new fork.
Though in cases where the contested fork is over the distribution of the asset in the first place (i.e. the DAO hacker having a large sum of Ether following the DAO hack) this may not apply.
On-chain governance is a system for upgrading crypto assets and blockchains in which code changes are encoded into the protocol and decided by token holder voting.
Off-chain governance is a system by which upgrades to crypto assets and blockchains are coordinated and organized primarily through mailing-lists, forums, & discussions where the governance decision outcomes are generally decided through various signals of different stakeholders (e.g. community sentiment or miner committed hashrate).
|
OPCFW_CODE
|
Download Getting Started With .net Gadgeteer: Learn To Use This .net Micro Framework Powered Platform 2012
|
OPCFW_CODE
|
# ~*~ coding: utf-8 ~*~
"""
fleaker.peewee.mixins.field_signature
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Module that implements a mixin that can be used for unique indexes across
multiple columns, where at least one of the columns is nullable.

It should be noted that because of Peewee's simplistic signal system, the
signature will only be updated when ``peewee.Model.save`` is called and will
not work with ``UPDATE`` queries.

Because no Fleaker mixins define migrations or create columns automatically,
it is left to the developer to add this column to the model in whatever system
they are using. The SQL needed for this column is roughly equivalent to the
following (assuming the table is called ``folders`` like it is below):

.. code-block:: sql

    ALTER TABLE `folders`
        ADD CHAR(40) `signature` NULL,
        ADD CONSTRAINT `uc_folders_signature` UNIQUE (`signature`);

Example:
    To use this mixin, add the class to the model's inheritance chain.

    .. code-block:: python

        import peewee

        from fleaker import db
        from fleaker.peewee import ArchivedMixin, FieldSignatureMixin

        class Folder(FieldSignatureMixin, ArchivedMixin, db.Model):
            # This class represents a Folder in a file system. Two Folders
            # with the same name cannot exist in the same Folder. If the
            # Folder has no Parent Folder, it exists in the top level of
            # the file system.
            name = peewee.CharField(max_length=255, null=False)
            parent_folder = peewee.ForeignKeyField('self', null=True)

            class Meta:
                signature_fields = ('name', 'parent_folder')

        # Create a top level Folder
        etc_folder = Folder(name='etc')
        etc_folder.save()

        # Folder now has a signature
        assert etc_folder.signature

        # Two Folders with the same name cannot exist in the same parent
        # Folder
        try:
            Folder(name='etc').save()
        except peewee.IntegrityError:
            assert True
        else:
            assert False

        # The signature of the instance will be nulled out when archived so
        # that future records can be active with the same signature.
        etc_folder.archive_instance()
        assert etc_folder.signature is None
"""
from hashlib import sha1

from peewee import FixedCharField
from playhouse.signals import Model as SignalModel, pre_save

from fleaker._compat import text_type


class FieldSignatureMixin(SignalModel):
    """Mixin that provides reliable multi-column unique indexes.

    This is done by combining the values of the specified columns into a
    SHA1 hash that can have a unique index applied to it. This value is
    stored in a field named ``signature``. When a record is archived, this
    hash is set to null to prevent issues going forward.

    This mixin is needed because MySQL cannot enforce a unique index across
    multiple columns if one or more of the columns are nullable. MySQL has
    marked this issue as WONTFIX because MySQL is a fickle beast.

    Attributes:
        signature (str|None): This is a ``sha1`` hash computed from the
            fields defined in ``Meta.signature_fields``. This is nulled out
            if the instance is archived.
        Meta.signature_fields (tuple[str]): The names of the fields to
            factor into the computed signature.
    """
    # This is where the signature is stored.
    signature = FixedCharField(max_length=40, null=True, unique=True)

    # This is an overridable list of fields that should be used to compose
    # the hash, in order (the specific order does not matter, as long as it
    # stays consistent).
    class Meta:
        signature_fields = ()

    def update_signature(self):
        """Update the signature field by hashing the ``signature_fields``.

        Raises:
            AttributeError: This is raised if ``Meta.signature_fields`` has
                no values in it or if a field in there is not a field on the
                model.
        """
        if not self._meta.signature_fields:
            raise AttributeError(
                "No fields defined in {}.Meta.signature_fields. Please define "
                "at least one.".format(type(self).__name__)
            )

        # If the record is archived, unset the signature so records in the
        # future can have this value.
        if getattr(self, 'archived', False):
            self.signature = None
            return

        # Otherwise, combine the values of the fields together and SHA1 them.
        computed = [getattr(self, value) or ' '
                    for value in self._meta.signature_fields]
        computed = ''.join([text_type(value) for value in computed])

        # If computed is a falsey value, all the fields were None or blank,
        # and hashing that would lead to some pain.
        if computed:
            self.signature = sha1(computed.encode('utf-8')).hexdigest()


@pre_save(sender=FieldSignatureMixin)
def update_signature(sender, instance, **kwargs):
    """Peewee event listener that updates the unique hash for the record
    before saving it.
    """
    instance.update_signature()
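The hashing step in `update_signature` can be exercised on its own; the sketch below mirrors its logic without the Peewee machinery (the helper name `compute_signature` is hypothetical and only for illustration, not part of fleaker):

```python
from hashlib import sha1

def compute_signature(values):
    # Mirror update_signature: falsey values become ' ', everything is joined
    # into one string and SHA1-hashed; an all-blank result yields no signature.
    combined = ''.join(str(value) if value else ' ' for value in values)
    return sha1(combined.encode('utf-8')).hexdigest() if combined else None

# Two folders named 'etc' under different parents get distinct signatures,
# which is what lets the unique index on `signature` do its job.
assert compute_signature(('etc', None)) != compute_signature(('etc', 42))
assert len(compute_signature(('etc', None))) == 40  # SHA1 hex digest length
```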
|
STACK_EDU
|
Hello, every time I join a game I play 5-15 min and then I get "DayZ has stopped working". I think I have a good PC; I really don't know what to do. The game has become unplayable: I die so many times because I crash and a zombie kills me, or someone else sees me while I'm losing connection.
- Legacy ID
- Game Crash
I think you do not need the DayZ files.
Because the game crashes for me and my 2 (!) friends every time at exactly the same moment.
Can you please help me? I can't find those files; the only files I could find are the ones I sent you and the other DayZ profile folder.
We need the files from
I've given all the info and have the same problem! So??? ANSWERS = 0 in more than a month since my issue report!!! GOOD JOB BOHEMIA TEAM! I have Arma 3 and it works fine! WHY DOESN'T DAYZ WORK???
I think there is a lot of code just copied and re-pasted in order to "develop faster".
I found the folder; it was hidden. I will upload it. In the rar there is also the dxdiag file.
Same problem here as well; it's been happening on and off for a couple of weeks now. Initially I could load in, play 15-20 minutes, and it would then freeze. I could hear ambient sounds (wind, birds chirping) but the screen would be frozen, and ctrl-alt-del and end task was the only way out. This was happening on various regular servers. After reading various threads I went onto a hardcore server for 10-15 minutes and this seemed to have fixed the problem, until two days ago. Since then it has built up to a crescendo. At first I was getting 20 mins of play, then 10, then 2, now nothing, and going onto a hardcore server no longer has any effect; in fact hardcore is freezing now also. A few things I have noticed: where I am has no consequence; being around people initially would cause it to happen, but now it just happens every time. I'm starting on every server with a minimum of 7k and a max of 100k desync. When the freeze takes place my disk usage spikes to 100%. I do not play exp servers; this is occurring for me on stable servers. Right now I can't play at all; I log in to a server and freeze straight away. I love this game, man, please fix this problem. I haven't played properly in two days and I'm beginning to twitch.
N.B. Last night I went on to experimental servers and managed to play for 20 minutes with no freezing. Went back on to stable and happened again within minutes. So it appears this is only happening to me on stable servers. Or the problem just didn't have time to catch up with me. I have uploaded the files from Appdata/local/Dayz and my dxdiag.
Same problem here. I'm using wifi and an old gaming laptop (Core 2 Duo 2.6 GHz, GTX 260M, 4 GB RAM, latest nvidia drivers, no antivirus). The game was running fine in the first months of alpha; now it becomes unresponsive after 10-30 min and I'm forced to kill the process. I notice it's worse on experimental than stable, but it happens on both. My fps is pretty good (30-40 fps); the freezes just happen out of the blue. I've defragmented my HDD, cleaned up my comp, and made sure I'm not overheating. Not sure what to do to fix this issue.
http://feedback.dayzgame.com/view.php?id=14493 Read this topic. I've resolved right that.
|
OPCFW_CODE
|
Three-finger head but tastes flat?
I recently brewed a Belgian Wit extract kit. Bottle conditioned with 5 oz of priming sugar. I'm pretty happy with the taste. With a medium speed pour into a Pilsner glass, the beer develops a rather thick, probably three fingers tall head. It slowly fades but a small head remains throughout. Good lacing continues as well. I can see a lot of continuous streams from random nucleation sites in the glass. It looks beautiful IMHO.
The problem is that it doesn't taste very carbonated. Some go as far to say that it tastes flat.
Any ideas why it develops a great head but still tastes flat? Thanks!
You could just be under-carbonated. Wit beer, when done properly, has a gorgeous, thick head from all that protein in the wheat, regardless of how carbonated the bottles are. How long has the beer been bottled? Maybe give it another week or two, and make sure you pour it slowly. Did the beer dry out properly? A wit should finish lower than 1.014 or so, I think. Under-attenuation could result in a sweeter (sorta) beer, which might be perceived as less "sharp" or carbonated.
I'm with Graham when he asks, "How long has the beer been bottled?" If it has only been in the bottle for a week or two, it's likely that the yeast are done with the priming sugar, but the CO2 isn't well integrated into the beer yet. I have found all of my brews suffer in the way you describe when I open them too early - all head and no carbonation. Wait another week or three and you'll get a 1-2 finger head atop a perfectly fizzy brew.
It's been at around 3 weeks (maybe more). I'll check my brewing calendar at home to confirm. Final gravity was 1.016, I believe. The last 2 bottles have been in the fridge for about a week so I'll pop 2 more bottles in the fridge that have been ~70 degrees for an additional week and try those.
Try a cleaner glass. Sounds like most of the carbonation is coming out of solution during the pour. Give your glass a rinse with super hot water. A quick rinse with cool water to cool the glass off (hot glass will not help with carbonation), but don't worry about chilling the glass much. Don't bother drying the glass. Then pour your beer.
Super clean glassware is vital.
At least that's my guess.
I would assume dirty glassware would create the opposite problem: no head, but the beer is still fizzy, like a soda. How can a dirty glass force the CO2 out of suspension?
If the glassware is really gunky, like with chunks all over it, those would serve as nucleation sites for the CO2 to come out of suspension, like a small-scale diet coke & mentos trick. However, I want to think that Bob Banks isn't using glassware covered in chunks. The more common issue with glassware is soap residue, which is oily and has the effect Graham describes - little to no head and no head retention.
I have poured many a beer into a glass at a friend's house only to have plenty of foam and little carbonation. Upon a good rinse, in a new glass the beer foams less and has more carbonation. The glass does not have to be visibly gunky for this problem to happen. I think he is getting plenty of head because this style is very head-retentive, but the head only comes from CO2 escaping the beer. Nucleation from residue or dirt (visible or not) would serve to bring CO2 out, create head, and leave nothing behind. Graham's idea is good too. As the brewer he has to check on both issues.
I'll give this a shot tonight.
I'm seeing the exact same situation with a Belgian ale. Granted, very early in the bottle conditioning it was like a soda: fizz, then nothing in a matter of seconds. Flat, flat beer. It got better with time, a little bit of a head, but pretty much flat. Now, about three weeks out, there's a decent head and better carbonation, so I'm holding out hope with more time. They don't package patience with beer kits!
|
STACK_EXCHANGE
|
eZ announces the availability of 5.3.5, a maintenance release available for all users of eZ Publish Platform 5.3 containing a notable few updates and fixes.
A newer release is available, rendering the update instructions here obsolete and non-working. Please see the 5.3.x Update Instructions for always up-to-date instructions for 5.3 releases.
|Update to 5.3.4 first before you continue with instructions below.|
These instructions take advantage of the new Composer-powered update system in 5.3 for maintenance updates, so make sure you familiarize yourself with the Using Composer page.
For Upgrading from versions prior to 5.3 look at our Upgrading from 5.1 to 5.3 or Upgrading from 5.2 to 5.3 page.
Perform the following command to make sure you are not affected by conflicts caused by this package:
php -d memory_limit=-1 composer.phar remove behat/mink-selenium-driver --no-update --dev
With this command you'll only update packages from eZ (and Symfony) that have received updates since 5.3.0:
php -d memory_limit=-1 composer.phar update --no-dev --prefer-dist --with-dependencies ezsystems/ezpublish-kernel ezsystems/demobundle ezsystems/ezpublish-legacy symfony/symfony
|If you use either |
Legacy extensions autoload must be regenerated. You can do it by running this command:
This release fixes a vulnerability in the eZ Publish password recovery function. You need to have the PHP OpenSSL extension (ext-openssl) installed to take full advantage of the improved security, but even without it security is improved.
Security Advisory for Community: http://share.ez.no/community-project/security-advisories/ezsa-2015-001-potential-vulnerability-in-ez-publish-password-recovery
An eZ Find user needs to update their solr schema.xml.
For each solr core (located in ezfind/java/solr), you need to edit <my-core-name>/conf/schema.xml
Around line 616, right after:
<field name="meta_priority_si" type="sint" indexed="true" stored="true" multiValued="true"/>
Add the following lines:
<!-- denormalised fields for hidden and visible path elements -->
<field name="meta_visible_path_si" type="sint" indexed="true" stored="true" multiValued="true"/> <!-- Visible Location path IDs -->
<field name="meta_visible_path_string_ms" type="mstring" indexed="true" stored="true" multiValued="true"/> <!-- Visible Location path string -->
<field name="meta_hidden_path_si" type="sint" indexed="true" stored="true" multiValued="true"/> <!-- Hidden Location path IDs -->
<field name="meta_hidden_path_string_ms" type="mstring" indexed="true" stored="true" multiValued="true"/> <!-- Hidden Location path string -->
Restart and re-index solr.
The XmlText fix for EZP-23513 (see https://github.com/ezsystems/ezpublish-kernel/pull/1087) deprecates/removes the CustomTags pre-converter in favor of a new Expanding converter. While they're not part of the public API, if you rely on this file in any way, you might want to check and update your code.
Here are the packages that have received an update to 5.3.5 as part of this release:
Other packages that have received update since 5.3.0:
|
OPCFW_CODE
|
Here is my hippocratic oath
You should take one too. It’s NOT about medicine.
You may have noticed that, sometimes, some tiny bit of outrage is shared on the Internet. Someone sees something outrageous and does the right thing: instantaneously, automatically finds and shares online, with properly outraged comments, as many confirmations as possible that that thing (above all: the specific person doing it) is TOTALLY bad. REAL bad. SURELY bad.
That is outrage porn
The term outrage porn was coined by essayist Tim Kreider in 2009 to describe “manufactured indignation, optimized for virality”.
“Manufactured” and “optimized” are the key terms here.
“[S]ocial media, financial incentives and the pitfalls of human psychology have coalesced into a perverse production line, in which we are producer-consumers. Outrage porn is exploited by culture war profiteers, weaponized by memetic tribes, leveraged by wokonomic capitalists and kept alive by an outrage industrial complex.”
“Every angry retweet and snarky reply turns us into useful idiots for the outrage porno machine.”
“Our perpetual outrage has reduced our agency, ruined our sense-making apparatus and rendered us powerless to confront the numerous risks that require long-term, collective decision-making.”
The three paragraphs above are my preferred quotes from “Hippocratic Oath for the Culture War”.
Hippocrates is the Greek physician who lived about 25 centuries ago and is traditionally regarded as the “father of medicine”. The original Hippocratic Oath, attributed to him, is the one taken by doctors worldwide, for thousands of years now, to “treat the ill to the best of one’s ability”.
The Culture War version
The original Hippocratic oath is all about “Do No Harm”. The article from which I took those quotes strongly argues that “a Hippocratic oath for content creators is one possible avenue by which we could improve our social media climate”. Here are the introduction and some points, synthesized, of the first version of the Oath (2019):
I swear to fulfill, to the best of my ability and judgment, this covenant:
- I will be truthful in content creation.
- I will create content and engage others in the hope of improved understanding.
- I will engage in the principle of humanity when interpreting motives.
- I will conduct myself with intellectual humility.
- I will have evidence for the propositions I put forth.
The full version is here. Go read it carefully, and please take the oath.
As far as I am concerned… like everybody else, I cannot be mathematically sure that I never did anything worthy of public outrage and calling out. Outrageousness is in the eye of the beholder, these days. For everybody.
But I have always tried to do my best to apply something very similar to that Oath, and intend to keep doing it:
“Be frank, but nice. Do to others what you would have them do to you. Online and offline, keep mouth and keyboard shut unless it’s absolutely necessary.”
So, take THAT Oath, or take my own version, or anything in between. But do take some Oath like those. Possibly, but not necessarily, in public.
Yes, to a certain extent, this is just “virtue signalling”. But the problem with outrage porn and pointless culture wars is just that they are concretely harmful, even if the only active participants are minorities. Besides, we are all “content creators”, not just regular bloggers or Youtube/Instagram influencers. As far as this Oath is concerned, every comment on every social media, or WhatsApp, Telegram etc… is “content creation”. Every one of us is a “public figure”.
Therefore, even a small percentage of internet users publicly taking such an oath can make a concrete difference. Because, quoting again:
“What we need are public figures who want dialogue, not victory - civic care, not civic destruction”.
Image sources: MemeGenerator, and several ancient copies of the Hippocratic Oath from Wikipedia
|
OPCFW_CODE
|
1. Convert the following numbers from decimal to binary and then to hexadecimal:
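The specific numbers for this exercise are not reproduced here, so as a worked example take an arbitrary value, 156 (chosen purely for illustration):

```python
n = 156
# Decimal -> binary: repeatedly divide by 2 and collect the remainders.
assert bin(n) == "0b10011100"  # 128 + 16 + 8 + 4 = 156
# Binary -> hexadecimal: group the bits in fours: 1001 1100 -> 9 C
assert hex(n) == "0x9c"
```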
2. Perform the following subtraction operation in the binary number system: 5 - 18 = X2
3. What character string does the following binary ASCII code represent?
1010100 1101000 1101001 1110011 0100000 1101001 1110011 0100000 1000101
1000001 1010011 1011001 0100001
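Each group is a 7-bit ASCII code; a short script decodes the message (the full bit string, with the third group read as the 7-bit code 1101001, is reproduced inside the script):

```python
bits = ("1010100 1101000 1101001 1110011 0100000 1101001 1110011 0100000 "
        "1000101 1000001 1010011 1011001 0100001").split()
# Interpret each 7-bit group as an ASCII code point.
message = "".join(chr(int(b, 2)) for b in bits)
print(message)  # This is EASY!
```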
4. A photographic image requires 3 bytes per pixel to produce 16 million shades of color.
a) How much video memory is required to store a 640 x 480 image during display? A 1024 x 768 image? A 1280 x 1024 image?
b) How many 1024 x 768 color images will fit on a CD-ROM?
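The arithmetic for problem 4 is width × height × 3 bytes per pixel; part (b) depends on the CD-ROM capacity you assume (650 MB in this sketch):

```python
BYTES_PER_PIXEL = 3  # 24-bit colour: ~16 million shades

# a) memory needed to hold each frame during display
for w, h in [(640, 480), (1024, 768), (1280, 1024)]:
    size = w * h * BYTES_PER_PIXEL
    print(f"{w} x {h}: {size:,} bytes ({size / 2**20:.2f} MB)")

# b) assuming a 650 MB CD-ROM, with 1 MB = 1,024 x 1,024 bytes
cd_capacity = 650 * 2**20
images = cd_capacity // (1024 * 768 * BYTES_PER_PIXEL)
print(images)  # 288
```

A 650 MB disc is one common figure; with a 700 MB disc the count comes out higher.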
5. Which of the following is the binary representation of the decimal value 24?
6. The abbreviation 'MB' means and has the value
a) MegaBits, 1,024 x 1,024
b) MegaBytes, 1,024 x 1,000
c) MegaBits, 1,000,000
d) Multiple Bytes, 1,000,000
e) MegaBytes, 1,024 x 1,024
7. Common number systems used when working with computers include all but
b) base 10
c) base 8
d) base 16
e) base 3
8. When using data communications with 8-bit codes, the number of alphabetic symbols
a) must be exactly 256
b) must be greater than 8
c) can be greater than 1024 bytes
d) must be less than 256
e) determines the number of octets
9. In a computer, using binary numbers, how many bits would you need to represent 25
10. The binary number 11101 times 4 can be determined by moving the binary point
a) four places to the left
b) four places to the right
c) two places to the left
d) two places to the right
e) can be represented exactly in base 2
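Problem 10 can be sanity-checked numerically: multiplying by 4 = 2² appends two zero bits, i.e. moves the binary point two places to the right. A sketch:

```python
x = int('11101', 2)   # 29 in decimal

# Multiplying by 2**2 shifts the binary point two places right,
# which for an integer means appending two zero bits.
assert x * 4 == int('1110100', 2)
print(format(x * 4, 'b'))  # 1110100
```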
11. When 0.746 is converted from base 10 to base 2, the first three digits past the decimal
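For problem 11 the standard technique is repeated doubling: at each step, the integer part of 2 × fraction is the next binary digit. A sketch applied to 0.746:

```python
def frac_to_binary(frac, bits):
    """Return the first `bits` binary digits past the point of `frac`."""
    digits = []
    for _ in range(bits):
        frac *= 2
        digit = int(frac)        # integer part is the next bit
        digits.append(str(digit))
        frac -= digit            # keep only the fractional part
    return "".join(digits)

print(frac_to_binary(0.746, 3))  # 101
```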
These solutions may offer step-by-step problem-solving explanations, or good writing examples that include modern styles of formatting and the construction
of bibliographies, in-text citations, and references. Students may use these solutions for personal skill-building and practice.
Unethical use is strictly forbidden.
2. The binary representation of 5 is given as 00101
The binary representation of 18 is given as 10010
Since 18 is being subtracted, it is treated as a negative signed number, so to perform the operation take its 1's complement.
1’s complement of...
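The complement method in the solution can be followed through in code. A sketch in 5-bit 1's complement arithmetic (the word size is an assumption; the handout does not state one), computing 5 − 18:

```python
BITS = 5
MASK = (1 << BITS) - 1           # 0b11111

a, b = 5, 18
ones_comp_b = ~b & MASK          # 1's complement of 10010 -> 01101

total = a + ones_comp_b          # 00101 + 01101 = 10010
if total > MASK:
    # end-around carry: add it back in, result is positive
    result = (total + 1) & MASK
else:
    # no carry: result is negative; its magnitude is the
    # 1's complement of the sum
    result = -((~total) & MASK)
print(result)  # -13
```

With no end-around carry, the sum 10010 is complemented back to 01101 = 13, giving −13, as expected for 5 − 18.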
|
OPCFW_CODE
|
[gclist] Timeliness of finalization
Sat, 29 Mar 1997 14:27:39 +1200
> On Mar 29, 11:08am, stuart (yeates) wrote:
> Are we talking about process crashes, thread crashes, or OS/machine crashes?
> It's awfully hard to run the finalizers after the power supply failed, or
> whatever, so I don't think the latter is possible. The OS will have to make
> sure that the disk is always in a recoverable state, just as it does now.
> Thread crashes don't really affect anything. Process crashes might cause some
> synchronous cleanup actions to be invoked, and might remove some roots. I
> don't understand why they would impose constraints on kernel finalization.
There are many situations in which an OS may attempt a graceful shut-down,
typically unavailability, pending unavailability or unreliability of a
resource on which the kernel depends for internal consistency. These
include electric power, kernel binary and swap disks (and/or network
connection to same if served remotely), and RAM integrity (excessive parity
errors).
Consider the case of a power-manager which senses, or has just been notified
of, imminent failure of the power supply. This situation usually leads to
the OS shutting down or crashing. If we are relying on finalisation
to ensure closure of files/consistency of disks, to maintain contracts
in a distributed environment or similar we still need to have our finalisers
invoked if at all possible.
> > Such a system might even be able to handle the case where a subsystem
> > throws an uncaught exception (caused by a hardware error or similar),
> > and other subsystems NEED to have their finalisers invoked.
> Could you clarify? In a single address space system finalizers are not
> associated with subsystems. They are associated with objects which may be
> referenced by multiple subsystems.
What I meant (but didn't make clear) was that a subsystem which generates
a fatal error has the option of performing finalising actions at the
(informationally rich) point at which the error was generated, whereas
other subsystems have no option but to rely on finalisers (which are
relatively informationally poor).
For example a swap disk which detects unrecoverable error state can
prepare itself for reboot before generating an error, rather than relying
on a finaliser to do it.
I was not suggesting calling finalisers only for some subsystems.
stuart yeates <email@example.com> aka `loam'
you are a child of the kernel space
no less than the daemons and the device drivers,
you have a right to execute here.
|
OPCFW_CODE
|