unity-design team mailing list archive
Re: Getting rolling for 9.10
> Scott Kitterman wrote:
>> On Thu, 23 Apr 2009 09:46:17 +0100 David Barth wrote:
>>> ... Could you coordinate with your "masked" developer friend to
>>> ... him at the Summit. ...
>> You are being too subtle for me here, unless perhaps you think I'm
>> acting as a sock puppet for Aaron Seigo (since I referenced his mail in my
>> previous message). Let me assure you I am nobody's sock puppet and speak
>> for no one.
> Hey, sorry, I think I misread you when you said:
>> another community developer (I'll thank him by name
>> after it works) is working on getting KDE trunk
OK. No problem. Sorry for not getting where you were headed with that.
I just didn't name him because if it doesn't work out I don't want any
blame. It's not up and working yet.
The relationship of that effort to this one is to make it easy for the
Ayatana developers working on the KDE aspects of the project to run KDE
trunk on top of a stable Ubuntu foundation. I hope that this community
initiative is an aid for your development work.
>> I do hope we can have a positive discussion here about how best to serve
>> Kubuntu users that use applications that drive notify-osd notifications.
>> Given the absolute dependence of notify-osd on the indicator (I see it called by
>> a number of names and I'm not sure which is the current, correct term), it
>> seemed reasonable to start the discussion with that.
> Right. The indicator is actually made of 3 parts:
> * indicator-applet, which is a gnome-panel compatible shell hosting...
> * ... the indicator-applet, which is the actual indicator
> * and libindicate, which is the client-side library made to simplify the
> application developer's life (well, provided you're a Gnome application
> developer for the moment, that is...)
> What we need to discuss is which part we need to develop for the KDE
> desktop, which one we can re-use on KDE, which one we can share between
> Gnome and KDE.
Of course. Part of why I bring up the new systray protocol now is that,
because it offers a different set of capabilities and a different way to
integrate into the systray, I suspect the protocol you choose will affect
some of the decomposition you need to do to get to a well-engineered
cross-DE system.
[Source: OPCFW_CODE]
Writing your resume is a very similar process to writing the cover letter — we want to use many of the same keywords from the job description, and tie them to our experience.
This time, though, we'll include a more complete work and education history, and write using bullet points instead of paragraphs. Use specific numbers and accomplishments, like "Consistently in the top 10% of representatives for customer satisfaction", rather than generic phrases, like "Achieved customer satisfaction". We'll keep the information on it concise, specific, and relevant to the position.
You may also find that some aspects of your experience aren't as relevant to some jobs; feel free to leave things out to keep the resume focused on what the company you're applying with wants. A great resume is tailored to each individual job posting.
Here's what the resume might look like:
(As with the cover letter, do not copy this example resume — it will be very obvious to Epicodus staff and to employers if you do, especially if your resume looks suspiciously like other students who ignore this warning and copy it.)
And here's our master list of experiences from before, crossed out as we included them in the resume:
Epicodus, C#/React Track, 2016
- About Epicodus
  - Completed full-time, 27-week program in web and open source development.
  - Weekly code reviews on independent projects.
- Hard Skills
  - Comfortable with the command line
  - Used git and GitHub for development
  - SQL databases
  - Domain models
  - React.js, Redux
  - Best practices
  - Test Driven Development
- Why Epicodus?
  - Passion for the web _(discussed in cover letter)_
  - Interest in open-source development _(discussed in cover letter)_
  - Desire for self-improvement.
  - Organized potlucks.
  - Helped students in newer cohorts with troubleshooting.
- Interpersonal and communication skills
  - Worked in pairs daily to design and problem-solve coding projects.
  - Team Week
Oakland Community College, AA Criminal Justice, 2009-2010
- 3.5 GPA
- Honors Program
Web Development Intern, Digital Designs, 2016
- Site development
  - Styled widgets with responsive design
  - Learned PHP on the job and by studying at night
- Relevant experience on the job
  - Developed a dashboard site for project managers to see statuses and blockers.
  - Collaborated closely with project managers to ensure the dashboard met their needs
  - Made pull requests
Technical Customer Support Representative, Healthcare.gov, 2012-2015
- Technical customer support
  - Responsible for solving challenging technical issues related to healthcare coverage
  - Assisted new customers in navigating their accounts
- Client engagement
  - Interviewed clients to spot pain points in navigating our website
  - Resolved issues thoroughly for a diverse customer base.
  - Proactively brought up problems the customer may not have anticipated _(discussed in cover letter)_
- Customer satisfaction
  - Consistently in the top 10% of representatives for customer satisfaction
- Related Buzzwords
  - Detail-oriented _(discussed in cover letter)_
  - Excellence _(discussed in cover letter)_
  - Customer Satisfaction
  - Accountable for results
  - Building, configuring, and troubleshooting
Barista, Lil' Joe's Coffeehouse, 2010-2012
- Attentive to co-worker morale.
- Contributed to fun energy while getting the job done
- Decorated store for holidays and special events.
- Attentive to customers with unique requests
- Made coffee and various coffee drinks in a fast-paced setting
- Improved inventory tracking system that eliminated shortages
- Related Buzzwords
  - Created order out of chaos
Missing for this Job Post
- Linux _(discussed in cover letter)_
- Network protocol layers
- Security layers
- Drupal _(discussed in cover letter)_
Make sure your resume is visually appealing — unless you're applying for a design-related job, don't spend too much time on the layout, but make sure that it's formatted cleanly and consistently. For example, don't use bullets in one place and letters in another.
Put your name, phone, and email at the top of the resume. Optionally, you can include your mailing address, Github profile, and LinkedIn profile.
Most people's resumes should be under a page. If you have extensive relevant career experience, it's okay to be a bit longer.
With that, we're done! This cover letter and resume will get us past any bots or even non-technical HR staff by using the terms and phrases from the job description. It will also make it easy for a hiring manager to see that we have the skills and attitudes they're looking for in this role.
One last tip: when applying for a job by email, put your cover letter in the body of your email, and attach it and your resume as a PDF (not as a Word document).
[Source: OPCFW_CODE]
Onboardings and wizards - should I show the steps of the process?
So I need to design onboarding screens for a credit card company. I started to look at onboarding flows and collect ideas from different products such as Lemonade, Forward, Grammarly, etc.
I noticed that instead of showing the upcoming steps, there's often only a bar that indicates the progress.
What’s the logic behind it? I always thought that indicating the exact number of steps is essential information for the user.
What do you think? In which cases is showing steps a must? Do you have any articles or research on this topic?
Thanks <3
I think if we look at the benefits of a progress bar, we can get a better understanding of why and when it might be preferred over showing a list of steps.
It takes up less space in the UI
A simple progress bar helps to keep the UI minimal. This in turn helps incentivise the user, as they will feel that there won't be much effort needed to complete the process. This logic is along the same lines as why you'd use a multi-step form in the first place - because it is off-putting to dump all the fields on screen at the same time.
It is easier for "non-techies" to understand
Ok, showing the steps isn't exactly rocket science level navigation. However, there are still plenty of people who have little-to-no experience with computers. Displaying steps to a user just adds an unnecessary extra layer on the UI that can distract them and take their focus away from what is important (filling in the form). The simpler the design is, the more appealing it will be to the average user.
It can help cover up a complex process
Take for example, a 20-step process. A short progress bar is certainly more appealing than seeing all 20 steps up front. When the main goal is to attract a user to complete something optional, this helps to convince them it won't take too much of their time.
Look at your screenshot for the first example. A user will think "Oh, all I have to do is fill in this one field and get this little progress bar full. It won't take long". As opposed to "Urgh, 20 steps... no thanks, back to watching cat videos".
In your second example, the progress bar isn't even obvious until the second page. By then the user has already mentally committed to completing the process so is less likely to abandon once they see the progress.
It can be far more accurate
Depending on how you implement it, a progress bar can be much more accurate than showing the number of steps in most cases. Consider a 10-step process. Steps 1-9 only have 1 field each, whereas step 10 has 40 fields. If you only show the number of steps, then it is misleading as by step 10 the user will think they are nearly finished, when in reality they have only completed about 20%. With a progress bar you can be more accurate.
Of course, depending on what you are trying to achieve (for example, misleading the user), this point could easily go the other way and be a reason to prefer steps over a progress bar. It really comes down to your data and what you are trying to portray to the user.
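As a rough sketch of that accuracy point (assuming a hypothetical 10-step form where the field counts per step are known up front; the numbers are made up for illustration), the difference between step-based and field-weighted progress can be computed like this:

```python
# Hypothetical 10-step form: steps 1-9 have 1 field each, step 10 has 40 fields.
fields_per_step = [1] * 9 + [40]
total_fields = sum(fields_per_step)  # 49 fields overall

def step_progress(completed_steps):
    """Naive progress: fraction of steps finished."""
    return completed_steps / len(fields_per_step)

def field_progress(completed_steps):
    """Field-weighted progress: fraction of individual fields finished."""
    return sum(fields_per_step[:completed_steps]) / total_fields

# After finishing steps 1-9 the step counter says 90% done,
# but only 9 of 49 fields (about 18%) are actually complete.
print(step_progress(9))             # 0.9
print(round(field_progress(9), 2))  # 0.18
```

A field-weighted bar gives the honest "you're barely started" signal here, which is exactly why the choice depends on what you want to portray.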
People are weird!
(disclaimer: you probably don't want to actually use this point to make a decision)
Me. I am looking at me. I am a gamer, and when I see a progress bar, I want to fill it up! The achievement of getting that little bar to be completely full can make even the most mundane of tasks seem more motivating. Sure, I get that "steps" is basically just a progress bar with labels, but somehow it just isn't the same as seeing an empty bar that needs to be loved.
So why do we even have wizards that show the steps?
For comparison, here are some reasons why it might be better to show steps.
- You can easily provide navigation back to other pages
- You can label the steps to give the user an idea of what kind of data you will be collecting
- You have more scope to provide visual feedback for each step and include extra information - such as a brief summary of the data entered in that step
- It makes a short process more obvious
Summary
In summary, it really depends on what message you are trying to send to the user. If there is no requirement for the user to be able to navigate back to previous pages (e.g. jump from page 6 to page 3), then it might be better to keep it as simple as possible and use a progress bar.
If you have a lot of pages, a progress bar is certainly going to be more appealing. Conversely, if you have only 2 or 3 pages, then showing the steps is an easy way to say "hey, this won't take much time at all".
So review your process, and choose the method that makes your forms more appealing to what you are trying to achieve.
Showing steps is a great way to keep users informed and engaged. I've come across a situation where the number of steps varied based on the user's selection. In that case, I had to fall back to showing only a progress bar. But if you have a finite set of steps, I'd definitely recommend showing them to the user.
[Source: STACK_EXCHANGE]
In Azure Monitor we can create two types of alerts for Log Analytics:
Near real-time metric alerts are scoped to specific performance counter and heartbeat events, but with Custom Log Search Alerts you can alert on any log in Log Analytics. With Custom Log Search Alerts the alert logic has two types:
- Number of results
- Metric Measurement
In a typical scenario you will use Number of results for logs and events, and Metric measurement for performance/metric logs. That wouldn't be a problem, except that the way the alerts are fired differs quite a lot between the two. In Metric measurement you aggregate/summarize results and alert based on the value from that aggregation; on top of that, a different alert instance is fired for each summarized record. In Number of results you do not summarize/aggregate, and alerts are fired based on the count of the records: for 10 records you will get only one alert instead of 10. If you are like me, this is a problem, as you want a separate alert instance for each of your events, just like Metric measurement alerts.
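To make the contrast concrete, here is a simplified, purely illustrative simulation of the two firing behaviours (this is not the actual Azure alerting engine; the record data, thresholds, and aggregation choice are made up):

```python
from collections import defaultdict

# Sample log records: (computer, metric value).
records = [("srv1", 95), ("srv1", 97), ("srv2", 99), ("srv3", 10)]

def number_of_results_alert(records, threshold_count=1):
    """'Number of results' logic: a single alert fires if the total record
    count crosses the threshold, no matter how many records there are."""
    if len(records) > threshold_count:
        return ["alert: query returned %d records" % len(records)]
    return []

def metric_measurement_alert(records, threshold_value=90):
    """'Metric measurement' logic: records are aggregated (here: max value
    per computer) and a separate alert instance fires per breaching group."""
    groups = defaultdict(list)
    for computer, value in records:
        groups[computer].append(value)
    return ["alert for %s (max=%d)" % (c, max(vals))
            for c, vals in groups.items() if max(vals) > threshold_value]

print(number_of_results_alert(records))   # one alert covering all 4 records
print(metric_measurement_alert(records))  # separate alerts for srv1 and srv2
```

The workaround described in the post is essentially about pushing event logs through the aggregating path so you get the per-record firing behaviour of the second function.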
In this blog post I will show you how to overcome this problem with workaround from the powerful Log Analytics query language.
Continue reading “Using Custom Log Search Alerts Based on Metric Measurement for Event Based Logs”
I stumbled on a great article by Brandon Wilson named Demystifying Schannel, in which he explains how we can enable verbose logging for Schannel to find out what protocols our machines are using. As I live and breathe Log Analytics and love to crunch data, I thought it would be a cool example to ingest that data and show how the new query language can transform it.
Continue reading “Find if You Are Using Only TLS 1.2 Protocol with Log Analytics”
At Ignite, Jo Chan showed us how we can now execute search queries through the Operations Management Suite API, which is basically the Azure Resource Manager API. He demonstrated that with a tool called ARMClient. That tool seems nice, but I wanted to get results with PowerShell as it is more familiar to me. Continue reading “Programmatically Search Operations Management Suite”
During the last couple of months, System Center Advisor (or, as it will probably be known after TechEd Europe 2014, Microsoft Azure Operational Insights Preview) has received a lot of improvements and features, so we are now at Part 7. With this blog post I am also renaming all the other blog posts. Here is the full list:
In this post we will have a quick look at a new intelligence pack called SQL Assessment: Continue reading “Microsoft Azure Operational Insights Preview Series – SQL Assessment (Part 7)”
So far I’ve covered almost every Intelligence Pack. Last week a new feature, “My Dashboard”, was released. This is one of the features I’ve voted on. With this short post I want to share a tip on how to make your tiles in My Dashboard more useful. Continue reading “Microsoft Azure Operational Insights Preview Series – Time Matters in Dashboard (Part 6)”
On a SCOM management server I’ve noticed event ID 31553 logged a lot constantly and in detail the error looked like this: Continue reading “Fixing Event ID 31553 On SCOM Management Server”
Here are 21 SQL queries that you can run against the VMM database to get useful information. The scripts are kindly provided by Murat Demirkiran, a Senior Virtualization Expert at Denizbank in the Infrastructure & System Management Group.
[Source: OPCFW_CODE]
Are you searching for the subject I Get Email #08 – Workshop Heating, Condensation, and Rust? Are you looking to see how to keep tools from rusting in a shed? If that’s the case, please see it right here.
I Get Email #08 – Workshop Heating, Condensation, and Rust | Most-Buyed Power Tools.
In addition to viewing the article on the topic I Get Email #08 – Workshop Heating, Condensation, and Rust, you can see many other articles related to how to keep tools from rusting in a shed here:https://bestfloorscrubbermachine.com/hand-tools-and-power-tools/.
Information related to the topic how to keep tools from rusting in a shed.
Website Article: (links in article)
Get our free e-book, ‘6 Fun Step-by-step Projects’:
Receive our latest content:
Tools I use:
circular saw (alternative tool):
Digital angle finder:
Table saw (*upgraded version):
New table saw blade (recommended):
Circular saw (alternative to a table saw):
Drill and Impact:
Kreg Jig (pocket holes):
Counter sink bit:
Random orbital sander:
Tape measure (lefty/righty):
Tape measure (flat back):
T square (4′):
Bluetooth hearing protection:
Safety glasses (add-on):
Paste wax (for the drawer slides):
See more tools we use:
///CHECK OUT MY OTHER VIDEOS\
I Quit My Job To Be A Woodworker:
Kids Activity Table:
Kids Activity Chair:
Drill Charging Station with Storage:
Clamp Rack with Safety:
Farm House Table:
Installing a shop air cleaner:
Build a bench from a bed frame:
Bench vise and dog holes:
If you want it, Make it! Thanks for Watching! ..
we hope that this information brings you lots of value.
Thank you very much
[Source: OPCFW_CODE]
I prefer to use ChiliPeppr to send G-code to my X-Carve. However, I am unable to send large files (I tried a file with 400k lines just now). Is there a way to increase something to allow larger files? I am currently running the file with UGS with no issues; I just prefer the ChiliPeppr interface.
Any help would be appreciated.
For some details, I am running windows 10 on a fairly powerful system, so the system is not in question. 3Ghz quad core processor with 16gb of ram, dual video cards…etc…
I am running in Chrome.
Hey Eric, I didn't try it, but there is a setting in the top left corner: click the sender settings and see what your max command line value is. Maybe increasing that number will allow you to load bigger G-code.
I’ll take a look, have to wait until the carve is finished…lol
I think maybe that is the command length per line. If it is per row, you're the winner.
By the way, Pic and Grave is the one that takes huge G-code files.
I have loaded huge G-code files on ChiliPeppr with no problem; I use Firefox. When a file is too big (5 megabytes +), it will move the file and store it in RAM to run, which can sometimes crash your browser (it happened to me when running Chrome), so I tried it on Firefox and I haven't had a problem since.
I’ll try it in firefox, but my file was 10.3 mb. Rechecked it, 471k lines of code.
I’ve run a 780,000-line file from ChiliPeppr on my bog-standard, low-spec 2013 MacBook without issues. That was a 9.5-hour job. Sure, it couldn’t store it in localStorage and I got the warnings, but it ran just fine.
I always get the warnings, but on big files, it crashes…never finishing the load.
Chrome version 45.0.2454.93 (latest version) windows 10.
Loaded the 10mb file with Chilipeppr and the latest Firefox, appeared to load fine. Tried a 15mb gcode file, no luck…Firefox crashed. Looks like I will continue to use UGS with large files. Unless anyone has any suggestions.
Hmm, that is odd. I’ve run a nearly million-line file (a 15-hour job) with no problem. I got the errors but it ran through just fine. Specs of my computer:
3.0 GHz overclocked quad-core with hyper-threading (3.8 max)
Integrated Intel graphics
No idea…oh well…UGS works…I just prefer Chilipeppr,…more options.
Seriously, try PicSender. I believe the trial version has a time limit, which means you can load it and see. If it works, the licence is lunch money. There are instructions and discussion on this forum; you may want to search.
@AlanDavis I will give it a shot this weekend. Thanks.
I installed the trial version of Picsender.
Connected to the correct COM port with 115200 for speed. (It connects with no issues)
I have set my startup block to reflect imperial units and a default feed rate.
Current Status shows Idle
I have selected 1 for my jogging distance
Incremental (G91) is shown above the jogging buttons
I press the onscreen button to move any axis and the screen updates as if the machine has moved, but no actual movement on the machine.
Changed $13=0 to $13=1 (report inches) and it now jogs correctly…odd?
Paid the $15…I’ll do a test cut on one of the crosses and see how it goes.
Apparently I have no patience…waiting for the registration code…want to send the file to the machine…lol
[Source: OPCFW_CODE]
Sunday, September 11, 2011
I guess you have unzipped the CSU Face Evaluation System, which I explained in the previous post. After unzipping, let's call the folder we unzipped into the Main folder.
Inside the Main folder there is a README file that explains how to install the CSU Face Evaluation System. Do that, and make sure it completes without any errors. If there is an error, it is usually related to a missing package; look through the error carefully and install the missing packages.
After installing, the README lists some tests based on the scrapshots data in the Mainfolder/data/csuScrapShots/source/pgm folder. These are images from the 1925 CSU yearbook, so we can perform initial tests of the PCA, LDA, Bayesian, and EBGM approaches using the commands listed in the README.
The README also includes tests for the grey FERET database, if you have it. If you don't have the FERET database, you can try to find it online via Google. I would have given it to you, but its redistribution is prohibited.
After running the tests on either or both of them, you can view the results in the Mainfolder/results folder.
The UsersGuide PDF document in the main folder explains everything you need to know to get started with the system. It is like a bible for the CSU Face Evaluation System.
In the main folder, the src folder contains the C programs for the various executables that we use while running biometric face recognition tests.
The data folder is the epicentre of all the data used in running the tests. The normalised images are also stored here.
The distances folder contains the distance files for PCA, LDA, and the other approaches. Carefully observe these distance files, relating them to the data: if you build your own face recognition system, you will have to produce distance files similar to the ones present here, keep them in folders just like PCA, LDA, and the rest, and then trigger the comparison scripts.
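To give a feel for what such a distance file amounts to, here is a hypothetical sketch (this is not the CSU tool's exact on-disk format; the file names and feature vectors are invented) of computing pairwise distances between projected face vectors:

```python
import math

# Hypothetical PCA-projected feature vectors, one per face image.
faces = {
    "subj01_a.pgm": [0.12, 0.80, -0.33],
    "subj01_b.pgm": [0.10, 0.75, -0.30],
    "subj02_a.pgm": [0.90, -0.20, 0.50],
}

def euclidean(u, v):
    """Plain Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Conceptually, one distance file per probe image: each line pairs a
# gallery image name with its distance to the probe.
for probe, pvec in sorted(faces.items()):
    lines = ["%s %.6f" % (g, euclidean(pvec, gvec))
             for g, gvec in sorted(faces.items()) if g != probe]
    # In a real run these lines would be written under the distances folder.
    print(probe, lines)
```

Images of the same subject end up with small distances, which is what the comparison scripts exploit when ranking matches.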
The final results are stored in results folder. The training data, which is created when we are doing PCA, LDA or Bayesian training is stored in train folder.
This is it, guys: I have shown you a way into biometric face recognition. Now it's up to you to figure it out. If you need any help, leave a comment; I am always ready to help. If I think there is a need for an extra post, I will definitely write it.
But I don't want you to stick to my blog for your entire career. That is the reason I kept a time frame.
Life has to go on. Hit the "+1" on the right sidebar and say goodbye.
[Source: OPCFW_CODE]
cannot load new trained model for inference
Dear Cellpose developers,
Train
I trained a new model with my own data by
python -m cellpose --train --use_gpu --dir /media/cellpose/imagesTr/imagesTrTif/ --chan 2 --chan2 1 --n_epochs 600 --pretrained_model None --batch_size 8 --dir_above --save_each --verbose --img_filter '_img' --mask_filter '_masks'
The trained model was saved in the image path
Inference
from cellpose import models, io
import os
join = os.path.join
import numpy as np
from tqdm import tqdm
import tifffile as tif
# model_type='cyto' or model_type='nuclei'
model_path = '/media/cellpose/imagesTr/imagesTrTif/models/cellpose.028618_epoch_599'
model = models.CellposeModel(gpu=True, model_type='cyto', pretrained_model = model_path)
print('model info:', model.pretrained_model)
print('start predicting....')
img_path = '/media/cellpose/imagesTs/images'
names = sorted(os.listdir(img_path))
seg_path = '/media/seg-images/cellpose-seg'
os.makedirs(seg_path, exist_ok=True)
chan = [2,1]
for i, name in enumerate(tqdm(names)):
    save_name = name.split('_img')[0] + '_label.tiff'
    if not os.path.isfile(join(seg_path, save_name)):
        if name.endswith('.tif') or name.endswith('.tiff'):
            img = tif.imread(join(img_path, name))
        else:
            img = io.imread(join(img_path, name))
        masks, flows, styles = model.eval(img, diameter=None, channels=chan, net_avg=False, progress=True)
        tif.imwrite(join(seg_path, save_name), masks)  # save the predicted masks
However, I find the script does not load my trained model. It still loads the default model [/home/.cellpose/models/cytotorch_0].
How can I load the newly trained model for inference?
I also tried to replace the default model file in .cellpose/models with the newly trained model, but got the following error:
model = models.CellposeModel(gpu=True, model_type='cyto', pretrained_model = model_path, net_avg=False)
File "/path to/cellpose/models.py", line 417, in __init__
self.net.load_model(self.pretrained_model[0], cpu=(not self.gpu))
File "/path to/cellpose/resnet_torch.py", line 212, in load_model
self.load_state_dict(torch.load(filename))
File "path to/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1497, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for CPnet:
Unexpected key(s) in state_dict: "diam_mean", "diam_labels".
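One hedged workaround for that particular error (assuming the extra keys really are just model metadata that the older network class does not expect) is to drop them from the state dict before loading. The filtering itself is plain dictionary work:

```python
def strip_extra_keys(state_dict, extra=("diam_mean", "diam_labels")):
    """Return a copy of a state dict without the named metadata keys."""
    return {k: v for k, v in state_dict.items() if k not in extra}

# Usage with torch would look roughly like this (sketch; 'model' and
# 'model_path' as in the snippet above):
#   import torch
#   sd = torch.load(model_path, map_location="cpu")
#   model.net.load_state_dict(strip_extra_keys(sd))
# Alternatively, load_state_dict(sd, strict=False) ignores unexpected keys.

sd = {"diam_mean": 30.0, "diam_labels": 17.0, "conv1.weight": [1, 2, 3]}
print(strip_extra_keys(sd))  # only 'conv1.weight' remains
```

Whether the stripped metadata matters for inference depends on the Cellpose version, so the fix the developer gives below is the safer route.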
I've added a function io.add_model(/full/path/to/model) to add the model file to the hidden folder for use with the GUI and CLI. You can also access it by running python -m cellpose --add_model /full/path/to/model.
In your case, though, it wasn't working because you were using model_type='cyto' -- the model defaulted to that input. In the future, if you don't want to add your models to the hidden folder, say:
model = models.CellposeModel(gpu=True, pretrained_model = model_path)
[Source: GITHUB_ARCHIVE]
DISCLAIMER: This is a CUSTOM ROM! Use at YOUR OWN RISK!!! By using this ROM, you understand that I am not liable for any or all consequences of using this ROM. If you don't know what you're doing, DON'T TRY IT. Use at your own risk!
Here ya go folks, CyanogenMod 10 for the Warp Sequent!!! Well, let's be honest, unless you're living under a rock, you probably know what Android 4.1 (JellyBean) is. Furthermore, you should know what the CyanogenMod project is. However, if a rock is really your home and you'd like to learn more, please visit their site: http://www.cyanogenmod.com
Developers: PlayfulGod, Hroark13, SuperR, Junkie2100
Description: CyanogenMod 10 for the Warp Sequent
CyanogenMod 10 What is this? This is a build of the popular CyanogenMod 10 for the ZTE Warp Sequent N861. This is built from source.
People to Thank
- koush - for his kickass recovery.
- CyanogenMod - for the best ROM out.
- Robotech - for his endless testing, feedback, and input on fixes.
- hroark13 - for helping and his configs used from the warps device repo.
- downthemachine & Agat - for their modded kernel source and mic fix.
- Shinru2004 - for taking time away from his rom to lend a hand, and entertaining us by picking on dizzle lol
- DM47021 - for his video codec building fix
- Dimbulb - for taking some initiative and getting PG a dev unit to work on
- numerous testers - thanks for testing & keeping the faith.
- IRC - for the ongoing help and shared knowledge.
** #wrpsequent (Skyn3t)
** #oudhitsquad (Freenode)
** #cyanogenmod-dev (Freenode)
** #koush (Freenode)
What's working
- It boots
What's NOT working
- Mobile Data
- Voice Calling
- Video Playback
Remember: THIS IS NOT FOR DAILY USE, THIS IS ONLY A TEST!
How do I install this?
- Download the .zip & copy to the sdcard (make sure wifi is enabled if dl'd from phone).
Zip can be installed from the external sd in CWM
MAKE A BACKUP!!!
DO A FACTORY RESET!!! - this will wipe data & cache!
Go to install zip from sdcard
Choose zip from sdcard
Select cm-10-xxxxxxxxxx-UNOFFICIAL-warp2.zip
Repeat Steps 5 -8 for gapps
0.1alpha (Initial Release)
-main speaker working
-usb adb, and mtp connections
-fixed mobile data connection; it always says 1x, but don't be fooled
-voice calling works now, though due to the mic bug it's not always usable
-sms works; mms is still broken till we get the APNs right
-enabled wifi hardware
-some work on audio routing, still buggy and tends to cut out the mic
-repaired wifi tethering, and enabled tethering menu
-fixed problems with wifi password storage
-wifi temporarily broken due to a flaw in the service start code (will fix shortly)
-audio routing and mic fixed
-GPS working thanks to whatever it is playfulgod did... still have no idea how he did that lol
-wifi working other than wifi direct
This list will be updated as we find them and fix them.
-bluetooth not functional at all
-video codec buffers become easily overloaded, causing the inability to play most formats
-camera is not functional at all
-compass non-functional
-proximity sensor non-functional
-mms doesn't work to send or receive without reprogramming it manually
updated zte kernel for cm10
cm10.1 is in the works now thanks to PG and Dimbulb's purchase of the development unit. I'll change this OP for it as soon as we get it up to par with cm10 (which, as far as I know, is down to getting audio to work), so stay tuned. For the newest builds you can hit up the wiki.
[Source: OPCFW_CODE]
central a/c won't cool
My A/C unit was just checked last week; the tech said it had enough freon. Monday we had a nasty storm: the electricity flickered but never went out. After the storm, we noticed the A/C turns on, but it doesn't cool. Checked the thermostat and the circuit breaker; both are working correctly. What could be wrong? The unit is 22 years old. The tech did say we are running on borrowed time. He did mention replacing the capacitor and contactor.
When did "the tech" say that the time was borrowed?
How have you verified that the thermostat is working? Is it turning the A/C unit on and off on the "cool" mode?
It could be a variety of things, but there's no way to be sure without troubleshooting the system.
Thermostat
It could be that the thermostat isn't signalling the A/C unit to start.
Things to check
The indoor blower comes on
The outdoor unit turns on
There's voltage on the A/C signal wire from the thermostat
Control board
The control board in the air handler/furnace could be bad.
Things to check
The indoor blower comes on
The outdoor unit turns on
There's voltage on the A/C signal wire to the A/C unit
Contactor
If the contactor is bad, the A/C unit will get the signal to come on, but the unit will not turn on.
Things to check
The outdoor unit turns on
There's voltage at the normally open contacts of the contactor (T1, T2, T3)
Blower
If the blower in the air handler/furnace is bad, cool air will not be blown throughout the home.
Things to check
The blower in the air handler/furnace comes on
Air flows from the supply ducts
Condenser fan
If the condenser fan isn't working, the system will have difficulty removing heat from the refrigerant.
Things to check
The fan on the outdoor unit (condenser fan) comes on
Compressor
If the compressor in the condensing unit is bad, the system will not move refrigerant through the system.
Things to check
The refrigerant lines are different temperatures
The compressor turns on (may have to touch the unit to tell)
Restricted air
If air isn't moving through the system properly, it won't work well.
Things to check
Air is being moved by condenser fan
Condensing unit coils look clean and free of debris
Air handler/furnace filter(s) are clean
Air is flowing from the supply ducts
Evaporator coils are clean and free of dust and ice
Refrigerant level
If there's not the correct amount of refrigerant in the system, it will not function properly.
Things to check
Refrigerant lines are at the proper operating temperature (requires knowing the operating temperature)
Operating pressure of the refrigerant lines (requires a set of gauges)
|
STACK_EXCHANGE
|
Can you identify this gunk from a washing machine?
I’ve got a clothes washer that drains into a slip sink in our basement. This evening we did a normal load of towels, and later discovered that the sink had clogged up and overflowed. In the drain I found a lot of slightly rubbery, soft material that looks like this:
Any idea what it is? The machine is 20+ years old. My guess is that it’s just some scummy stuff that builds up over years and then one day the whole layer peels off and washes away. But I’m concerned that it might instead be some important part of the machine, like a seal that has broken down. Has anyone encountered something like this before?
It looks to me very much like 1,743 layers of fabric softener deposit from the inside of the outer tub (that's about sixteen years worth of loads at twice a week). Fabric softener leaves a thin layer of oily residue every time it is used. Some people disassemble their washing machine and remove this gunk as part of a cleaning or refurbishing procedure.
Usually it is quite a chore to clean, but I can imagine chunks of it breaking off given some circumstances, like an extra large load washed with hot water or after some time of disuse where the stuff had time to dry out and shrink, breaking loose from the plastic tub by itself.
Using less fabric softener than the people who make money off it would like you to use, and not using it in every load, reduces the problem significantly. I use very little, and have never had more than a thin skin of build-up even after several years (when I had to dismantle the machine for unrelated repairs). I dislike the smell, so only use it on towels that are to be dried outside, plus it's not good for sportswear which accounts for a lot of my washing.
That's a pretty good hypothesis. This is also something that can result from using too much detergent because many detergents contain fabric softeners. That tends to be more of an issue with high-efficiency washers, though.
@JimmyJames good point - and detergent manufacturers often recommend using too much. If I did as I was told with (powder) detergent, it wouldn't all dissolve and it would accumulate in the dispenser drawer. Even if it doesn't contain fabric softener itself, detergent can still contribute residue
@ChrisH When I got my high-efficiency washer, the salesperson stressed the importance of not using too much detergent. It's apparently the #1 cause of issues with these washing machines. Apparently, my spouse didn't hear that, and we started having issues with what I would call funk-ass-smell. Got some cleaning tablets and those seem to have helped along with using a very small amount of detergent. It really doesn't take much at all.
FWIW we do a lot more than 2 loads/week, but don't normally use any fabric softener beyond whatever is mixed in with the detergent. Even so, the essential point here is that it looks like something built up over time, and not some deteriorated part of the machine itself. The beige surfaces with greenish interior give it a man-made look to me, but that's just an impression.
Yeah, I just invented the numbers... but it certainly isn't a part of that style of washer, which has black rubber seals where needed. I do agree with Chris's comment about usually using less than the sellers recommend, no matter what the type of machine--although in the case of front loaders, it is a bigger deal because they use so much less water per cycle.
@Conrado Just to add to your comment on front-loaders: I have a high-efficiency top-loader. The easiest way to tell the difference is that there's no agitator post on HE top-loaded washers.
@Conrado mine is a front-loader. Top-loaders are rare here - we tend to have less floor space in our houses and want a useful surface over the top (even those of us who are lucky enough to have utility/laundry rooms. We also have higher expectations for water and energy efficiency.
Ruskes won't like this because I'm giving an answer that doesn't 100% match the question, but I'm doing it anyway, because that's what I do!
I don't know what that stuff is. It could be:
Some collected residue that broke loose
Some part that broke apart
Something that was wrapped in one of the towels
If the machine still works then I wouldn't worry about it. If it won't run properly any more, or if it leaks, then it needs investigation.
But having had way too many drain clogs over the years, for a variety of reasons, there are some basic things you can and should do to prevent at least some of the clogs. One of them is specifically for the washing machine if it drains into a sink. A lint trap can do wonders:
This is an example from Amazon (easiest way to find stuff with pictures) but you can get them in any hardware store and even in many grocery stores. This will prevent most lint (which everyone has) from going down the drain, and it will definitely catch the bigger stuff like you encountered today.
Change it every few weeks (depending on how many loads you do) when it starts to get clogged up. Inexpensive protection against one of the ways your drain can clog.
What a contraption; it will not prevent the deposits from washing powder, which is what the OP has.
@Ruskes These are extremely common types of lint traps. The gunk that OP found would absolutely have been blocked by one of these lint traps.
Just to be clear, yes, it's true that a trap like this wouldn't prevent the deposit from forming, but it would indeed have either caught it or forced it to break into pieces small enough that they wouldn't have lodged in the drain.
That looks like the bacteria film that I find in my sink drains.
|
STACK_EXCHANGE
|
Graylog 3.0 (beta)
Hiya - have you tested this with v3.0 (beta)? Getting an error 500 when clicking install after importing the first pack:
Error
Installing content pack failed with status: Error: cannot POST http://<IP_ADDRESS>:9000/api/system/content_packs/e37b2103-eee3-4b77-bcfd-e3d9957ab79f/0/installations (500). Could not install content pack with ID: e37b2103-eee3-4b77-bcfd-e3d9957ab79f
Cheers
Adam
I'm also receiving the same error when using Graylog 3.0.
Same here. Any chance for an updated content pack?
Hey There!
It's good to see that you are interested in running the content pack on Graylog 3.0.
Because of compatibility issues with my config, I'm actually stuck on 2.x.
BUT there will be a release of graylog-cp-watchguard that is able to run on Graylog 3.x.
Please be patient and feel free to contribute.
Have a nice day!
Cheers
Thomas
Great stuff – thanks Thomas.
Will contribute if we can!
Hi Thomas! I created 4 PRs for the Content Packs listed. I mostly went through and recreated everything by hand, and as such, there may be something I missed. I had some odd issues with the existing regex, like double or triple backslashes causing Graylog to throw errors, but it all seems to work with my 5 Fireboxes.
Thanks a lot! manikmakki.
There is a new branch "legacy" to support old versions. Your patches are merged into the development branch for the upcoming version 0.6...
In the meantime, everybody with compatibility issues should use the develop branch.
Thanks!
Cheers Thomas
|
GITHUB_ARCHIVE
|
A small PHP based blog engine rendering static content.
- Generate static content
- On-line web admin; no custom tools, deploy scripts, etc. needed
- Built in PHP, as it is available everywhere
- The whole thing as a single PHP file
- No SQL database
- Simple yet fully usable
- A web server site that can run PHP.
- FTP access or other method to initially create the admin directory on the site and to upload the index.php file.
- PHP scripts must be allowed to browse and modify files and directories within the site.
- Blog posts can be created, edited and deleted
- Posts can be worked on internally and later be published
- Support for creating posts as plain text or HTML (no fancy editor yet)
- Markdown can be used if Parsedown.php is downloaded and put into the admin directory next to index.php.
- Primitive Atom (RSS) feed
- Skinning support
- Log into the site with FTP or via some other transfer method.
- Create your admin directory. You should give it a name that is not easy for a potential hacker to guess, but one you will remember. The admin page will also be protected by a password, but the best way to keep attackers away is to ensure the path to admin is never guessed.
- Upload index.php into your admin directory.
- Open your browser and surf into your site and the admin directory you created. E.g. http://mycoolsite.com/longsecretadmindirectory/
- Follow the installation wizard
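As a purely illustrative way to come up with a hard-to-guess directory name for step 2 (Python here; the helper and the admin- prefix are assumptions, not part of MogBlog), a cryptographically random suffix works well:

```python
# Hypothetical helper (not part of MogBlog): generate a hard-to-guess
# name for the admin directory before creating it over FTP.
import secrets

admin_dir = "admin-" + secrets.token_hex(8)  # 16 random hex characters
print(admin_dir)
```

Any other source of strong randomness works just as well; the point is only to avoid guessable names like "admin" or "backend".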
MogBlog can be customised with a skin file. An example rendering the MogBlog default theme is available as the file skin.php. The skin file must be named exactly like this and be placed in the same directory as index.php. See the comments in skin.php for explanations about the macro placeholders that can be used.
Known issues and missing stuff
- No support yet for uploading images or other files. You need to upload such things manually for now. A file handling feature is planned.
- Only one admin user can be configured.
- No paging or "archives". All posts are listed on the front page as of now.
- Changing time zone will make all posts appear as new in the Atom feed.
- No auto-saving or revision handling (backup) of changes.
- Better documentation needed
- No comment system. This is partly by design, but built-in support for Disqus or similar would be nice.
- No built-in search function. You have to rely on being indexed by public search engines like DuckDuckGo
- No friendly reset procedure if you forget your password. You are forced to manually edit config.php and set the user hash to an empty string to allow login without password so you can set a new one.
|
OPCFW_CODE
|
Support of intel packages
I would like to package in a pex, the intel packages of the classic python numerical packages (numpy, scipy, etc.)
Pex seems unable to find the package that provides numpy because the package name differs.
Here is a simple reproducer:
System description
pex version: 1.6.11
python version: 3.6.3
OS: centos7
Using classic numpy, it works
pex numpy -o numpy.pex
./numpy.pex -c "import numpy; print(numpy.__file__)"
/home/jd.lesage/.pex/install/numpy-1.17.2-cp36-cp36m-manylinux1_x86_64.whl.0833169d405facf4fccb13d37dcbf723d0827c1e/numpy-1.17.2-cp36-cp36m-manylinux1_x86_64.whl/numpy/__init__.py
Using intel numpy, import fails
pex intel_numpy -o intel_numpy.pex
./intel_numpy.pex -c "import numpy; print(numpy.__file__)"
Traceback (most recent call last):
File ".bootstrap/pex/pex.py", line 397, in execute
File ".bootstrap/pex/pex.py", line 329, in _wrap_coverage
File ".bootstrap/pex/pex.py", line 360, in _wrap_profiling
File ".bootstrap/pex/pex.py", line 445, in _execute
File ".bootstrap/pex/pex.py", line 484, in execute_interpreter
File ".bootstrap/pex/pex.py", line 535, in execute_content
File ".bootstrap/pex/compatibility.py", line 81, in exec_function
File "-c ", line 1, in
ModuleNotFoundError: No module named 'numpy'
Alrighty - intel-numpy is an exceedingly odd package:
$ unzip -l intel_numpy-1.21.5-cp39-cp39-manylinux2014_x86_64.whl | grep numpy/__init__.py
16819 2023-11-30 14:03 intel_numpy-1.21.5.data/data/lib/python3.9/site-packages/numpy/__init__.py
133925 2023-11-30 14:03 intel_numpy-1.21.5.data/data/lib/python3.9/site-packages/numpy/__init__.pyi
It stuffs the code in data directories instead of at the top level. This will never work with a traditional PEX, but things do work in --venv mode:
$ pex intel-numpy --venv -o intel_numpy.pex && ./intel_numpy.pex -c "import numpy; print(numpy.__file__)"
/home/jsirois/.pex/venvs/440b59b00eab62a3ec204417f71948250dfe5db1/108a3ddc84230ab282ea6312e06cb68f51008ce5/lib/python3.9/site-packages/numpy/__init__.py:164: UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service
from . import _distributor_init
/home/jsirois/.pex/venvs/440b59b00eab62a3ec204417f71948250dfe5db1/108a3ddc84230ab282ea6312e06cb68f51008ce5/lib/python3.9/site-packages/mkl_fft/_numpy_fft.py:206: SyntaxWarning: "is" with a literal. Did you mean "=="?
elif norm is "forward":
/home/jsirois/.pex/venvs/440b59b00eab62a3ec204417f71948250dfe5db1/108a3ddc84230ab282ea6312e06cb68f51008ce5/lib/python3.9/site-packages/numpy/__init__.py
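A minimal, hypothetical re-creation (plain Python, no pex involved; all names below are invented for illustration) of why this wheel layout breaks a flat import path: the package is buried under a *.data/ subtree, so its parent directory never lands on sys.path.

```python
# Mimic a wheel that buries its package under a data directory, then
# show that having the top level on sys.path is not enough to import it.
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "intel_demo-1.0.data", "data", "site-packages", "demopkg")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, root)  # the top level is on sys.path, but demopkg is not there
try:
    import demopkg
    found_at_top = True
except ImportError:
    found_at_top = False

# Putting the buried site-packages directory itself on sys.path (roughly
# what installing into a real venv, i.e. --venv mode, achieves) fixes it.
sys.path.insert(0, os.path.dirname(pkg_dir))
import demopkg

print(found_at_top, demopkg.VALUE)  # False 42
```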
In general, building a PEX with --venv prepend --venv-site-packages-copies will net you a runtime environment most compatible with what even atypical distributions expect. The cost is a slightly slower cold boot, but faster runs thereafter. You can also use PEX_TOOLS=1 intel_numpy.pex venv --compile --site-packages-copies right/here to explicitly turn the PEX file into a venv if, for example, you're building a Docker image from it.
@jdlesage I'm going to close this as an answered question. The Pex --venv functionality did not exist in 2019, but it has now for several years and is generally what you want. You may also want to investigate --sh-boot.
|
GITHUB_ARCHIVE
|
By Greg Weinger, Engagement Product Line Manager
I'm excited to announce our official support for two new mobile app development frameworks: Unity and Xamarin. Combined with our official support for PhoneGap/Cordova, we now support three of the top frameworks in the market.
Urban Airship provides bindings or plugins for each of these frameworks, open source access to framework code, and full documentation:
Why Development Frameworks?
Frameworks can accelerate app development by reducing the amount of code necessary to build a mobile app. Code typically runs on multiple platforms (at least iOS and Android), saving the effort of building and maintaining separate native apps.
Not only do frameworks save the effort of building the same app in multiple languages, they take on the significant burden of bug fixing and keeping up with updates to mobile OS versions.
Further, mobile development is a complex and specialized skill, and its developers are in high demand. By reducing complexity, frameworks reduce the overall skill level required to develop a mobile app, enabling companies to hire less expensive developers and (potentially) leading to significant cost savings over time.
Historically, the trade-off with framework-based apps has been reduced flexibility: it's easy to do the simple things, but hard or impossible to do anything more complex. Some companies ended up accepting less functionality than they desired, while others were forced into a costly rewrite of the app in native code, with slipped deadlines.
Frameworks have also created waves of apps with a more homogenized look and feel, as companies relying on less-skilled developers get little more than what the framework can offer out of the box. They can also expose the company to quirks of the framework, which is necessarily less well-tested than the operating system it sits on.
Newer Development Frameworks
This latest generation of mobile app frameworks has matured, and its increasing adoption and popularity result in more thoroughly tested products. They provide greater flexibility with better quality than in the past. Some frameworks, such as Xamarin, provide maximum flexibility by allowing developers direct access to native APIs, so that nothing is out of reach.
We see many customers successfully using development frameworks in a variety of ways. Some customers use frameworks for the first version of their app, gather feedback, and then go on to build subsequent generations of the app in native code. This allows them to iterate rapidly to the point where they feel they can commit to a certain direction.
Other customers have found that frameworks support their current app use cases well into the future.
We have chosen three leading frameworks, based on market developments and the demand we’ve seen from our customers and prospects. All of our frameworks provide support for iOS and Android only, sitting atop our mobile SDKs.
Urban Airship offers plugins for PhoneGap on both iOS and Android. For more details see PhoneGap/Cordova.
In contrast, Xamarin lets developers create native apps in C#. It provides an IDE for development and provides native performance, with direct, full access to native APIs. This gives developers the best of both worlds.
Read our documentation for Xamarin plugins. Our native bindings mean that you have full access to our entire SDK.
Last but not least, Unity is a platform for developing 2D and 3D games and interactive apps. It ships with its own editor, suited to the demands of visual programming.
Get started with Unity on Urban Airship.
Development frameworks are a great way to accelerate your mobile development efforts or reduce their cost. We're excited to watch the market develop as these and other frameworks continue to mature.
|
OPCFW_CODE
|
Thanks for all your help over the last couple of days. One more question:
- Can I plot particles on a volume rendered image?
I have stars and I want to show where they are!
Elizabeth Harper-Clark MA MSci
PhD Candidate, Canadian Institute for Theoretical Astrophysics, UofT
Sciences and Engineering Coordinator, Teaching Assistants' Training Program,
Astronomy office phone: +1-416-978-5759
Does anyone out there have a technique for getting the variance out of
a profile object? A profile object is good at getting <X> vs. B; I'd
then like to get < (X - <X>)^2 > vs. B. Matt and I had spitballed the
possibility some time ago, but I was wondering if anyone out there had
successfully done it.
Sent from my computer.
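For the variance question above, here is a plain-NumPy sketch (not yt-specific; the fields and uniform weights are placeholders) of the identity <(X - <X>)^2> = <X^2> - <X>^2 applied per bin of B, i.e. binning both X and X^2 and subtracting:

```python
# Recover the variance of X binned against B from two weighted profile
# averages, using Var(X) = <X**2> - <X>**2 within each bin.
import numpy as np

rng = np.random.default_rng(0)
B = rng.uniform(0.0, 1.0, 10_000)
X = rng.normal(loc=B, scale=0.5)   # true in-bin variance is ~0.25
w = np.ones_like(X)                # stand-in for e.g. cell-mass weights

edges = np.linspace(0.0, 1.0, 11)
idx = np.digitize(B, edges) - 1    # bin index 0..9 for each sample

mean_X  = np.array([np.average(X[idx == i],    weights=w[idx == i]) for i in range(10)])
mean_X2 = np.array([np.average(X[idx == i]**2, weights=w[idx == i]) for i in range(10)])
var_X = mean_X2 - mean_X**2        # <(X - <X>)^2> per bin of B
print(var_X.round(3))
```

With a real profile object the same subtraction works on the profiled <X> and <X^2> fields.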
I'm trying to make in-situ visualization available for a simulation,
and I'm considering different software packages for that (VisIt, YT, ...).
I've never used YT, so my question is: is it possible to add
some code in a simulation (written in C++) to have some kind of
distributed rendering engine using YT, just like VisIt does with
ENS Cachan, antenne de Bretagne
Département informatique et télécommunication
I may be asking something that is obvious to others, but it's not to
me. Is it possible to have more than one data field displayed in a
single volume projection image? Say, each with a different color map?
If so, how? Thanks!
510.621.3687 (google voice)
Is there a way of passing field parameters to extract_connected_sets?
works fine but
contours = dd.extract_connected_sets("NegEscapeVelocity", 1, 30.0, maxv, log_space=False)
complains it can't find the set field variables.
I have a 1536^3 ENZO datacube that I want to smooth to essentially become a
512^3 datacube. I want to be able to still use the datacube to do further
analysis with yt, so I need the format of the data not to change. I only
need to smooth the data of 3 fields, but I need to keep the Derived
Quantities and parallel capabilities of yt. Is there any yt implementation
that can do this or any other suggestions?
I don't know how widely this has been discussed (sorry if it has and
I've just missed it), but how easy would it be to install YT such that
it interacts with my existing Python packages? Is the easiest thing
simply to re-install everything to the YT directory and re-update the
PYTHONPATH environment variable for any packages not in the default
Python path, or is there a better way to allow YT and my existing Python
packages to co-exist without activating/deactivating YT?
I'm having trouble adding text through a callback onto a phase plot.
I tried this code:
pc.add_phase_sphere(r200[j],'kpc',["Density","Temperature","CellMassMsun"],weight = None)
And I get this error:
Traceback (most recent call last):
File "test_phase_plots.py", line 41, in <module>
"/share/home/01112/tg803911/yt_17May2011/yt-x86_64/src/yt-hg/yt/visualization/plot_collection.py", line 157, in save
"/share/home/01112/tg803911/yt_17May2011/yt-x86_64/src/yt-hg/yt/visualization/plot_types.py", line 108, in save_image
"/share/home/01112/tg803911/yt_17May2011/yt-x86_64/src/yt-hg/yt/visualization/plot_types.py", line 903, in _redraw_image
"/share/home/01112/tg803911/yt_17May2011/yt-x86_64/src/yt-hg/yt/visualization/plot_types.py", line 800, in _run_callbacks
"/share/home/01112/tg803911/yt_17May2011/yt-x86_64/src/yt-hg/yt/visualization/plot_modifications.py", line 910, in __call__
y = plot.image._A.shape * self.pos
IndexError: tuple index out of range
The error arises in the save command, but the problem only happens when
I add the modify["text"] command. Any ideas?
thanks a lot, setting y_log=False does the trick!
> Message: 3
> Date: Fri, 30 Sep 2011 16:32:58 -0600
> From: David Collins <dcollins(a)physics.ucsd.edu>
> To: Discussion of the yt analysis package
> Subject: Re: [yt-users] log-linear phase plot
> Content-Type: text/plain; charset=ISO-8859-1
> I've run into this as well. I believe you need to give
> add_phase_object the log value by hand, with
> On Fri, Sep 30, 2011 at 3:52 PM, Geoffrey So <gsiisg(a)gmail.com> wrote:
>> Just wondering, if FieldOne is already logged, do you still want the limits
>> to encompass 1 to 1e10, have you tried something like (1e9,1e10) ?
>> On Fri, Sep 30, 2011 at 2:37 PM, Wolfram Schmidt
>> <schmidt(a)astro.physik.uni-goettingen.de> wrote:
>>> Hi everyone,
>>> I defined two fields, say, FieldOne and FieldTwo, to make 2D phase plot.
>>> FieldOne is logarithmic by default, FieldTwo is linear (i.e., I set
>>> take_log=False in add_field).
>>> I want to produce a phase plot with logarithmic bins for FieldOne and
>>> linear bins for FieldTwo, where the range of FieldTwo is [-10,10]
>>> I thought that
>>> pc.add_phase_object(dd, ["FieldOne", "FieldTwo", "CellMass"], weight=None,
>>> x_bins=100,y_bins=100, \
>>> x_bounds=[1e0,1e10], y_bounds=[-10,10])
>>> might do the job, but yt returns the error message:
>>> Warning: invalid value encountered in log10
>>> yt : [ERROR] 2011-09-30 16:55:05,610 Your min/max values for x, y have
>>> given me a nan.
>>> yt : [ERROR] 2011-09-30 16:55:05,610 Usually this means you are asking
>>> for log, with a zero bound.
>>> Traceback (most recent call last):
>>> File "stability.py", line 352, in <module>
>>> x_bounds=[1e0,1e10], y_bounds=[-10,10])
>>> line 1149, in add_phase_object
>>> lazy_reader)
>>> line 408, in __init__
>>> raise ValueError
>>> It appears that add_phase_object treats FieldTwo logarithmically, although
>>> it is linear.
>>> So how can I do a log-linear phase plot?
>>> yt-users mailing list
>> yt-users mailing list
|
OPCFW_CODE
|
PostgreSQL "pass-through" aggregate function that uses other aggregate functions
Is there a way to create a PostgreSQL aggregate function that can call other aggregate functions, handle exceptions, and return a single value?
This query takes a set of geometries, merges them together, tests whether a constant geometry is fully within the merged result, and returns true or false. The problem is that merging will often raise an exception for various reasons, so I need a fallback, i.e. an Excel-style iferror() function that will execute another operation like ST_Union() instead of ST_Collect(). If that also fails, ideally I should iterate over all geometries individually to see if any one of them matches my test.
SELECT ST_WITHIN(
ST_GeomFromText('POLYGON((0 4096,0 0,4096 0,4096 4096,0 4096))', 3857),
ST_COLLECT(geometry)
) AS IsEmpty
FROM (select geometry from ...) AS src;
Pseudo code:
FUNCTION test_is_empty(geometries: SET<geometry>)
TRY:
RETURN ST_WITHIN(
ST_GeomFromText('POLYGON((0 4096,0 0,4096 0,4096 4096,0 4096))', 3857),
ST_COLLECT(geometries));
EXCEPT:
TRY:
RETURN ST_WITHIN(
ST_GeomFromText('POLYGON((0 4096,0 0,4096 0,4096 4096,0 4096))', 3857),
ST_UNION(geometries));
EXCEPT:
TRY:
RETURN SELECT MAX(ST_WITHIN(
ST_GeomFromText('POLYGON((0 4096,0 0,4096 0,4096 4096,0 4096))', 3857),
individual_value))
FROM geometries;
EXCEPT:
RETURN FALSE;
END
Yes, you can define your own aggregates. However, calling other aggregate functions is not that easy, as you would need to call their internals directly. A workaround would be to collect all the rows into an array and then run your custom function (much as you defined it) as a finalfunc. I'm not sure about the performance of that approach, though; you might not want to run it on large groups.
Why do you need a new aggregate function? Just implement your pseudo-code.
Thanks @Bergi, I was worried about the perf too, and it seems calling the internals of another function might be a path forward (not desirable, but it seems unavoidable). @Laurenz Albe: my concern was exactly as Bergi mentioned, dealing with aggregates as parameters.
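A hedged, language-agnostic sketch (plain Python, not PostGIS; all function names are invented) of the collect-then-finalize pattern suggested in the answer: collect the rows first, then run one finalizer that tries progressively more robust strategies, mirroring the TRY/EXCEPT chain in the pseudo code.

```python
# Try each combining strategy in order; fall back to a default if all fail.
def finalize(rows, strategies, fallback=False):
    for combine in strategies:
        try:
            return combine(rows)
        except Exception:
            continue  # this strategy failed; try the next one
    return fallback

def fragile_collect(rows):  # stands in for ST_Collect + ST_Within
    raise ValueError("topology exception")

def robust_union(rows):     # stands in for ST_Union + ST_Within
    return all(r > 0 for r in rows)

result = finalize([1, 2, 3], [fragile_collect, robust_union])
print(result)  # the first strategy raises, the second succeeds -> True
```

In PostgreSQL the same shape would be array_agg as the aggregate and a PL/pgSQL finalfunc whose BEGIN/EXCEPTION blocks play the role of the try/except chain.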
|
STACK_EXCHANGE
|
In the past couple of years, the field of data science has rapidly shot to fame. The main reason for this is data. In our information-driven world, all of us play a crucial role: from the moment we wake up and glance at our phones to the last moment of the day, we are digital labourers, generating the very data that acts as fodder for the companies over in Silicon Valley.
All of these high-tech companies are behind the increasing demand for data scientists and data analysts. Jobs in this sphere have been steadily increasing and have taken up permanent residence at the top of job search engines all over the web. The titles that beckon data aspirants include Data Scientist, Data Analyst, Data Engineer and many others. While the similarity of these job titles may lead you to believe that these professionals all carry out the same functions, that is not really so. As data science is a vast field with many diverse verticals and untapped areas, there is always something new to do.
Coming back to how these similar-sounding positions actually differ: let us start with Data Scientists. These professionals are popularly known as the rock stars of the Information Technology industry. They are usually in charge of making accurate predictions, which help businesses take the most lucrative decisions. These individuals have a treasure trove of educational qualifications and experience.
They usually come from a background in computer science, modelling, statistics or math. They have an 'IT' factor in the form of a combination of brilliant business skills and excellent communication skills that sets them apart from others in the industry. A Data Scientist's role can be further divided into Data Researcher, Data Developer, Data Creative and even Data Businessman.
Apart from Data Scientist, there is another career option called Data Analyst. These professionals perform a wider spectrum of functions, such as collecting, organizing and analysing data in order to derive important information from it. They are also known as Data Visualizers, as they present this collected and processed data in the form of charts, graphs and tables and go on to build related databases for their firms. They can diversify their careers into roles like Data Architect, Database Administrator, Analytics Engineer, Operations and so on.
The major difference between the two positions is that a Data Scientist is usually required to be familiar with database systems like MySQL as well as languages like Java and Python, whereas a Data Analyst must be familiar with data warehousing and business intelligence concepts and have in-depth knowledge of SQL and analytics.
Differences aside, both positions require a professional to complete a thorough course in programming tools like Python, Big Data Hadoop, SAS Programming, R Programming and so on. While these tools can be learnt through self-study, most prefer institutes like Imarticus Learning to help them along their journey.
|
OPCFW_CODE
|
README.md should describe two expected src directory layouts
Please consider having README.md describe, early on, the
expected source directory layout. This is especially
tricky for a reader because two layouts are used.
Reading from the top down, the first encountered layout
is for the trio project. If I understand correctly, the
.crossType(CrossType.Pure) // [Pure, Full, Dummy], default: CrossType.Full
causes the bar/.js, bar/.jvm, bar/.native, shared form to be used. Since
these are hidden directories, the layout is exceedingly hard to
figure out.
Since I like to prove out my understanding, I usually start simple
and work my way to more complex. I jumped into the second,
duo example before I attempted the trio example. It took
more work than I expected, but I figured out the expected
bar/jvm, bar/native, shared directory structure. This gives
nice, visible directories.
When I extended my duo project to the trio project
described in the README.md, sbt could no longer
find my source code. After rooting around, I discovered
that this is because the duo project defaulted to 'CrossType.Full'.
The trio project had an explicit 'CrossType.Pure', which expected
a different source directory structure.
As both scala-cross & my understanding of it mature, I hope
to understand when one would use a Full & when one would
use a Pure. For now, Full seems to work, so I am using it
to allow me to see my directories.
The good ascii art in file CrossType.scala helped me get the
trio project up and running. Adding that art to the README.md
would, I believe, ease the way of other early & potential adopters.
Thank you for considering this suggestion. As always, thank you
for sbt-cross. I have learned from following the README.md.
Lee
This guy gave it a shot. What do you think?
https://github.com/scala-native/sbt-cross/pull/19/commits/bb595ac71a5673a890398e1b9285a869e26f34ba
Thank you for asking my opinion.
First approximation:
I think bb595ac71a5673a890398e1b9285a869e26f34ba is a useful contribution and a good base for evolution. The perfect is the enemy of the done & useful. Thank you, @rom1dep
Second approximation
I would probably add a "NOTE WELL the leading dot" to the
├── .js
├── .jvm
├── .native
It would be nice to explain the intended purpose/use of the .mumble
directories. I have no clue.
It would be nice to mention which cross type is the default & why.
If CrossType.Full is the default, I would introduce it first.
I am loath to reveal the limits of my understanding to the world, but
I do not understand what the section at the bottom is trying to say.
I will blame the hour for my shortcoming.
`.crossType({/*custom*/})`
+
+One can easily extend CrossType and provide a custom tree structure.
Perhaps the author's intent can be realized by moving the last line way
earlier & providing an example at the bottom .crossType(CrossType.Custom)
+sbt-cross provides 3 implementations, described below, of the CrossType class that one can
+pass as `.crossType` parameter: CrossType.Full, CrossType.Pure, CrossType.Dummy.
CrossType.Full is the default if .crossType() is given no argument.
+One can easily extend CrossType to provide a custom directory tree structure.
Putting my actions where my mouth/typing is, I could do a PR after bb595ac71a5673a890398e1b9285a869e26f34ba is merged.
Third approximation, picky level.
I read for concept, not as a copy editor, but the line
This layout is preferred for codebases which do not contain any platform-specific code.
jumped out at me. Please consider something like the more direct
This layout is preferred for codebases which contain no platform-specific code.
Fixed in https://github.com/scala-native/sbt-crossproject/pull/39
|
GITHUB_ARCHIVE
|
Inverse Limit of Rings (Pt. II)
Point of Post: This is a continuation of this post.
Examples of Inverse Limits
Now that we have defined inverse limits of inverse systems of rings, let’s see if we can explicitly find inverse limits for each of the inverse systems we submitted as examples previously in this post.
For our first example consider the module . It’s easy to see that with the natural projections , and the map is an inverse limit for this inverse system. This is called a pull-back of the diagram associated to the inverse system. These, and their direct limit analogues, will have their own post soon enough–they are of the utmost importance.
Now, for our second example let's take the specific case where we have a chain of the form $\mathbb{Z}/p\mathbb{Z}\leftarrow\mathbb{Z}/p^2\mathbb{Z}\leftarrow\mathbb{Z}/p^3\mathbb{Z}\leftarrow\cdots$ where $p$ is some prime. What we are then looking at is the set of rings $\{\mathbb{Z}/p^n\mathbb{Z}\}_{n\in\mathbb{N}}$ and the set of morphisms $\pi_{m,n}:\mathbb{Z}/p^n\mathbb{Z}\to\mathbb{Z}/p^m\mathbb{Z}$ (reduction modulo $p^m$, for $m\leqslant n$). We call the inverse limit of this inverse system (which, as we shall shortly show, always exists) the $p$-adic integers and denote it by $\mathbb{Z}_p$. There is a natural way to realize $\mathbb{Z}_p$, namely we can view $\mathbb{Z}_p$ as the subring of $\prod_n\mathbb{Z}/p^n\mathbb{Z}$ consisting of those tuples $(a_n)$ such that $a_{n+1}\equiv a_n\bmod p^n$ (it's a slightly annoying, but totally straightforward, exercise to check that this defines a unital subring). So, what are the natural maps $\pi_m:\mathbb{Z}_p\to\mathbb{Z}/p^m\mathbb{Z}$? Why, just the projections onto the $m^{\text{th}}$ coordinate of course! Ok, to prove this we must verify that $\pi_{m,n}\circ\pi_n=\pi_m$ for each $m\leqslant n$ and that the $\pi_m$ satisfy the universal property of inverse limits. For the first we must merely note that $\pi_{m,n}(\pi_n((a_k)))=\pi_{m,n}(a_n)$ but, by assumption (on the way the coordinates interact), this is equal to $a_n\bmod p^m$ which is equal to $a_m$. To see that the $\pi_m$'s satisfy the universal mapping property we suppose that $\{f_n:R\to\mathbb{Z}/p^n\mathbb{Z}\}$ is a set of ring homomorphisms, with $R$ some unital ring, such that $\pi_{m,n}\circ f_n=f_m$ for each $m\leqslant n$. We must now define a map $f:R\to\mathbb{Z}_p$, but clearly it suffices to define a map $f:R\to\prod_n\mathbb{Z}/p^n\mathbb{Z}$ and merely prove that $f(R)\subseteq\mathbb{Z}_p$. But, by the universal characterization of products, to do this we must merely define what we want $\pi_n(f(r))$ to be for each $n$. But, we merely define $\pi_n(f(r))=f_n(r)$. Now, all we have to do (as already stated) is prove that $f(R)\subseteq\mathbb{Z}_p$, but this is equivalent to showing that $f_n(r)\equiv f_m(r)\bmod p^m$ whenever $m\leqslant n$, but note that this is equivalent to $\pi_{m,n}(f_n(r))=f_m(r)$, or that $\pi_{m,n}\circ f_n=f_m$, which is true by assumption. Thus, our map $f$ is clearly well-defined and satisfies $\pi_n\circ f=f_n$. We must now check that this was the only feasible way to define $f$. More concretely, suppose that $g:R\to\mathbb{Z}_p$ is another map such that $\pi_n\circ g=f_n$. We note that, if we include $\mathbb{Z}_p$ into the product by the inclusion mapping $\iota$, we have by assumption that $\pi_n(\iota(f(r)))=\pi_n(\iota(g(r)))$ for all $n$, but by the definition of product this tells us that $\iota\circ f=\iota\circ g$, and since $\iota$ is injective this implies that $f=g$.
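Not from the post, but a quick numerical sketch may help: an ordinary integer already determines a compatible tuple in the product of the rings of integers modulo successive prime powers, where "compatible" means each entry reduces to the previous one modulo the smaller power. The function names below are my own:

```python
def padic_tuple(a, p, length):
    """The tuple (a mod p, a mod p^2, ..., a mod p^length) determined by the integer a."""
    return [a % p**n for n in range(1, length + 1)]

def is_compatible(tup, p):
    """Inverse-system condition: entry n+1 reduces to entry n modulo p^(n+1)."""
    return all(tup[n + 1] % p**(n + 1) == tup[n] for n in range(len(tup) - 1))

t = padic_tuple(1234, 5, 6)
print(t, is_compatible(t, 5))
```

Of course a genuine p-adic integer need not come from an ordinary integer; the point is only that the compatibility condition is concrete and checkable coordinate by coordinate.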
To get an example of the third type of inverse system we consider the special case where we look at the polynomial ring $k[x]$ for some field $k$ and consider the chain of ideals $(x)\supseteq(x^2)\supseteq(x^3)\supseteq\cdots$. We then have the natural maps $k[x]/(x^m)\to k[x]/(x^n)$ (for $n\leqslant m$) which are just the modulo maps. This gives us a nice, well-defined inverse system of rings. What we claim is that an inverse limit of this system is the ring $k[[x]]$ of formal power series over $k$. To see this we define the obvious maps $\pi_n:k[[x]]\to k[x]/(x^n)$ by $\pi_n\left(\sum_k a_kx^k\right)=a_0+a_1x+\cdots+a_{n-1}x^{n-1}\pmod{x^n}$. This clearly satisfies $\pi_{n,m}\circ\pi_m=\pi_n$ for $n\leqslant m$.
It remains now to show that given any set of maps $\{f_n:R\to k[x]/(x^n)\}$ such that $\pi_{n,m}\circ f_m=f_n$ then there exists a unique $f:R\to k[[x]]$ such that $\pi_n\circ f=f_n$. To do this let $r\in R$ be arbitrary. We note then that to define $f(r)$ it suffices to define the coefficients $a_k\in k$ for each $k$. To do this we merely note that there exists a unique $a_k\in k$ such that $f_{k+1}(r)=a_0+a_1x+\cdots+a_kx^k\pmod{x^{k+1}}$. We then define $f(r)=\sum_k a_kx^k$. The fact that each $f_n$ is a ring homomorphism guarantees that $f$ is a ring homomorphism, and trivially we see that $\pi_n\circ f=f_n$. Thus, $k[[x]]$ is the inverse limit of the $k[x]/(x^n)$.
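In the same spirit as before, here is a toy sketch of my own (not from the post) of the truncation maps on coefficient lists, representing a polynomial low-degree-first; the compatibility condition says truncating in two stages agrees with truncating in one:

```python
def truncate(coeffs, n):
    """Image of a polynomial/power series in k[x]/(x^n): keep coefficients of degree < n."""
    return coeffs[:n]

f = [1, 2, 3, 4, 5]  # 1 + 2x + 3x^2 + 4x^3 + 5x^4
# Compatibility: truncating to x^4 and then to x^3 equals truncating straight to x^3.
assert truncate(truncate(f, 4), 3) == truncate(f, 3) == [1, 2, 3]
```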
What we claim is that if we take the trivial inverse system on a set of rings $\{R_i\}$ we get the product ring as a result. To see this we note that we have natural maps $\pi_i:P\to R_i$ (where $P=\prod_i R_i$ is the product ring) given by the natural projections. These trivially satisfy the compatibility relations since we must only check that $\mathrm{id}_{R_i}\circ\pi_i=\pi_i$. Moreover, the universal property of inverse limits then clearly just translates to the usual universal characterization of products.
|
OPCFW_CODE
|
Oops I mean "over a hundred million instanced trees" obviously haha.
Don't forget that when new technology is announced they always list the upper limits of a technology. So it has 1,000 times the potential of current best-case-scenario NAND but you won't see that 1,000 performance boost for 3 decades when they tap out the technology's maximum potential.
I'm rendering a project right now that has over a hundred instanced trees in a forest. So the forest is pretty much instanced, but each tree instance is around 1GB of memory and there are about 12 individual models. Then there's terrain geometry and villages and trains, and that isn't even getting into the sparse volumetric octrees for things like smoke from chimneys... anyway, long story short, 32GB is already gone *with* massive-scale instancing.
Obviously this would follow regular air traffic regulations where there are staggered altitude exclusion zones around airports, national security sites etc.
If I can buy a TB of RAM that's maybe DDR4 speeds but not DDR5 speeds but at say somewhere between NAND and DRAM pricing that would be huge.
Also incredibly useful for something like a phone where you might want to shoot 4k video. The CPU would have a hard time processing that but if you buffered to say a 64GB cache and then processed you could shoot highspeed for a minute instead of 2-3 seconds.
One of the articles says the initial products will be PCIe and NVMe.
The Toms Hardware Article is much better:
Intel indicated the new memory would connect to the host system via the PCIe bus, which is yet another reason that Intel and Micron have been vocal proponents of NVMe. The NVMe protocol was designed from the ground up for non-volatile memory technologies, and not NAND in particular. Now it is apparent that Intel and Micron were laying the groundwork for something more as they developed the new protocol.
Clearly this memory will necessitate new motherboards. But I would also love to see this on Nvidia cards.
MPEG-LA claims to have full H265 patent coverage, so it'll be decided in the courts whether MPEG-LA can defend their H265 claims against HEVC Advance. My guess is that MPEG-LA knows what they've got and HEVC Advance is making a big show for shareholders. Technicolor already put it in their last quarterly earnings report that they had massive profit potential from their HEVC patents. To me this looks like a fake-out by companies like Technicolor to pump up the value of their patents while MPEG-LA continues to do real business with reasonable terms. By the time the stockholders of Technicolor et al. realize that they aren't making anything off of their ludicrous terms, they'll have moved on to the next scam.
Not to mention bandwidth. How are you going to move 500TB to the cloud and back in a reasonable time frame? You're looking at several months even over a gigabit connection.
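Rough arithmetic behind that claim (my own back-of-the-envelope figures, assuming a perfectly saturated gigabit link with zero protocol overhead):

```python
size_bits = 500e12 * 8      # 500 TB expressed in bits
seconds = size_bits / 1e9   # at a sustained 1 Gb/s
days = seconds / 86400
print(round(days, 1))       # roughly a month and a half each way, before overhead
```

"To the cloud and back" doubles that, and real-world throughput and overhead push it toward the "several months" figure.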
What are your performance requirements. If you just need a giant dump of semi-offline storage then look into building a backblaze Storage Pod.
For about $30,000 you could build four storage pods. Speed would not be terrific. Backups are handled through RAID. If you want faster, more redundant or fully serviced your next step up in price is probably a $300,000 NAS solution. Which might serve you better anyway.
That's unfair, I use a lot of software that I pay for but want to make peripheral changes to. For instance I use a compute job scheduler that costs about $180 per compute node + maintenance. It's worth the price for the existing features, but I also want to implement esoteric features that maybe nobody else needs but will help it work better with our workflow so I have forked a number of the built in options. The company even has a github repository of the latest release so that you can get performance and reliability bug fixes from the developer while still keeping your one-off tweaks or even share with other companies who need similar fixes.
This same company eventually even bought a substantial addition I made to make it a core feature. This model works well for everybody, if you need a custom feature you just need to add that one small feature without starting from scratch and I don't have to worry about maintaining the code and add big core features like moving to a new database or extending the SDK or writing a web interface or creating a native Python library.
As the author stated, this sort of situation doesn't lend itself well to a support model since most of the users have the same needs so it's only fair that everybody pay a share of the updates and many of the studios that use the software *could* write it themselves if they were just going to develop it indirectly.
No we have two different statistics competing. I can say that 20% of the population died this year, oh my god end of days tragedy! But I can also encourage 50% of the population to procreate this year and have a child. Yay! Population growth everything is fine no need for doom and gloom!
There are more hives. But bees are still dying in record numbers. Both can be true simultaneously. If 50% of every hive died you would have 50% less bees even if the number of hives increased slightly.
I was just going to say that I am using a cpu raster renderer from 1993 on my latest project. Why? Because for simple data passes it's the fastest renderer available and it's single threaded! I can run 14 concurrent instances on one machine and render in near real-time on the CPU but with proper shading and filtering unlike GPU rendering.
It's pretty much all phong and blinn shading but that goes back to 1977 and the birth of computer graphics.
TESTING requires destruction of this kind of thing as I read the article. They will NOT be doing 100% testing to failure of their stock of struts, except to prove to themselves how bad their supplier really was.
Nobody said they would be testing to failure. You can test every unit to, say, 150% of its rated load. If the material doesn't fail until 1000% of rated load, then testing at 150% should be safe. If it doesn't fail at 150% once, it probably won't fail at 100% a hundred times. So now you're at 99 times until mean failure instead of 100.
So a song supposedly written in the last 80 years or so having a copyright still on it means that you have zero respect for the GPL?
1) If Microsoft wanted to include a secret NSA screen recording app they could hide it in the code and you would never know.
2) If you're worried about exploits then you should just worry about the fact that your GPU's drivers already offer this capability.
3) Recording your screen is the most useless way to learn things about you that I can think of. If you have access to the system to such a level that you can execute arbitrary code, it's far more effective to run a keylogger than a video screen system, which would require gigabytes of data to get meaningful information. Install your keylogger and then have millions of computers dump their keystrokes to a database that doesn't require you to sneak terabytes of data from millions of computers to your server. Then run some data mining software to identify likely username/password combinations.
This is Occam's razor shit, people. Screen capture software only requires 1-2MB. It's not like they can't be hidden. And even if they couldn't be easily hidden, they're mostly useless.
|
OPCFW_CODE
|
Over 46,000+ Business Solution Developers Find answers, ask questions, and connect with our community of business solutions developers, business owners and partners.
Your first two auto enter options are mutually exclusive. These are auto enter on creation or auto enter on modification. Your list of choices is exactly the same for each. This includes the current date, the current time, the current timestamp (a string of text that combines date and time information), the user name or the account name.
So you can set up key fields within your database to capture data the moment a record is created or the moment the record's data has been updated! It is not a bad idea to include a field within each table of your database that will do each of these.
Date, Time and Timestamp information is gathered from the operating system of the computer running FileMaker. So you can set up a field to capture the particular day and time a record was created or modified. You may have noticed that I tossed in the tidbit about your operating system. If you did notice, good for you! The date and time information for auto entry operations is captured from the person's computer. So if Joe Shmoe over in shipping has his computer set to January 1st, 1904, when he creates or edits a record, that is the data FileMaker will be using.
FYI… If you use scripts for creating or modifying records, you can use the date and time that is on the computer that is hosting the database. This comes into play more with FileMaker databases that are accessed from a FileMaker Server.
The same caveat about a user's computer settings applies to the auto enter of a user name. The user name information is gathered from the user settings of the computer's operating system or the name the user has set in their FileMaker preferences. So if Joe Shmoe in shipping has the name of “the shipping god” in his FileMaker preferences, that is what is captured with the auto enter option of user name.
Now our last auto enter option of this discussion (but just the tip of the iceberg of all your auto enter options) is auto enter by account name. Well, now that is a whole different kettle of fish. The account name information is gathered from the name the user used to open a secure FileMaker database that uses account names and passwords. This does require that you set up security properly on your FileMaker database, and this is just one more good reason to do so.
Now if you have 10 users that open the database with the user name of admin, then this feature is not that helpful. The auto enter information is always going to be admin. However, if you set up each user with their own account name, then you can truly capture information about who created and modified a FileMaker record with FileMaker auto enter options.
© 2010 – Dwayne Wright – dwaynewright.com
|
OPCFW_CODE
|
LabVIEW dot Net DataGrid Overview
A very useful and easy to use data grid to replace LabVIEW's tables and multicolumn listboxes. This datagrid supports more of the standard expected table/grid functions for sorting, filters and auto fitting content, and best of all, it supports some extended datatypes embedded within the grid.
Basically, this grid provides better table support, with more built-in features that work with standard LabVIEW data, and basic properties to turn your own string data into a friendlier grid and content display.
- Auto drawing and formatting of content
- Auto Column width sizing (auto, cell, fill, none, etc)
- Inline objects for combobox, buttons, checks and images
- Clickable Columns to sort (Asc/Dsc)
- Draggable columns to reorder them
- Minimum column widths
- Basic Events integration with LabVIEW Event structure for integration into your app
Easy to Use
OpenG is used for several variant data inspectors
See the example code. Four simple steps to use it:
1.Place a .NET DataGrid control on the front panel
2.Initialize the DataGrid Class using the .net control reference
3.Define the column parameters
Currently, there are several datatypes supported:
- Images (can set name of built in images, or add full path to custom images)
- Combobox (selection lists)
All events are automatically registered as user events for use in LabVIEW's event structure. Each event type has a "getData.vi" that can be used to convert the event class data into elements within the event handler. Using the easygrid helper functions, these events are all automatically registered, so can simply be connected to any event structure for use.
The events currently available are:
- Cell Edit Ended - This single event is used currently as a sample for callback events.
- Cell Value Changed -
- Cell Validating - This is an automatic callback event, that cancels any edits when the cell value doesn't validate. No LabVIEW events are currently generated from this.
- DataError - when formatting fails on the data for a cell on change
- UserAddedRow - returns the row
- UserDeletedRow - returns the row
Grab the latest releases to use this project here from github.
This repository was originally created for LabVIEW 2013 SP1; some updates and maintenance are now being done in LabVIEW 2014. See the Todo.md for areas where you can contribute.
The datagrid can be easily deployed into an EXE using the library such as in the example provided. It can also be used by the packed library versions for LabVIEW and kept as a dynamic plugin module. Both methods require that you include the images folder for supporting default built in images (or to add your own) so they can be used by name, instead of specifying a full path.
No support is provided directly for this add-on, since it's a free product, but you can use this community GitHub page or the LabVIEW forums for help and questions
|
OPCFW_CODE
|
Five years on, the first Cosmoparticle PhD Student and her two supervisors reflect on the experience
19 May 2021
In 2016 Constance Mahony was the first PhD student to be accepted onto the Cosmoparticle Initiative’s innovative PhD programme, which offers the student two joint supervisors from different fields.
The supervisors reflect on the experience
Constance Mahony’s PhD project From camera to cosmology with LSST weak lensing magnification embraced the disciplines of Astrophysics and High Energy Physics, a prospect that excited both her supervisors. As Andreas Korn (HEP) explained, “Trying methods from one field in another has led to important advances in physics. For example, the Higgs mechanism in particle physics has some origins in the description of superconductivity. Hence, we went into this new multi-disciplinary endeavour with some excitement.”
The plan for the PhD changed from the original direction, as Benjamin Joachimi (Astrophysics) explains: “We originally planned to split the PhD between cosmological analysis and work on the detector, close to my and Andreas’ expertise. That was too wide a gap to bridge in the end, and there were also external factors that drove us more towards the cosmology route. However, under the cosmology umbrella, Constance worked on large-scale structure as well as neutrino physics.”
Reflecting on the PhD direction change, Benjamin says “Constance’s PhD was truly interdisciplinary, albeit in a completely different way than originally anticipated”, and Andreas comments that the reason for its success was because, “with Constance we appointed an open-minded student with broad interests, who even embarked on a project on the periphery of both of her main supervisors’ expertise!”
There were challenges, and Andreas reflects that there was “also a bit of naivety on how difficult it would be. I always instil into my students that at the end of their study they will be the experts in their specific subject. The constellation of one student with two supervisors from different fields, did pose some unique challenges. Not only from a physics point of view, but also reconciling different cultures and expectations.” Echoing this view, Benjamin says “Pioneering joint supervision of a PhD student in two quite different areas of physics proved to be a challenge, but turned out beneficial for everyone involved.”
And how do the supervisors feel about the experience looking back? “We learned a lot from each other. I certainly learned a great deal about cosmology and methodology in astrophysics from Benjamin and Constance and found the experience rather enjoyable”, says Andreas. Benjamin adds: “We evidently enjoyed the experience as we have jointly since taken on another PhD student!”
Constance reflects on her PhD
Q. Looking back, was this what attracted you to applying for a CPI PhD?
I actually never applied for a CPI PhD as when I was applying it didn’t exist! I did my masters project in the High Energy Physics group at Imperial, but had also undertaken a couple of summer projects in the Imperial Astrophysics group. I was interested in both areas so I applied to a mix of PhD positions, with more of an Astrophysics skew. I applied to the UCL Astrophysics group and it was at the interview day that I learnt about the possibility of a joint position. It seemed like the perfect fit.
Q. How did it work out having two first supervisors?
It was great. Having two first supervisors gives you two people to learn from and since they come from different disciplines you are exposed to different ways of working. My two supervisors had different strengths and I learnt a huge amount from both of them. It also means there is always someone around to answer your questions.
Q. Did you encounter any challenges?
Yes of course! Being exposed to such a range of research was great at the start of my PhD but became more difficult as my PhD progressed, as it was hard to stay focussed on completing specific projects. The other challenge was that the projects I worked on in my PhD were more varied than many other PhD students. That made switching between them difficult as they didn’t easily relate to one another.
Q. How did you overcome these challenges?
Towards the end of my PhD I had to block out all the exciting work that was going on and focus on my own projects. This meant I became more disconnected from the High Energy Physics group, which was a necessary evil. I also ended up prioritising one project at a time, so that I didn’t waste so much time switching between them.
The main benefit of a CPI PhD is getting to know both research groups. Most PhD students tend to interact within their own research group, whereas I was able to meet two great groups of people. I learnt about such a large range of research just from attending both groups’ events.
I would have liked to work on more projects in my PhD, but I think this is a common issue for all PhDs. It is easy to forget how much you have learned, and how opaque things seemed at the start of a project.
Q. How has undertaking the CPI PhD enabled to you to develop your research interests and direction?
Undertaking a CPI PhD got me really interested in neutrinos, since they kept popping up in both cosmology and particle physics talks. Neutrino physics is a truly interdisciplinary area where astrophysics, cosmology and particle physics all come together. My own work has come from a cosmology angle but I am always interested in learning about developments in other areas.
Q. Where are you now, and what is your research area?
I am now a fellow at the German Centre for Cosmological Lensing based at the Ruhr-Universität Bochum. My current research focuses on using galaxy surveys such as the Kilo-Degree Survey and the Vera C. Rubin Observatory Legacy Survey of Space and Time to learn about the structure and evolution of the Universe.
Q. What would you say to someone considering applying for a CPI PHD?
Apply! I very nearly did not apply for any PhDs at all because I did not think I was good enough. It took the encouragement of a lovely personal tutor for me to even put in an application, and I can’t thank him enough. If a CPI PhD sounds remotely interesting to you, just take the first step and submit an application!
|
OPCFW_CODE
|
Divisa iT, Informática y Telecomunicaciones
Divisa Informática y Telecomunicaciones wants to offer everyone a website with an accessible design, so that the greatest possible number of people can get the information it conveys and use the services provided through it, regardless of a person's limitations or those arising from the context of use.
In order to reach this target, the WCAG 1.0 accessibility guidelines (Web Content Accessibility Guidelines version 1.0), established by the WAI (Web Accessibility Initiative) working group of the W3C (World Wide Web Consortium), are followed. In particular, this website aims to fulfil all applicable Priority 1 guidelines, all applicable Priority 2 guidelines, and a subset of the Priority 3 guidelines.
Thus, in the design of the Divisa iT website the following guidelines have been followed:
- Proper markup labels
- Accessible, intuitive and alternative navigation systems.
- Alternative descriptions for images.
- Display checks in different browsers and devices.
- Use of universal and alternative formats.
Size of text
Accessible design allows users to adjust the font size to whatever suits them best. This can be done in different ways depending on the web browser used. The menu location of this function in the most common browsers is indicated below:
- Mozilla Firefox: View > Text Size > Increase
- Internet Explorer: View > Text Size > Larger
- Internet Explorer Mobile: View > Zoom > Larger
- Konqueror: View > Zoom
- Opera: View > Zoom > %
- Safari: View > Make Text Bigger
- Google Chrome: Tools > Zoom
Together with the Web accessibility standards, the XHTML 1.0 Transitional standard has been adopted to mark up the content, and Cascading Style Sheets (CSS) for the design. These standards guarantee access to the information through any browser that follows the standards and recommendations of the W3C.
Formats of the contents
In some cases, the HTML pages may be complemented with content available in other presentation formats. In these cases, well-established formats with free plug-ins have been chosen.
In order to read PDF documents, the free Adobe Reader program must be installed. It incorporates an accessibility plug-in for people with visual difficulties: Download Adobe Reader
The Flash format is used for videos, animations and multimedia integration.
Useful information about accessibility
Below are a number of links, organized into useful categories, for finding more information about accessibility.
Accessibility technical norms and documents:
World Wide Web Consortium (W3C): the international working group that determines the main guidelines for the Web.
Web Accessibility initiative of the W3C.
HTML code, CSS Style sheets validators and Accessibility:
HTML code validator
CSS style sheet validator
TAW validator: checks various aspects of the accessibility of websites.
Contact to improve the accessibility
Help us make our website accessible: we welcome any suggestions and comments about accessibility. You can send a message through the contact link at the top of the website.
|
OPCFW_CODE
|
We all try to be the best at what we do, to give the maximum of ourselves, and to achieve the highest results. Often, in our desire to achieve the most, we try to look at problems from different angles and come up with the perfect solutions. We prepare very well and get to work. In theory, this sounds great. However, in our striving to develop the best solution, we unfortunately sometimes put pitfalls in our own path. These pitfalls take many forms; in this article, we will talk about two specific ones which, from my personal experience, are quite common. As a result, the famous rules abbreviated KISS & YAGNI were born.
Since both principles have existed since the dawn of time, most people are aware of their existence, but not of their importance. Therefore, I will mention each of them, what they represent, and the reasons why people neglect them.
KISS – Keep it simple, stupid
This principle fights against the unnecessary complication of the software. Failure to comply with it occurs when instead of finding a simple and workable solution, we come up with a much more complicated version of what is required of us.
Let’s look at the main problems we may encounter when creating unnecessarily complex software:
- More difficult to develop – it takes more energy and effort of the programmer to invent and implement it. Energy and effort that can be distributed and invested more efficiently in other tasks.
- More time for development – needless to say, that time is one of our most valuable resources.
- A more complex technical solution implies more potential problems later on – this could also be costly. At best, the functionality won’t be approved by QA and we will have to fix bugs. However, there is a much more unpleasant scenario in which our functionality is already used by real users and they are the ones who discover the problem. Then the headaches increase many times over.
- The more complex technical solution is also a sure prerequisite for more difficult maintenance – as we know, programming is not strictly following rules and regulations; there are no textbooks for solving specific problems, so solving them is mostly an act of creativity. When it comes to creativity, the human brain has vast possibilities and understandings that are individual to each person. With that in mind, when creating new functionality we must be aware that everyone who comes after us will have their own reading of it, and the more complex and abstract our work is, the more likely that person will be confused and not understand what we did.
- More complicated to change – in the dynamic world we live in, the rules of the jungle apply in full force to business as well – “It is not the strongest or the most intelligent who survives, but the one who best adapts to change.” Given this idea, it is extremely important to be able to react as quickly and efficiently as possible in case of change, and this cannot happen if our functionality is super sophisticated and two months after we created it, we can no longer understand it.
- Junior programmers suffer terribly – this is something that is relevant when software is created by more experienced programmers, and it affects the inexperienced. In my opinion, it is an extremely neglected point and something that few experienced people take into account. From my observations, few people care about the inexperienced or try to make their lives easier, but in reality the juniors are the heirs of what the elders have created, and they are the ones who will continue its development.
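As a contrived illustration (entirely my own example, not from any particular codebase): both snippets below sum the even numbers in a list, but the second buries that one line of logic under needless abstraction, with every cost listed above:

```python
# The simple solution: one readable line.
def sum_evens(numbers):
    return sum(n for n in numbers if n % 2 == 0)

# The over-engineered solution: the same logic hidden behind a generic
# "framework" nobody asked for, harder to write, read, test and change.
class PredicateFilterAggregator:
    def __init__(self, predicate, aggregator):
        self.predicate = predicate
        self.aggregator = aggregator

    def run(self, items):
        return self.aggregator(i for i in items if self.predicate(i))

assert sum_evens([1, 2, 3, 4]) == PredicateFilterAggregator(
    lambda n: n % 2 == 0, sum).run([1, 2, 3, 4]) == 6
```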
YAGNI – You ain’t gonna need it
This principle is about not developing something that no one needs right now, merely because it might one day be used by someone. Let’s look at some examples:
- Functionality that is not required, but that we believe will one day be used – when developing a solution, the programmer sometimes decides to be extremely resourceful and to look a few steps ahead of the business people. He takes on the role of an oracle and, at his own discretion and as an expression of goodwill, develops an additional or extended version of the functionality that was actually required of him. This would be great if it did not lead to the corresponding negative consequences:
- More time spent, both by him and by everyone involved in the chain afterwards.
- Potential defects.
- A higher probability that the bonus functionality will never see the light of day.
- A technical implementation that we don’t need – for example, we decide to write a helper function that converts dates from our time zone to UTC, because we need it. Along the way, we think that one day we might also need functionality that converts one date format to another. Since we have already started implementing the first helper, why not knock out the second one real quick? No – we don’t need it, and whoever does will write it when the time comes.
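A hedged sketch of the example above (the function name and dates are my own illustration): we implement only the UTC helper we actually need, and deliberately leave out the speculative format converter:

```python
from datetime import datetime, timedelta, timezone

def to_utc(local_dt: datetime) -> datetime:
    """Convert a timezone-aware datetime to UTC -- the one helper we actually need."""
    if local_dt.tzinfo is None:
        raise ValueError("expected a timezone-aware datetime")
    return local_dt.astimezone(timezone.utc)

# YAGNI: no speculative convert_format(value, fmt_in, fmt_out) helper here.
# Whoever needs it will write it when the time comes.

noon_sofia = datetime(2024, 6, 1, 12, 0, tzinfo=timezone(timedelta(hours=3)))
print(to_utc(noon_sofia))  # 2024-06-01 09:00:00+00:00
```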
- It doesn’t make much sense, but it looks cool – we’ve learned some new design patterns, or the latest fad in programming, and we decide to apply them wherever we can in our solution. It works, but as a bonus we add unnecessary complexity and functionality. This point largely overlaps with the KISS principle.
As a summary of this principle: actions contrary to it most often lead to unpleasant consequences similar to those of violating KISS – more effort and correspondingly slower development, more maintenance, and a higher risk of defects.
When and why KISS & YAGNI are not taken into account
Having considered the issues that non-compliance with these two simple principles can lead to, let’s consider the main reasons they are not followed, which differ across the stages of a programmer’s professional development. I will look at the main characters and share my observations from my years of experience in the field:
Junior software engineers
Motivated novice programmers, thirsty for knowledge and eager to prove themselves – these are undoubtedly the uncut diamonds and the future driving force of any company. Everyone wants such people on their team. They often bring a positive charge with their strong desire to face challenges and climb headlong up the ladder of professional development. I dare say that in the first years of my career I was this type of programmer. In my desire to succeed, I fell into the trap of inexperience and ego.
The lack of practical experience made me think that the more complex a solution I came up with, the more I would impress those more experienced than me – which, as I now understand, is a big mistake. On the other hand, I often tried to think one step ahead of people far more skilled than me. It took me several years of work to make a proper self-assessment, and a lot of observing and training other people to reach these conclusions.
Experienced software engineers
- Quite a large number of those who have encountered problems of varying nature and difficulty in their practice tend to apply the so-called “over-engineering” approach. An experienced programmer instinctively tries to look at the problem from all angles, to cover all possible input and output scenarios, and to anticipate every problem that might arise. As a result, the solution often ends up being much more than what is really needed – in other words, the most complex solution to the problem is implemented from the very beginning. A more appropriate approach would be:
- Take into account all the criteria the functionality must meet.
- Build a step-by-step plan, outlining all the scenarios that may need to be covered in the future.
- Implement them by priority, when they become necessary, instead of all at once.
- Some of them fail to mature and never conclude that a more complex solution to a problem is not necessarily a better one. For some, the reason is ignorance and lack of experience; for others, it is ego and the desire for self-expression and self-proof. Unfortunately, the price is paid by less experienced programmers, who have neither the skills and knowledge to understand the more complex solutions, nor the critical eye to judge whether a solution is optimal.
- Other senior programmers decide that they want to try new and more interesting solutions to the problems at hand. This is a result of long years of work in which the solutions begin to feel repetitive. It is a great idea at first, but it often crosses the line: projects with simple business requirements end up using unimaginably many modern technologies and techniques that make the programmer’s daily life more interesting, but at the expense of longer deadlines and therefore a more expensive product.
- And last but not least, some experienced programmers simply forget where they came from. They forget how long it took them, in their first month at work, to figure out what a colleague with ten years of experience had written. This is something everyone should keep in mind when writing code: both experienced and inexperienced programmers will have to maintain it. From this point of view, one should try to develop solutions that are as understandable and simple as possible, so that they do not have to be explained to every newcomer. This does not mean compromising the quality of our code so that beginners can understand it; it is all about writing it in a simple and understandable way.
I would say that following some basic simple rules would bring great benefits to our professional development. I will try to summarize some of the main ones I follow when solving a problem:
- If the solution is too complicated, try to find a simpler one by looking at the problem from another angle.
- If the solution remains complicated, always discuss it with a business representative and explain, with detailed arguments, why solving the problem will cost a lot of effort. Then discuss possible measures, which could be:
- Simplifying the requirements so that the effort meets the business’s expectations for resources and deadlines, while the results still bring the desired value.
- Breaking the problem down into smaller parts and solving it over time, prioritizing and delivering semantically separate, valuable functionalities.
- Share the problem with as many colleagues as possible and gather feedback and ideas from them. “Two heads are better than one.”
- Create a POC (proof of concept). Whenever there are many unknowns, it is good to build a simple prototype to test whether the ideas behind the solution are well thought out.
- For beginners – whenever something seems too complicated, ask more experienced teammates about the problem or do a detailed investigation on the Internet. As a beginner, you are generally given more trivial problems, which often have a trivial solution. Do not try to reinvent the wheel.
These programming principles are quite tricky to apply. Each of us should try to write easy-to-understand, simple, and easy-to-maintain code. At the same time, this must not come at the expense of the quality of the final product, which must fully meet the requirements. Good luck!
Written by Motion Software‘s Antoan Elenkov
|
OPCFW_CODE
|
2015-11-20 06:11 AM
I'm trying to install Graphite/Grafana & NetApp Harvest on a freshly installed RedHat 7.1 server. Although I find your instructions in the installation document very clear, I'm running into some issues. Can you help me with this... as Google is not my friend in this particular case...
I'm stuck at page 12 of your quick install document:
[root@xxxxxxxx tmp]# pip install 'django<1.5'
Downloading Django-1.4.22.tar.gz (7.8MB)
100% |████████████████████████████████| 7.8MB 51kB/s
Building wheels for collected packages: django
Running setup.py bdist_wheel for django
Complete output from command /usr/bin/python -c "import setuptools;__file__='/tmp/pip-build-J_JzqI/django/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d /tmp/tmpG_XWvfpip-wheel-:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-J_JzqI/django/setup.py", line 69, in <module>
raise RuntimeError('Django 1.4 does not support wheel. This error is safe to ignore.')
RuntimeError: Django 1.4 does not support wheel. This error is safe to ignore.
Failed building wheel for django
Failed to build django
Installing collected packages: django
Running setup.py install for django
Successfully installed django-1.4.22
[root@xxxxxxxx tmp]# git clone https://github.com/graphite-project/graphite-web.git
Cloning into 'graphite-web'...
fatal: unable to access 'https://github.com/graphite-project/graphite-web.git/': Peer's certificate issuer has been marked as not trusted by the user.
[root@xxxxxxxx tmp]# git clone https://github.com/graphite-project/carbon.git
Cloning into 'carbon'...
fatal: unable to access 'https://github.com/graphite-project/carbon.git/': Peer's certificate issuer has been marked as not trusted by the user.
[root@xxxxxxxx tmp]# git clone https://github.com/graphite-project/whisper.git
Cloning into 'whisper'...
fatal: unable to access 'https://github.com/graphite-project/whisper.git/': Peer's certificate issuer has been marked as not trusted by the user.
Can you perhaps point me in the right direction?
2015-11-20 01:55 PM - edited 2015-11-20 03:47 PM
These errors probably occur because your Linux box is not configured to trust the certificate authority that issued the GitHub certificates.
You can either add the certificate authorities on your Linux box, or you can disable git's validating of SSL certificates using one of these commands before you run your git clone commands.
git config --global http.sslVerify false   # Persistent
export GIT_SSL_NO_VERIFY=true              # For this shell session only
WARNING: Disabling SSL certificate verification has security implications. Without verification of the authenticity of SSL/HTTPS connections, a malicious attacker can impersonate a trusted endpoint (such as GitHub or some other remote Git host), and you'll be vulnerable to a Man-in-the-Middle Attack. Be sure you fully understand the security issues before using this as a solution.
Hope that helps.
EDIT: I can't spell.
|
OPCFW_CODE
|
[17:07] <Simes> I had Pidgin installed. On upgrading from 22.04 to 22.04.1 it uninstalled Pidgin. Is there different preferred instant messenger software recommended for Xubuntu now?
[17:08] <Simes> The documentation suggests it should be the default app, so I am surprised it got uninstalled by the update.
[17:08] <Simes> https://xubuntu.github.io/xubuntu-docs/user/C/default-apps.html#internet
[17:31] <tomreyn> Simes: it's not clear why it got uninstalled. maybe you can provide more insight on that from the apt logs at /var/log/apt - start with history.log - records are sorted by date.
[17:32] <tomreyn> 22.04.0 to .1 is not a release upgrade, but happens as part of receiving normal system updates.
[17:35] <krytarik> Simes: While I guess you meant upgrade from *20.04*, yes Pidgin got removed from the default apps sometime in between as per LP #1936417 - but if you actually still got use cases for it as per instant messaging (rather than just IRC, which we now got HexChat for), you could of course just install it back.
[17:39] <Simes> Ah. @tomreyn & @kyrtarik - thank you. I suspect you are right and I upgraded from 20.04, not 22.04. I /thought/ I was on 22.04; I see I cannot have been. (I would have put money on it - and lost it!)
[17:41] <Simes> @krytarik - I cannot even copy your name right when it's on the screen. I probably need more sleep. I'll crawl back under my stone now. :-) Thank you all.
[17:42] <krytarik> Well, I've seen it anyway.. :)
[18:56] <xu-irc19w> Wifi and Bluetooth icons disappeared after 22.04LTS upgrade...do you have a solution?
[19:19] <novanipples> Hello, world?
[19:22] <xubuntu3700> Hello, world!
[19:22] <xubuntu3700> Kbye
|
UBUNTU_IRC
|
Error creating thread: Resource temporarily unavailable
Related report:
Another crash
Also something has changed with stream thumbnails.. they pop in and out a lot more now. I get the message
Thumbnail for source "ball" is temporarily unavailable. Try again later. Details: "deadline has elapsed"
Also happened on a customer's ROV just now. They were at 60 m depth and I had to go in and restart the MCM service.
Here are the recent logs as well as terminal output.
Possible thread leak that eventually hits the OS thread limit, but which threads? From a brief investigation, it seems to be Mavlink and Signaller.
2023-11-21T09:29:36.117533Z DEBUG MavlinkCamera-9 ThreadId(1375) heartbeat_loop{component_id=100}: src/mavlink/mavlink_camera.rs:159: Heartbeat sent
(mavlink-camera-manager:79): GStreamer-WARNING **: 09:29:36.347: failed to create thread: Error creating thread: Resource temporarily unavailable
2023-11-21T09:29:36.996633Z DEBUG Signaller-1 ThreadId(06) accept_connection{peer=<IP_ADDRESS>:42002}:handle_connection{peer=<IP_ADDRESS>:42002}:handle_message{msg=Text("{\"type\":\"question\",\"content\":{\"type\":\"startSession\",\"content\":{\"consumer_id\":\"c891ea9f-18d1-42a9-9e4e-22c9c6a7c696\",\"producer_id\":\"b8bdeb99-07b0-42a6-989f-9e84105096e6\"}}}") sender=UnboundedSender { chan: Tx { inner: Chan { tx: Tx { block_tail: 0x24f3d8f0, tail_position: 2 }, semaphore: Semaphore(0), rx_waker: AtomicWaker, tx_count: 1, rx_fields: "..." } } }}:add_session{bind=BindOffer { consumer_id: c891ea9f-18d1-42a9-9e4e-22c9c6a7c696, producer_id: b8bdeb99-07b0-42a6-989f-9e84105096e6 } sender=UnboundedSender { chan: Tx { inner: Chan { tx: Tx { block_tail: 0x24f3d8f0, tail_position: 2 }, semaphore: Semaphore(0), rx_waker: AtomicWaker, tx_count: 2, rx_fields: "..." } } }}: src/stream/manager.rs:430: WebRTC session created: dc94a03d-7175-4388-9985-7ac947511dbc
2023-11-21T09:29:37.003044Z DEBUG Signaller-1 ThreadId(06) accept_connection{peer=<IP_ADDRESS>:42002}:handle_connection{peer=<IP_ADDRESS>:42002}: /cargo/registry/src/index.crates.io-6f17d22bba15001f/tungstenite-0.20.0/src/protocol/mod.rs:666: Received close frame: None
2023-11-21T09:29:37.003179Z DEBUG Signaller-1 ThreadId(06) accept_connection{peer=<IP_ADDRESS>:42002}:handle_connection{peer=<IP_ADDRESS>:42002}: /cargo/registry/src/index.crates.io-6f17d22bba15001f/tungstenite-0.20.0/src/protocol/mod.rs:683: Replying to close with Frame { header: FrameHeader { is_final: true, rsv1: false, rsv2: false, rsv3: false, opcode: Control(Close), mask: None }, payload: [] }
2023-11-21T09:29:37.004473Z DEBUG Signaller-1 ThreadId(06) /cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-tungstenite-0.20.0/src/lib.rs:355: websocket start_send error: WebSocket protocol error: Sending after closing is not allowed
2023-11-21T09:29:37.004602Z ERROR Signaller-1 ThreadId(06) src/stream/webrtc/signalling_server.rs:158: Failed repassing message from the MPSC to the WebSocket. Reason: Protocol(SendAfterClosing)
2023-11-21T09:29:37.007489Z INFO Signaller-1 ThreadId(06) src/stream/webrtc/signalling_server.rs:167: WebSocket connection closed: <IP_ADDRESS>:42002
2023-11-21T09:29:37.007693Z DEBUG Signaller-1 ThreadId(06) accept_connection{peer=<IP_ADDRESS>:42002}:handle_connection{peer=<IP_ADDRESS>:42002}: src/stream/webrtc/signalling_server.rs:199: Connection terminated: <IP_ADDRESS>:42002
(mavlink-camera-manager:79): GLib-ERROR **: 09:29:37.029: creating thread 'webrtcbin-a843d7db-be79-437c-985c-895a9c9d74c6:ice': Error creating thread: Resource temporarily unavailable
Trace/breakpoint trap (core dumped)
We need to find a way to shut down the runtime. From the local test, it appears that the heartbeat and send_message threads stop, but the runtime, when dropped, waits forever. With the runtime shutdown function we can force it to kill the OS threads.
From a simple test fetching the thread names in htop when a camera connects and disconnects, it appears that the origin of the problem is the queue thread.
code starts
camera reconnects
Can't reproduce, probably fixed by #362
|
GITHUB_ARCHIVE
|
Tooltip
Proposed Design
<Tooltip content={<span>This is my <strong>content</strong></span>}>
<Button>Save</Button>
</Tooltip>
content will accept some react component(s) and render it in the body of the tooltip.
the Tooltip component will add a hover handler to its child element and control the display of the tooltip
tether will be used for positioning
@jdrush89 thoughts?
what happens if you have multiple children of the tooltip
I don't like the idea of a list of targets. Finding a workable format to pass them in through is tricky. It's not hard to validate that a tooltip should only have one child, and i don't think attaching to multiple elements is a very common case. (Can you think of any big examples?)
@willcumbie:
can you explain what's difficult about passing props?
why enforce only one child?
the most prolific tooltip we use on reach is the "disable due to rbac" tooltip. would be nice to not need a different tooltip rendered for each item.
maybe we can take some inspiration from react-bootstrap one this one
@j00k Because in order to have something to pass in, you have to already have rendered them. If you want the tooltip to be a sibling of the element it's attached to, you're hosed. You can't have a reference to a particular element on the page until it's been rendered onto the document, so passing around references to them can't happen until rendering occurs. Which means more complexity in the lifecycle, because you have to wait for the render to finish in order to pass it to the tooltip, etc.
I feel like I must be misunderstanding what you're suggesting, because what you're suggesting doesn't make any sense to me at all.
React-Bootstrap does it in a similar way:
http://react-bootstrap.github.io/components.html#tooltips
Instead of one tooltip component, they split the logic into two parts--the tooltip itself and the OverlayTrigger which handles binding it to an element. I think that's overkill (especially since our tooltips don't have those little arrows), but it's within reason. Kept the same is the idea that attaching the tooltip should be handled by a parent component that attaches it to its child.
It seems like a pretty huge limitation to pass HTML instead of using JSX. Why not treat it like a popover where you can have a trigger that references an instance of a Tooltip and maintain consistent React behavior all the way down?
Using HTML also prevents the use of shared components inside a tooltip, right?
Ignore me. I think I get it now. Just need to make sure tooltips are rendered into a separate tree.
@willcumbie i do prefer the react-bootstrap example there
this is actually meant to closely follow the react-bootstrap example. i should have named things better in the example above. it would look more like this:
<TooltipTrigger content={<span>This is my <strong>content</strong></span>}>
<Button>Save</Button>
</TooltipTrigger>
content takes jsx, so it could take a custom component, or in the example just simple html. The trigger would render a Tooltip component on hover/focus of its child, and the content prop would be rendered into the body of the Tooltip component.
Do we want the current reach behavior where tooltips display on the mouse cursor's position? It should be doable with tether, but might be tricky to get the offset right.
@jdrush89 that's a good question that I'm struggling with right now. It is proving pretty complicated to get the tooltip to display at the mouse cursor's position. The tooltip library created by the Tether authors, and Bootstrap tooltips, are all rendered at a fixed position relative to the target instead of the mouse cursor.
canon doesn't really advise on the positioning of the tooltip, and i see a mix of behavior on the canon docs site where the example tooltip is rendered at a fixed position relative to the target, but other tooltips on the page are rendered at the mouse cursor position.
it's certainly easier to just pick a position and render it there (or let the consumer decide). @bradgignac what's your take? do you know any history about how tooltips should be positioned?
Position it relative to the element, not the mouse. :)
Tooltip implementation here: https://github.com/rackerlabs/canon-react/pull/36
review please!
merged
|
GITHUB_ARCHIVE
|
Queues not recreated after broker failure
I'm using Spring-AMQP-rabbit in one of applications which acts as a message-consumer. The queues are created and subscribed to the exchange at startup.
My problem:
When the RabbitMQ server is restarted, or removed and re-added completely, the queues are not recreated. The connection to the RabbitMQ server is restored, but not the queues.
I've tried to do the queue admin within a ConnectionListener but that hangs on startup. I guess the admin is connection-aware and should do queue management upon connection restore, shouldn't it?
My Queues are created by a service:
@Lazy
@Service
public class AMQPEventSubscriber implements EventSubscriber {
private final ConnectionFactory mConnectionFactory;
private final AmqpAdmin mAmqpAdmin;
private final ObjectMapper mObjectMapper;
@Autowired
public AMQPEventSubscriber(final AmqpAdmin amqpAdmin,
final ConnectionFactory connectionFactory,
final ObjectMapper objectMapper) {
mConnectionFactory = connectionFactory;
mAmqpAdmin = amqpAdmin;
mObjectMapper = objectMapper;
}
@Override
public <T extends DomainEvent<?>> void subscribe(final Class<T> topic, final EventHandler<T> handler) {
final EventName topicName = topic.getAnnotation(EventName.class);
if (topicName != null) {
final MessageListenerAdapter adapter = new MessageListenerAdapter(handler, "handleEvent");
final Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
converter.setJsonObjectMapper(mObjectMapper);
adapter.setMessageConverter(converter);
final Queue queue = new Queue(handler.getId(), true, false, false, QUEUE_ARGS);
mAmqpAdmin.declareQueue(queue);
final Binding binding = BindingBuilder.bind(queue).to(Constants.DOMAIN_EVENT_TOPIC).with(topicName.value());
mAmqpAdmin.declareBinding(binding);
final SimpleMessageListenerContainer listener = new SimpleMessageListenerContainer(mConnectionFactory);
listener.setQueues(queue);
listener.setMessageListener(adapter);
listener.start();
} else {
throw new IllegalArgumentException("subscribed Event type has no exchange key!");
}
}
}
Part of my handler app:
@Component
public class FooEventHandler implements EventHandler<FooEvent> {
private final UserCallbackMessenger mUserCallbackMessenger;
private final HorseTeamPager mHorseTeamPager;
@Autowired
public FooEventHandler(final EventSubscriber subscriber) {
subscriber.subscribe(FooEvent.class, this);
}
@Override
public void handleEvent(final FooEvent event) {
// do stuff
}
}
I wonder why the out-of-the-box feature – RabbitAdmin together with beans for the broker entities – doesn't fit your requirements:
A further benefit of doing the auto declarations in a listener is that if the connection is dropped for any reason (e.g. broker death, network glitch, etc.) they will be applied again the next time they are needed.
See more info in the Reference Manual.
I know, very odd, I was expecting this from a managed context. Maybe good to say, I've removed the server completely and re-added it (using Docker). Somehow the Queues did not get recreated after the engine reconnected.
Could it be that it fails since I declare the Queues in the constructors of the handlers within my engine app?
Could. As I said: you have to declare queues, exchanges and bindings between them as beans: http://docs.spring.io/spring-amqp/docs/1.6.0.RELEASE/reference/html/_reference.html#broker-configuration
That sucks pretty much, since my application registers around 75 queues dynamically (see the edit for how).
Well, you can use beanFactory.registerSingleton() from your code there on the matter. But RabbitAdmin as a bean must be there anyway.
You can register your code with the connection factory as a ConnectionListener and re-declare the queues etc whenever the connection is established. That's how the RabbitAdmin does it.
|
STACK_EXCHANGE
|
Ruby or Python?
This question is extremely subjective and open-ended. It might even sound like something I should just research for myself and make my own decision. But I'd like to put it out there and get some thoughts from others.
Long story short - I burned out with the rat race and am on a self-funded sabbatical this year. Much of it is to take a break from the corporate grind and travel around, but I also want to play around with new technologies and do some self-learning projects, to stay up to speed on programming, and well - I just love tinkering with programming, when there's no pressure!
Here's the thing: I am a lifetime C/C++/Java programmer. I'm a bit of a squiggly bracket snob since I've been working with this family of languages for my entire programming career. So I'd like to learn a language which isn't so closely syntactically related to this group. What I'm basically looking for is a language which is relatively general purpose, fun to learn, has some new concepts that are different from C++/Java, and has a good community. A secondary consideration is that it has good web development frameworks. A tertiary consideration is that it's not totally academic (read: there are real world jobs out there using it).
I've narrowed it down to Ruby or Python. My impression of Ruby is that it is extremely web oriented - that the only real application of it is as a server side scripting language for doing web stuff (mainly Ruby on Rails). I don't have much of an impression of Python at all, except that it seems to have a passionate fan base and appears to be a fairly versatile language.
TL;DR and to put it as succinctly as possible: which of these would be better for a C++/Java guy to learn to get some new perspectives on programming? And which is more open and general purpose and applicable to a wider set of applications? I'm leaning towards Ruby at the moment, but I worry to an extent that it looks like it's used as nothing but a server side web language.
"For Python I'm not so sure"? Of what? How is this relevant?
You can find a lot more on Ruby vs Python on StackOverflow. I am surprised that Lennart himself has not commented yet.
http://regebro.wordpress.com/2009/07/12/python-vs-ruby/
http://stackoverflow.com/questions/1113611/what-does-ruby-have-that-python-doesnt-and-vice-versa
@S.Lott: Sorry, wasn't clear enough. I just meant that I don't have a very detailed impression of Python yet, except that it's versatile and has a strong fan base.
Perl, of course.
This falls into the "What technology is better?" category of questions, which according to the FAQ are considered off-topic.
Don't let the fact that Ruby rose into common parlance largely because of Rails (the web application framework) fool you. It is a general-purpose programming language, and you can use it for anything that you can use any other language for.
Play around with Ruby and see if you fall in love with it. You either will or you won't. It's kind of like the Grateful Dead's music; you either love it or you can't stand it.
Ruby will stretch your brain. In many respects, it is as far from C++/Java as you can get. I come from a C and C# background, and I found Ruby's dynamicness and meta-programming power to be quite intoxicating.
That being said, Python is an absolutely outstanding language, and it'll bust you out of your curlybracketness.
Why not learn both? I use both on a regular basis: Ruby for programming with Rails and Python for working with Google AppEngine.
+1 for a Dead reference, ;)
Thanks. I'll go with Ruby first and see if I fall in love with it. :)
I'm a little late to the party, but http://www.trypython.org/ and http://tryruby.org/ are great sites to try out the languages.
I've hardly used Ruby, admittedly, but here are my impressions of Python:
when I write pseudocode to pencil out a function, I find that what I write practically is Python, and sometimes remarkably little rewriting is necessary to make it actual code. You might even skip the pseudocode altogether and just express your thoughts directly in Python
when I need to do something that seems like a common task, Python tends to have the necessary functions (at a high level) built into its standard library. For instance, early on when I wanted to open a file and scan it line by line, the answer was as simple as 'for lines in myFile: dostuff(lines)'. This I believe they call the 'batteries included' approach, and it differs from some other languages I've used where everyday operations are a lot more fiddly
Those two things stand out to me.
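The line-by-line file idiom mentioned above can be seen in a tiny sketch (my own, using a throwaway temp file):

```python
import os
import tempfile

# Write a small file, then scan it line by line -- the everyday idiom mentioned above.
path = os.path.join(tempfile.mkdtemp(), "notes.txt")
with open(path, "w") as f:
    f.write("alpha\nbeta\ngamma\n")

lines = []
with open(path) as my_file:
    for line in my_file:  # a file object is directly iterable over its lines
        lines.append(line.strip())

print(lines)  # ['alpha', 'beta', 'gamma']
```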
I think those are exactly the reasons i prefer Ruby. Ruby is more OO and has less keywords. E.g. in Ruby "[1,3,5].length()" and "(1..10).each() ..." vs Python "len([1,3,5])" and "for i in range(1,11)"
@Lenny - Ruby actually has more keywords: http://krijnhoetmer.nl/stuff/ruby/keywords/ vs. http://zetcode.com/tutorials/pythontutorial/keywords/ - for Python 2.6, that's 31 keywords for Ruby's 38. Also, 'more OO' seems to be used fairly subjectively here, since in both languages "everything is an object" applies (and even more literally with Python 3.x, as everything subclasses from object). Also, len(obj) is a shortcut for calling obj.__len__(), and for i in range also abstracts the operational details of operating generator objects. I think /equivalent but different/ is more accurate.
i've got a feeling Lenny meant you use less keywords in your typical statement, for instance 'for i in range(1,11) has 3 keywords (for, in, range). Of course, sometimes more words is better.
range is not a keyword, it is a function
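Both corrections above check out in the interpreter; a quick sketch (the class and names are my own illustration):

```python
import keyword

class Herd:
    """Toy class showing that len(obj) delegates to obj.__len__()."""
    def __init__(self, animals):
        self.animals = animals
    def __len__(self):
        return len(self.animals)

h = Herd(["pony", "mule", "ox"])
assert len(h) == h.__len__() == 3

# And range is a built-in callable, not a keyword:
assert "range" not in keyword.kwlist
assert list(range(1, 4)) == [1, 2, 3]
```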
If you are taking a whole year sabbatical, then I would suggest spending a week or two learning each and then decide for yourself which you like best. I have experience with both and in my opinion they are both so capable that you really just need to decide which one you prefer.
+1, Both seem pretty easy. Clojure, on the other hand ... is a lot of fun, but is also harder.
IMO, you should go with Python. The reason is that it is more versatile, you can use it for almost everything. Ruby is, as you noticed, more used in web development due to its web frameworks. Unlike Python, Ruby is not that good for development of gui desktop applications, numerical, statistical or image processing programs.
Can you point out what makes Ruby "not that good for development of gui desktop applications, numerical, statistical or image processing programs"? When i was programming in Ruby in 2001/2002, i was happily doing all those things.
@Lenny222. In Ruby there are no libraries such as numpy, scipy, sympy, PIL, matplotlib. Whatever numerical libraries Ruby does have are far behind those of Python. The same goes for documentation and libraries for developing GUI applications.
You say Python is more versatile and then point at libraries for why. I'll grant you that it's not as easy to do some things in Ruby for lack of a good library, but that doesn't mean Ruby itself is somehow less versatile.
@Twisol that's exactly the Why: Python being very simple (and consistent) is the reason why it's so versatile and why there are so many useful libraries (IMO).
Ruby has been around a lot longer than rails has, so let me put it out there in the world that Ruby != web, although it does that very well. There's a host of systems related things it can and does do. It just seems like the whole Rails framework swallowed up the rest of the Ruby world. And yes, I am a Ruby fan.
Python on the other hand has a lot going for it, and it has been integrated with nearly everything on Linux. That tells me it is probably fairly easy to incorporate into larger programs (compared to Ruby, Java, etc.). There's a fair amount of Ubuntu Linux infrastructure written with Python, which tells me that Python has application in systems programming. I hear its web framework is really nice, I haven't played with it yet.
That said, both Ruby and Python are equally capable languages, and you'll find them make your life a lot easier. Ruby has a lot more web heads in its community, but that's not the entirety of the community. I've used it on a number of infrastructure projects as well.
This is a special case of "Ideal Programming Language Learning Sequence" and
similar questions. What you need is not "the one perfect language", you need multiple language paradigms and multiple learning experiences to open your mind up.
I know you said you narrowed it down to Ruby and Python, but I suggest you start with Racket (a popular Scheme). It's built for learning and it will nicely stretch your brain toward functional programming, interactive programming and dynamic typing. There are no jobs (literally zero), a very small, fragmented community, and no major web framework, which is exactly why you won't get stuck on it; fry your brains for awhile and then move on.
Second, you want to learn Ruby or Python for possible jobs. I suggest you learn both. Learning the second one of those two will be much easier than the first despite their differences. As for possible jobs, my gut feeling is that there's more Ruby work because of Rails, but I know there is also some Zope work in this area. Do invest in at least one of them, but also do check out the other one at least long enough to build something small.
Honestly, you will probably learn more your first week on the job with either of them than you ever learned by yourself; they both have big ecosystems with lots of tools and culture and idioms.
tl;dr: Both and Scheme.
+1 All I saw was "the one perfect language" and "scheme". :)
I, too, came from Java/C++ background and have been programming in Python with the Django framework over the past 4 months and it's really great. Whenever I have a problem or question I can find explanations on existing posts. I can't vouch for Ruby since I haven't used it yet, but I'll definitely try it when I have some time.
I personally like how you can do stuff really quick with Python as it has a lot of useful functions built-in.
I'd vote for Ruby. I came from a .NET and C# background, tried Python first, but Ruby just charmed me =)
I'm writing a lot of system stuff in it, and some Rails dev too. It is as capable as Python in system programming, and is awesome at web. And it feels more polished to me.
|
STACK_EXCHANGE
|
Auditing Log Query
KubeSphere supports the query of auditing logs among isolated tenants. This tutorial demonstrates how to use the query function, including the interface, search parameters and detail pages.
You need to enable KubeSphere Auditing Logs.
Enter the Query Interface
The query function is available to all users. Log in to the console with any account, hover over the icon in the lower-right corner, and select Audit Log Search.
Any user has permission to query auditing logs, but the logs each user can see are different.
- If a user has permission to view resources in a project, they can see the auditing logs generated in that project, such as workload creation in the project.
- If a user has permission to list projects in a workspace, they can see the auditing logs generated in that workspace but not in its projects, such as project creation in the workspace.
- If a user has permission to list projects in a cluster, they can see the auditing logs generated in that cluster but not in its workspaces and projects, such as workspace creation in the cluster.
In the pop-up window, you can view log trends in the last 12 hours.
The Audit Log Search console supports the following query parameters:
| Parameter | Description |
| --- | --- |
| Cluster | Cluster where the operation happens. It is enabled if the multi-cluster feature is turned on. |
| Project | Project where the operation happens. It supports exact query and fuzzy query. |
| Workspace | Workspace where the operation happens. It supports exact query and fuzzy query. |
| Resource Type | Type of the resource associated with the request. It supports fuzzy query. |
| Resource Name | Name of the resource associated with the request. It supports fuzzy query. |
| Verb | Kubernetes verb associated with the request. For non-resource requests, this is the lower-case HTTP method. It supports exact query. |
| Status Code | HTTP response code. It supports exact query. |
| Operation Account | User who calls the request. It supports exact and fuzzy query. |
| Source IP | IP address from which the request originated, including intermediate proxies. It supports fuzzy query. |
| Time Range | Time when the request reaches the apiserver. |
- Fuzzy query supports case-insensitive fuzzy matching and retrieval of full terms by the first half of a word or phrase based on Elasticsearch segmentation rules.
- KubeSphere stores logs for the last seven days by default. You can modify the retention period in the ConfigMap
Enter Query Parameters
Select a filter and enter the keyword you want to search. For example, query auditing logs containing the information of
You can click the results to see the auditing log details.
|
OPCFW_CODE
|
Featured image: A comic strip explaining the difference between correlation and causation. Image source: xkcd.com
As noted in this classic science comic, people often think correlation implies causation. And this is probably every scientist’s pet peeve — it is definitely one of mine.
Let’s talk correlation first
It is a direct relationship between two things. That is, both A and B behave in a similar pattern, as in the following examples:
- Global temperature correlates with carbon dioxide in the atmosphere
- Health and social problems correlate with income inequality
- Divorce rate in Maine correlates with per capita consumption of margarine
- Per capita consumption of mozzarella cheese correlates with civil engineering doctorates awarded
You probably raised your eyebrows when reading the last two examples. So what does it mean for correlation?
On its own, it means nothing, because it is quite easy to find two factors that show similar patterns when there are a million factors out there (check this great website for more spurious correlations). Often, we also find two factors linked when, in fact, they are each linked to a common third factor and not to each other. See these news articles below:
- Sincere smiling promotes longevity. Or does reduced stress?
- Watching too much TV can kill you (early). Or does lack of exercise?
- Credit cards can make you fat. Or does increased junk food spending and consumption?
You get the picture. And unfortunately, sometimes the examples aren't as ludicrous. For example, a correlation has been claimed between autism and vaccines. Believing such spurious correlations, in spite of the overwhelming evidence against them, can do serious damage.
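The earlier point that "strong" correlations pop up by pure chance among many factors can be sketched numerically. A minimal example (the factor count and series length are arbitrary choices of mine): correlate many completely random series against one target and look at the best match.

```python
import numpy as np

rng = np.random.default_rng(0)
factors = rng.normal(size=(1000, 50))  # 1000 unrelated "factors", 50 observations each

target = factors[0]
# Absolute correlation of every other factor with the target.
corrs = [abs(np.corrcoef(target, f)[0, 1]) for f in factors[1:]]

# With enough candidates, the best match looks "strong" by chance alone,
# even though every series here is independent noise.
print(round(max(corrs), 2))
```

With a thousand candidates, the best of these spurious correlations typically lands well above what a naive reader would call "no relationship".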
Moving on to causation
Causation is what many journalists intend to show when they show correlation. It is the causal relationship between two things. That is, A causes an effect B. Examples include:
- Smoking causes lung cancer
- Human activity is responsible for global warming
- Weight training makes you stronger
Two scientists, Koch and Dale, listed criteria which, if fulfilled, would prove causation. Here is a combined and simplified version of it:
A –> M –> B
A is the agent which causes the effect B. M is the mediator which is formed by A, and which leads to B. The following must happen for causation to be proved:
- If formation of mediator M is blocked, then there should be no effect B.
- If action of mediator M is blocked, there should be no effect B.
- Mediator M is formed only in response to agent A.
- If mediator M is given, it has the same effect B.
In the above examples, A = smoking, human activity and weight training; B = lung cancer, global warming and increased strength; and M = presence of harmful chemicals, increased carbon dioxide and increased muscle mass.
However, mediator M need not be formed only in response to agent A. For example, lung cancer occurs in non-smokers too, especially in those who have been continually exposed to radon gas or asbestos.
When could correlation imply causation?
Another scientist, Hill, listed criteria which, if fulfilled, would imply a causal relationship between two factors that are correlated. This is helpful when there are no experimental data or when effects are caused indirectly, due to interaction of multiple factors. The necessary criteria are:
- Strong correlation between the agent (A) & effect (B) (though a weak association does not preclude causation)
- Consistent correlation between A & B (same results shown by different people in different places)
- Specific correlation between A & B (only A seems to cause B)
- B occurs only after exposure to A
- Typically, greater exposure to A results in a bigger incidence of B (though the opposite could also happen)
- Possible way (mechanism) through which A causes B (though this data is not always available)
- Similar correlation shown between A & B in the lab and in the population (though lack of lab, i.e., experimental, evidence does not mean the association doesn’t exist in the population)
- If experiments could be done, it should show that A leads to B
In other words, it is not easy to prove causation, especially in our complex world. Think about this the next time you are asked to believe a direct relationship, rather than a causal one.
|
OPCFW_CODE
|
NAME
amavisd-milter - sendmail milter for amavisd-new
SYNOPSIS
amavisd-milter [-fhv] [-d debug-level] [-D delivery-care-of] [-m max-conns] [-M max-wait] [-p pidfile] [-P] [-q backlog] [-s socket] [-t timeout] [-S socket] [-T timeout] [-w directory]
DESCRIPTION
amavisd-milter is a sendmail milter (mail filter) for amavisd-new 2.4.3 and above and sendmail 8.13 and above (limited support for 8.12 is provided). Unlike the older amavis-milter helper program, full amavisd-new functionality is available, including adding spam and virus information header fields, modifying Subject, adding address extensions and removing certain recipients from delivery while delivering the same message to the rest.

For more information you can visit the amavisd-milter website: //amavisd-milter.sourceforge.net/ and the SourceForge project: //sourceforge.net/projects/amavisd-milter

Options
The options are as follows:

-d debug-level
    Set the debug level to debug-level. Debugging traces become more verbose as the debug level increases. Maximum is 9.

-D delivery-care-of
    Set the AM.PDP request attribute delivery_care_of to client (default) or server. When the client method is used, amavisd-milter is responsible for forwarding the message to recipients. This method doesn't allow personalized header or body modification. When the server method is used, amavisd-new is responsible for forwarding the message to recipients and can provide personalized header and body modification. $forward_method in amavisd.conf must point to some place willing to accept mail without further checking in amavisd-new.

-f
    Run amavisd-milter in the foreground (i.e. do not daemonize). Print debug messages to the terminal.

-h
    Print the help page and exit.

-m max-conns
    Maximum number of concurrent amavisd connections (default 0 - unlimited). It must agree with the $max_servers entry in amavisd.conf.

-M max-wait
    Maximum wait for a connection to amavisd in seconds (default 300 = 5 minutes). It must be less than the sending MTA's timeout for a response to the final "." that terminates a message. sendmail's default is 1 hour, postfix's 10 minutes and qmail's 20 minutes. We suggest using less than 10 minutes.

-p pidfile
    Use this pid file (default /var/run/amavis/amavisd-milter.pid).

-P
    When amavisd-new fails, mail will be passed through unchecked.

-q backlog
    Set the incoming socket backlog used by listen(2). If it is not set or set to zero, the operating system default is used.

-s socket
    Communication socket between sendmail and amavisd-milter (default /var/lib/amavis/amavisd-milter.sock). The protocol spoken over this socket is MILTER (Mail FILTER). It must agree with the INPUT_MAIL_FILTER entry in sendmail.mc. The socket should be in "proto:address" format:
FILES
/var/run/amavis/amavisd-milter.pid
    The default process-id file.
/var/lib/amavis/amavisd-milter.sock
    The default sendmail communication socket.
/var/lib/amavis/amavisd.sock
    The default amavisd-new communication socket.
/var/lib/amavis/tmp
    The default working directory.
POLICY BANK
When the remote client is authenticated, amavisd-milter forwards this information to amavisd-new through the AM.PDP request attribute policy_bank:

SMTP_AUTH
    Indicates that the remote client is authenticated.
SMTP_AUTH_<MECH>
    Remote client authentication mechanism.
SMTP_AUTH_<MECH>_<BITS>
    The number of bits used for the key of the symmetric cipher when the authentication mechanism uses one.
EXAMPLES
Configuring amavisd-new

In the amavisd.conf file change the protocol and socket settings to:

    $protocol = "AM.PDP";                      # Use AM.PDP protocol
    $unix_socketname = "$MYHOME/amavisd.sock"; # Listen on Unix socket
    ### $inet_socket_port = 10024;             # Don't listen on TCP port

Then (re)start the amavisd daemon.

Configuring sendmail

To the sendmail.mc file add the following entries:

    define(`confMILTER_MACROS_ENVFROM', confMILTER_MACROS_ENVFROM`, r, b')
    INPUT_MAIL_FILTER(`amavisd-milter', `S=local:/var/lib/amavis/amavisd-milter.sock, F=T, T=S:10m;R:10m;E:10m')

Then rebuild your sendmail.cf file, install it (usually to /etc/mail/sendmail.cf) and (re)start the sendmail daemon.

Running amavisd-milter

This example assumes that amavisd-new is running as user amavis. It must agree with the $daemon_user entry in amavisd.conf. First create the working directory:

    mkdir /var/lib/amavis/tmp
    chmod 750 /var/lib/amavis/tmp
    chown amavis /var/lib/amavis/tmp

Then start amavisd-milter as the non-privileged user amavis:

    su - amavis -c "amavisd-milter -w /var/lib/amavis/tmp"

Limiting maximum concurrent connections to amavisd

To limit concurrent connections to 4 and fail after 10 minutes (10*60 secs) of waiting, run amavisd-milter with these options:

    su - amavis -c "amavisd-milter -w /var/lib/amavis/tmp -m 4 -M 600"

Troubleshooting

For troubleshooting, run amavisd-milter in the foreground and set the debug level appropriately:

    su - amavis -c "amavisd-milter -w /var/lib/amavis/tmp -f -d level"

where the debug levels are:

    1    Not errors, but unexpected states (connection abort etc.).
    2    Main states in message processing.
    3    All amavisd-milter debug messages.
    4-9  Milter communication debugging (smfi_setdbg 1-6).
SEE ALSO
//amavisd-milter.sourceforge.net
//www.ijs.si/software/amavisd/
//www.milter.org/developers
//www.sendmail.org
AUTHORS
This manual page was written by Petr Rehor <email@example.com> and is based on Jerzy Sakol's <firstname.lastname@example.org> initial work.
BUGS
Community mailing lists are available at: //sourceforge.net/mail/?group_id=138169
Enhancements, requests and problem reports are welcome. If you run into problems, first check the users mailing list archive before asking questions on the list. It's highly likely somebody has already come across the same problem and it has been solved. AMAVISD-MILTER(8)
|
OPCFW_CODE
|
NgTube is a YouTube player using the public YouTube API.
It's always annoying to have to stop watching a video when you want to search for other videos. The YouTube mobile application solves this problem with a small player in the bottom right of your screen. This application takes that idea and brings it to the desktop.
Real size screenshot: http://i.imgur.com/OOB5DSH.png
NgTube is not really responsive on small screens. Its main goal is to work on bigger screens with some of the functionality of the YouTube mobile application.
First, make your search in the upper navigation bar: results appear in a grid of at most 3 videos per line.
You can now select:
Once a video is selected, commands will be available.
The YouTube player's buttons and the footer ones are synchronized, except for the volume: there is no way to listen for volume changes on the YouTube player.
A single, anonymous playlist can be created by a user. Its state is stored in the browser's localStorage and restored at application launch.
The YouTube Player API may sometimes emit events in a loop, making our controls go crazy; if that happens, click on another video.
Nice looking website, loved the playlist concept :)
The UI is really cool!
Certainly an eye-catcher. A good app. (Felt glad to see a couple of my videos on the playlist as well.). Excellent effort team.
not the best concept
i like the picture in picture mode
this one, you can only love it :D
I really like the app, but I have two screens in portrait mode (at my job, for my Excel sheets) and the image doesn't resize or fit the screen.
I was about to add an example image but can't paste here.
The playlist editing is better than youtube's own :)
Really Cool, liked the idea of bringing the feature to the web, would be quite useful.
Hey, I should use this instead of the real Youtube from now on :)
Great design and usability! Very easy and quick to create a playlist.
The design is smooth, liked it :) Future tips: maybe add a login and some functionality so it would offer something more that youtube doesn't :)
Cool app! The playlist feature is quite nice.
Very nice implementation of not only the main feature of adding a smaller player while searching for new videos, but also a general set of YouTube controls and playlist functionality.
In some respects it is actually nicer and easier to use than the actual YouTube site! It's much less cluttered if you just want to view videos.
One small suggestion I would have is to resize the video back into the window if the user removes their search.
This is so cool... But the big vertical margin between the cards is irritating.
The small player on the right which keeps playing and allows the user to browse other videos is good. The left sidebar is almost unnecessary, and 2 sets of player controls is a little irritating.
Nice looking design, simple yet powerful functionality. It's similar to an app I have started to develop myself but not yet completed. :) The repeat-1 function replaces the need for endlessyoutube and repeatyoutube ;) Great work guys
i like it !!!
Cool idea. Would be even better as a browser plugin so I can browse the web at the same time as watching YT.
love this UI
Nice UI and nice concept!
Very nice! Like the idea. How about being able to share the playlist, perhaps a "youtube party" mode?
It would look even better if there was no scroll bar inside the video section. Don't know if you can remove the original player's controls; they seem a bit in the way.
It would've been cool if it used the Google's API to authenticate the user and get the user's playlist; then it would've made this entry truly useful. UI needs a bit more polishing.
Nice. Youtube should add this.
I liked it, Good job.
Just allow to add more playlists and edit the playlist name.
Cool~ Nice Design! :D
|
OPCFW_CODE
|
INN commit: trunk/innfeed (procbatch.in)
Russ_Allbery at isc.org
Thu Oct 4 19:25:35 UTC 2007
Date: Thursday, October 4, 2007 @ 12:25:34
Initialize the value of $missing (perl -w...).
Fix some typos at the same time.
procbatch.in | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
--- procbatch.in 2007-10-01 12:15:55 UTC (rev 7694)
+++ procbatch.in 2007-10-04 19:25:34 UTC (rev 7695)
@@ -7,11 +7,11 @@
# File: procbatch.pl
# RCSId: $Id$
-# Description: Take a file of the form generated by innd in the out.going
-# directory ("Wnm*") and separate it into tape files for inn.
+# Description: Take a file of the form generated by innd in the outgoing
+# directory ("Wnm*") and separate it into tape files for INN.
# Thanks to Clayton O'Neill <coneill at premier.net> for serious speed
@@ -37,30 +37,31 @@
-c to check pathnames of articles before storing them
-s dir to specify where the news articles are
- -m to have $0 move the new files to the backlog directory.
+ -m to have $0 move the new files to the backlog directory
-t dir to specify the backlog directory ($tapeDir)
-u to unlink the input files when finished
-v for verbosity
- -q quiet mode; only display error messages. Good for cron jobs.
+ -q quiet mode: only display error messages; good for cron jobs
-$0 will take an inn funnel file (normally a file in
+$0 will take an INN funnel file (normally a file in
$outGoing), or an innfeed ``dropped'' file,
which is presumed to be of the format:
pathname message-id peer1 peer2 peer3 ...
and will break it up into files peer1.tmp peer2.tmp peer3.tmp... Each of
-these files wil be of the format:
+these files will be of the format:
-that is the same as innfeed's backlog file format. Simply rename these files
+that is the same as innfeed's backlog file format. Simply rename these files
to peer1 peer2 peer3 in a running innfeed's backlog directory and they will be
-picked up automatically and processed by innfeed. Use the '-m' flag and
+picked up automatically and processed by innfeed. Use the '-m' flag and
they'll be moved automatically.
$opt_u = $opt_h = ""; # shut up, perl -w
+$missing = 0;
&Getopts ("he:t:s:d:cvumq") || die $usage ;
die $usage if ( $opt_h ) ;
|
OPCFW_CODE
|
Total logic / data decoupling with SQLAlchemy
I am trying (purely as an exercise) to build a graphical piece of software able to draw some charts representing market data and do some simple analysis.
I am using Python3.5 + SQLAlchemy1.1 for persistence (and PyQt but not relevant here).
To make it simple, I want to have a class 'Stock' representing a trading stock and a class 'Exchange' representing a market exchange (Paris market, London market, etc.). Each Stock has an Exchange.
By using Declarative Mapping, this would give the following:
from sqlalchemy import Column, ForeignKey, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
Base = declarative_base()
class Exchange(Base):
__tablename__ = 'exchange'
ticker = Column(String(2), primary_key=True)
name = Column(String(20))
class Stock(Base):
__tablename__ = 'stock'
ticker = Column(String(5), primary_key=True)
exchange_ticker = Column('exchange', String(2), ForeignKey('exchange.ticker'),
primary_key=True)
name = Column(String(20), unique=True)
exchange = relationship(Exchange)
But as I said, this is purely an exercise, and I would like to make it more 'decoupled': I would like to totally separate the 'business logic' from the 'persistence layer' (I am using quotes because I see these expressions everywhere and I think I got their meaning, but I am not a professional developer so I may not see the whole thing).
Ultimately, I would like to separate the business logic and the persistence layer in 2 modules (or packages, same), potentially 'gitsubmoduling' them, and be able to install and launch the logic module without any persistence (without even installing sqlalchemy / psycopg2 for PostgreSQL, etc.). This is a good practice, or at least a good objective, right?
So I had a deeper look at the SQLAlchemy docs and I found the concept of Classical Mapping (handled automatically by Declarative Mapping). This would give:
core.py (purely logic, possibly functional without any persistence)
class Exchange:
def __init__(self, ticker: str, name: str):
self.ticker = ticker
self.name = name
class Stock:
def __init__(self, ticker: str, exchange: Exchange, name: str=None):
self.ticker = ticker
self.exchange = exchange
self.name = name
# Some business logic: correlations, data analysis, etc.
model.py (mapping logic to database)
from sqlalchemy import Column, ForeignKey, Integer, MetaData, String, Table, UniqueConstraint
from sqlalchemy.orm import mapper
from core import Exchange, Stock
metadata = MetaData()
stock = Table(
'stock', metadata,
Column('ticker', String(5), primary_key=True),
Column('exchange', String(2), ForeignKey('exchange.ticker'), primary_key=True),
Column('name', String(20))
)
exchange = Table(
'exchange', metadata,
Column('ticker', String(2), primary_key=True),
Column('name', String(20))
)
mapper(Stock, stock)
mapper(Exchange, exchange)
This does not work: when I try to insert some fixtures:
sqlalchemy.exc.InterfaceError: (sqlite3.InterfaceError) Error binding parameter 2 - probably unsupported type. [SQL: 'INSERT INTO stock (ticker, name, exchange) VALUES (?, ?, ?)'] [parameters: ('CAC', None, <core.Exchange object at 0x108bd0fd0>)]
As you can see, it tries to insert an Exchange object directly. Therefore, I tried (with different options and configs):
mapper(Stock, stock, properties={
'exchange': relationship(Exchange)
})
But SQLAlchemy raises an error:
sqlalchemy.exc.ArgumentError: WARNING: when configuring property 'exchange' on Mapper|Stock|stock, column 'exchange' conflicts with property '<RelationshipProperty at 0x1096d5cf8; exchange>'.
It seems that whatever I try (I tried a lot of other things: exclude_properties, naming the relationship, etc.), SQLAlchemy wants a relationship object in my Stock (which makes sense...). So I am wondering if that level of decoupling is really reachable.
My questions are:
Am I missing something obvious?
Am I wrong to want that level of 'decoupling'?
Am I facing the Object-relational impedance mismatch which is referenced in some places (as here, here or here)?
You should rename your column from exchange to exchange_ticker and use the relationship as you declared it.
Sorry for the delay, but it worked, thanks. Sorry to be annoying, but would you know any way to keep the name exchange?
You certainly can have two dogs with the same name, but that would be very confusing for the dogs. And it is not possible to have two attributes with the same name in the same scope in any programming language (that I know of).
So you either use exchange for the column containing the ticker value (in which case you need to pick a different name for the relationship to the actual Exchange), or you use exchange for the relationship and rename the column to exchange_ticker. The latter is the most common choice in my practice.
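Putting the accepted fix together, here is a minimal runnable sketch based on the classical-mapping code from the question. The try/except import is my addition, covering newer SQLAlchemy releases where classical mapping was renamed to imperative mapping; everything else follows the question's core.py/model.py split.

```python
from sqlalchemy import Column, ForeignKey, MetaData, String, Table, create_engine
from sqlalchemy.orm import relationship, sessionmaker

try:
    # SQLAlchemy <= 1.3, as used in the question
    from sqlalchemy.orm import mapper
except ImportError:
    # SQLAlchemy 1.4+/2.0: classical mapping became "imperative" mapping
    from sqlalchemy.orm import registry
    mapper = registry().map_imperatively

# Plain business classes, no persistence knowledge (core.py)
class Exchange:
    def __init__(self, ticker, name):
        self.ticker = ticker
        self.name = name

class Stock:
    def __init__(self, ticker, exchange, name=None):
        self.ticker = ticker
        self.exchange = exchange
        self.name = name

# Mapping layer (model.py)
metadata = MetaData()

exchange_table = Table(
    'exchange', metadata,
    Column('ticker', String(2), primary_key=True),
    Column('name', String(20)),
)

stock_table = Table(
    'stock', metadata,
    Column('ticker', String(5), primary_key=True),
    # renamed column: the bare name 'exchange' is now free for the relationship
    Column('exchange_ticker', String(2), ForeignKey('exchange.ticker'),
           primary_key=True),
    Column('name', String(20)),
)

mapper(Exchange, exchange_table)
mapper(Stock, stock_table, properties={'exchange': relationship(Exchange)})

# Quick smoke test with an in-memory SQLite database
engine = create_engine('sqlite://')
metadata.create_all(engine)
session = sessionmaker(bind=engine)()

paris = Exchange('PA', 'Paris')
session.add(Stock('CAC', paris, name='CAC 40'))  # cascades to paris
session.commit()
print(session.query(Stock).one().exchange.name)  # Paris
```

With the column renamed, the relationship no longer collides with a mapped column, and core.py stays importable without SQLAlchemy installed.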
|
STACK_EXCHANGE
|
Tags: browse articles tagged «Zend Framework»
A huge part of a web application is usually the interaction with the SQL database. This is why I want as little work as possible for connecting, escaping values, getting the right tables and so on in PHP. But it should stay simple and allow modular approaches. Therefore I'm using some nested APIs for doing queries easily:
The very first thing I am using is PDO. It can handle many RDBMSs, but most of the time I am using MySQL or SQLite. By using PDO as an API for the following layers I can make sure most of the code will work for many RDBMSs. PDO even simplifies transactions and prepared statements. Here's some sample PHP code using PDO:

$pdo = new PDO('mysql:host='.$host.';dbname='.$db, $user, $passwort);
$pdo->exec('UPDATE test SET foo="bar" WHERE id=4');
$statement = $pdo->prepare('SELECT * FROM blogeintraege WHERE id=:id');
$statement->bindValue(':id', 3, PDO::PARAM_INT);
$statement->execute();
$data = $statement->fetchAll();
The next layer is a class that holds a MySQL database connection (a PDO object) and offers some simple functions for doing, e.g., a simple prepared statement. Instead of binding each value manually, you can throw an array in.
It also includes a cache, if you want to run statements more than once. It can append a prefix to all queried tables and checks dynamically inserted tables for validity to avoid SQL injections and MySQL errors. It is used like this:
$res = $db->sql("SELECT * FROM blogeintraege");
$res = $db->sql(
    "SELECT * FROM #test WHERE id=:id",
    array('id' => $id),
    array('id' => PDO::PARAM_INT),
    array('test' => 'blogeintraege'),
    array('limits' => array(0, $l), 'buffered' => false)
);
For one array element this does not look too tiny, but the more values are bound, the more useful it gets. And it is very useful if you already have your values in an array.
Note that nearly everything is optional. The table array can contain more tables; for example you can have an array of tables for different languages, if they are in different tables. The bind types don't need to be specified either. You can even leave out everything except the query, as shown in the first line of code. The result will by default be returned as a nice array (the GROUP_CONCAT fields are array'ed too), but you can use all other PDO fetch types.
This layer follows a rather functional approach, so I needed another layer for accessing the central sql() function in an OOP manner. This should avoid some runtime errors, and you can modify the SQL in a modular system.
So I created a wrapper object that holds a pointer to the database and constructs the parameters for sql(). This comes in handy as more and more optional parameters are added.
The PDO simplifier has a method to build such statement objects, called sqlO(). This is how the wrapper is used:

$db->sqlO('INSERT INTO blogtaglinks SET ##,type=3')
    ->setSet(array('ID_tag' => $lasttagid, 'ID_entry' => $id))
    ->exec();
$res = $db->sqlO("SELECT * FROM #test WHERE id=:id")
    ->setData(array('id' => $id))
    ->setDataTypes(array('id' => PDO::PARAM_INT))
    ->setTables(array('test' => 'blogeintraege'))
    ->setLimits(0, $l)
    ->setBuffered(false)
    ->exec();
As you can see, it is a little more code, but the code is pretty self-explanatory and now one can build the sets and the other parameters as arrays and then include them easily in the statements.
A bit different: Zend Framework's
The next step would be to build queries with a single API. This is a feature implemented by the Zend Framework, where you can build your SQL with some API functions and it will even work across various databases:
$select = $db->select()
    ->from('blogeintraege', array('id', 'Titel'))
    ->where('id < ?', $id)
    ->order('id DESC')
    ->limit(0, 10);
Well doesn't that look nice?
|
OPCFW_CODE
|
package org.zopa.loanprovider;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;
import java.io.IOException;
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import static org.assertj.core.api.Assertions.assertThat;
import static org.zopa.loanprovider.HelpersTest.*;
/**
* Test loan calculator, data processor, file parsing, offers comparators.
*/
public class LoanProviderTest {
@Rule
public final ExpectedException exception = ExpectedException.none();
@Test
public void testLoanBaseCase() throws IOException {
List<LenderData> marketData = new ArrayList<LenderData>() {{
add(new LenderData("Martin", BigDecimal.valueOf(0.075), BigDecimal.valueOf(650)));
add(new LenderData("Peter", BigDecimal.valueOf(0.069), BigDecimal.valueOf(480)));
add(new LenderData("John", BigDecimal.valueOf(0.071), BigDecimal.valueOf(520)));
}};
Optional<Loan> loan = Main.getLoan(marketData, new BigDecimal(1000));
assertThat(loan.isPresent()).isTrue();
BigDecimal rate = roundRate(loan.get().getRate());
BigDecimal monthlyPayment = roundPayment(loan.get().getMonthlyRepayment());
BigDecimal totalRepayment = roundPayment(loan.get().getTotalRepayment());
assertThat(rate).isEqualTo(BigDecimal.valueOf(7.0));
assertThat(monthlyPayment).isEqualTo(BigDecimal.valueOf(30.88));
assertThat(loan.get().getRequestedAmount()).isEqualTo(BigDecimal.valueOf(1000));
assertThat(totalRepayment).isEqualTo(BigDecimal.valueOf(1111.65));
}
@Test
public void testInvalidAmountInputResultEmpty() throws IOException {
Optional<Loan> loan = Main.getLoan(new ArrayList<>(), new BigDecimal(113));
assertThat(loan).isEmpty();
}
@Test
public void testEmptyMarket() {
List<LenderData> marketData = new ArrayList<>();
LoanProcessor loanProcessor = new LoanProcessor(marketData, new FrenchAmortizationMethod());
assertThat(loanProcessor.findLoanFor(BigDecimal.valueOf(800), 36)).isEmpty();
}
@Test
public void testNotEnoughOffer() {
BigDecimal amountRequestedExpected = BigDecimal.valueOf(8000);
List<LenderData> marketData = new ArrayList<LenderData>() {{
add(new LenderData("Martin", BigDecimal.valueOf(0.05), BigDecimal.valueOf(3000)));
}};
assertLoanFor(marketData, amountRequestedExpected);
}
@Test
public void testJustOneLenderWithOffer() {
BigDecimal rateExpected = BigDecimal.valueOf(5.0);
BigDecimal amountRequestedExpected = BigDecimal.valueOf(1500);
List<LenderData> marketData = new ArrayList<LenderData>() {{
add(new LenderData("Martin", BigDecimal.valueOf(0.05), BigDecimal.valueOf(3000)));
}};
assertLoanFor(marketData, rateExpected, amountRequestedExpected);
}
@Test
public void testMoreThanOneLenderWithOffer() {
BigDecimal rateExpected = BigDecimal.valueOf(2.2);
BigDecimal amountRequestedExpected = BigDecimal.valueOf(1500);
List<LenderData> marketData = new ArrayList<LenderData>() {{
add(new LenderData("Martin", BigDecimal.valueOf(0.03), BigDecimal.valueOf(400)));
add(new LenderData("Peter", BigDecimal.valueOf(0.06), BigDecimal.valueOf(60)));
add(new LenderData("John", BigDecimal.valueOf(0.02), BigDecimal.valueOf(1200)));
}};
assertLoanFor(marketData, rateExpected, amountRequestedExpected);
}
@Test
public void testMoreThanOneLenderWithOfferWithLowBalance() {
BigDecimal rateExpected = BigDecimal.valueOf(2.0);
BigDecimal amountRequestedExpected = BigDecimal.valueOf(15000);
List<LenderData> marketData = new ArrayList<LenderData>() {{
add(new LenderData("Martin", BigDecimal.valueOf(0.03), BigDecimal.valueOf(0.5)));
add(new LenderData("Peter", BigDecimal.valueOf(0.06), BigDecimal.valueOf(0.5)));
add(new LenderData("John", BigDecimal.valueOf(0.02), BigDecimal.valueOf(14999)));
}};
assertLoanFor(marketData, rateExpected, amountRequestedExpected);
}
@Test
public void testMoreThanOneLenderWithOfferFirst() {
BigDecimal rateExpected = BigDecimal.valueOf(3.0);
BigDecimal amountRequestedExpected = BigDecimal.valueOf(1600);
List<LenderData> marketData = new ArrayList<LenderData>() {{
add(new LenderData("Martin", BigDecimal.valueOf(0.03), BigDecimal.valueOf(2000)));
add(new LenderData("Peter", BigDecimal.valueOf(0.06), BigDecimal.valueOf(500)));
add(new LenderData("John", BigDecimal.valueOf(0.05), BigDecimal.valueOf(1499)));
}};
assertLoanFor(marketData, rateExpected, amountRequestedExpected);
}
@Test
public void testSeveralLendersButNotEnough() {
BigDecimal amountRequestedExpected = BigDecimal.valueOf(14000);
List<LenderData> marketData = new ArrayList<LenderData>() {{
add(new LenderData("Martin", BigDecimal.valueOf(0.03), BigDecimal.valueOf(40)));
add(new LenderData("Peter", BigDecimal.valueOf(0.06), BigDecimal.valueOf(60)));
add(new LenderData("John", BigDecimal.valueOf(0.02), BigDecimal.valueOf(120)));
add(new LenderData("Max", BigDecimal.valueOf(0.03), BigDecimal.valueOf(4)));
add(new LenderData("Mary", BigDecimal.valueOf(0.06), BigDecimal.valueOf(600)));
add(new LenderData("Anna", BigDecimal.valueOf(0.02), BigDecimal.valueOf(500)));
}};
assertLoanFor(marketData, amountRequestedExpected);
}
@Test
public void testSeveralLendersButAlmostOffer() {
BigDecimal amountRequestedExpected = BigDecimal.valueOf(1000);
List<LenderData> marketData = new ArrayList<LenderData>() {{
add(new LenderData("Martin", BigDecimal.valueOf(0.03), BigDecimal.valueOf(500)));
add(new LenderData("Peter", BigDecimal.valueOf(0.06), BigDecimal.valueOf(499)));
}};
assertLoanFor(marketData, amountRequestedExpected);
}
@Test
public void testSeveralLendersWithAlmostNotEnough() {
BigDecimal rateExpected = BigDecimal.valueOf(5.0);
BigDecimal amountRequestedExpected = BigDecimal.valueOf(1000);
List<LenderData> marketData = new ArrayList<LenderData>() {{
add(new LenderData("Martin", BigDecimal.valueOf(0.05), BigDecimal.valueOf(500)));
add(new LenderData("Peter", BigDecimal.valueOf(0.05), BigDecimal.valueOf(501)));
}};
assertLoanFor(marketData, rateExpected, amountRequestedExpected);
}
@Test
public void testLoanComparator() {
List<LenderData> data = new ArrayList<LenderData>() {{
add(new LenderData("Martin", BigDecimal.valueOf(0.06), BigDecimal.valueOf(4320)));
add(new LenderData("Peter", BigDecimal.valueOf(0.03), BigDecimal.valueOf(608)));
add(new LenderData("Anna", BigDecimal.valueOf(0.05), BigDecimal.valueOf(1200)));
}};
assertThat(data.get(0).getName()).isEqualTo("Martin");
assertThat(data.get(1).getName()).isEqualTo("Peter");
assertThat(data.get(2).getName()).isEqualTo("Anna");
data.sort(LoanComparator.byRate);
assertThat(data.get(0).getName()).isEqualTo("Peter");
assertThat(data.get(1).getName()).isEqualTo("Anna");
assertThat(data.get(2).getName()).isEqualTo("Martin");
}
@Test
public void testFrenchAmortizationMethod() {
AmortizationMethod amortizationMethod = new FrenchAmortizationMethod();
BigDecimal rate = new BigDecimal(0.05);
BigDecimal amount = new BigDecimal(1000);
BigDecimal payment = amortizationMethod.calculateMonthlyPayment(rate, amount, 36);
assertThat(payment.setScale(2, BigDecimal.ROUND_CEILING)).isEqualTo(BigDecimal.valueOf(29.98));
}
}
function splitOnUnprotected(s, splitters, saveSplitters, settings) {
if (!isString(s))
console.warn("non-string", s);
if (s.length === 0)
return [];
if (!settings)
settings = {};
if (typeof splitters === 'string' || splitters instanceof String)
splitters = [splitters];
var sections = [];
var lastSplitterEnd = 0;
settings.onChar = function(c, index, depth) {
// If at an unprotected level,
// *and* we're no longer in a splitter (for ambiguous splitters like "::" and ":")
// This uses a greedy algorithm, so might miss 'optimal' splits
if (depth === 0 && index >= lastSplitterEnd) {
var splitter = undefined;
// Find the longest valid splitter
var maxLength = 0;
for (var i = 0; i < splitters.length; i++) {
var s2 = splitters[i];
if (s.startsWith(s2, index) && s2.length > maxLength) {
splitter = {
splitterIndex: i,
index: index,
splitter: s2,
}
maxLength = s2.length;
}
}
if (splitter) {
var s3 = s.substring(lastSplitterEnd, index);
sections.push(s3);
lastSplitterEnd = index + splitter.splitter.length;
// Add the splitter to the array if we want to record it
if (saveSplitters) {
sections.push(splitter);
}
}
}
};
parseProtected(s, settings);
sections.push(s.substring(lastSplitterEnd));
return sections;
}
/*
* Get indices of unprotected
* Find each unprotected query
* settings
* simplifiedResults: return only the indices, not which query was found
*/
function getUnprotectedIndices(s, queries, settings) {
if (settings === undefined)
settings = {};
// wrap single queries in an array
if (typeof queries === 'string' || queries instanceof String)
queries = [queries];
var indices = [];
var start = 0;
settings.onChar = function(c, index, depth) {
if (index >= start && depth === 0) {
var found = [];
// Check which queries can be found from this index
var maxLength = 0;
for (var i = 0; i < queries.length; i++) {
// Record all queries found
// for ambiguous queries ("startling" for "s", "star", "start", etc),
// choose the first, unless settings say otherwise
if (s.startsWith(queries[i], index)) {
found.push([i, queries[i]]);
// skip the rest if we're prioritizing just by first found
if (!settings.prioritizeLongest && !settings.getAll)
break;
}
}
if (found.length > 0) {
if (settings.prioritizeLongest) {
found.sort(function(a, b) {
return a[1].length - b[1].length;
})
}
if (!settings.getAll) {
found = found.slice(0, 1);
start = index + found[0][1].length;
}
for (var i = 0; i < found.length; i++) {
// add to the index list
if (settings && settings.simplifiedResults) {
indices.push(index);
} else {
indices.push({
index: index,
query: found[i][1],
queryIndex: found[i][0],
});
}
}
}
}
};
parseProtected(s, settings);
return indices;
}
// UTILITY Hero function
// Runs all the parsing stuff
// Example input:
// #stuff.foo(['test#bar#'])#
// #stuff.foo('feelin' #emotion#')#
// feelin' )'( :-} #foo.test('foo {#protectThis#} :-)')#
// Outermost layers can ignore non-starting symbols, like ' " ( {
// But inner layers have to watch them
// parseProtected("feelin' )'( :-} #foo.test('foo {#protectThis#} :-)')#");
// parseProtected("feelin' \\# )'( :-} #foo.test('foo {#protectThis#} :-)')#");
function parseProtected(s, settings) {
if (!settings)
settings = {};
// Defaults
var openChars = ["[", "#", "{", "(", "'", '"'];
var closeChars = ["]", "#", "}", ")", "'", '"'];
var firstLevelIgnore = [""];
if (settings.firstLevelIgnore !== undefined)
firstLevelIgnore = settings.firstLevelIgnore;
if (settings.openChars !== undefined)
openChars = settings.openChars;
if (settings.closeChars !== undefined)
closeChars = settings.closeChars;
var sectionStack = [];
var topSection;
var escaped = false;
for (var i = 0; i < s.length; i++) {
// Ignore the escape chars
if (escaped) {
escaped = false;
} else {
var c = s.charAt(i);
// Deal with escape char
if (c === "\\")
escaped = true;
else {
// Top priority: can we close the current top section?
if (topSection !== undefined && topSection.closeChar === c) {
// Close this section
topSection.end = i;
topSection.inner = s.substring(topSection.start + 1, topSection.end);
// console.log(tabSpacer(topSection.depth) + topSection.openChar + " close " + inQuotes(topSection.inner));
if (settings.onCloseSection)
settings.onCloseSection(topSection);
// Pop it off the stack and set the new top section
sectionStack.pop();
topSection = sectionStack[sectionStack.length - 1];
} else {
// Is this character an opening character?
var openIndex = openChars.indexOf(c);
// Is this also a closing character? what does it close?
var closeIndex = closeChars.indexOf(c);
// If we are at the base level,
// ignore opening and closing except for the appropriate chars
//console.log(firstLevelIgnore.indexOf(c), c, sectionStack.length);
if (sectionStack.length === 0 && firstLevelIgnore.indexOf(c) >= 0) {
// console.log("Ignoring the " + c);
closeIndex = -1;
openIndex = -1;
}
// Regardless, do something with it
if (settings.onChar)
settings.onChar(c, i, sectionStack.length, s);
// If its not an opening character,
// but it *should* close something other than the current section,
// then this is an error
if (openIndex < 0 && closeIndex >= 0) {
if (settings.onError)
settings.onError("Unmatched " + inQuotes(c) + " at " + i);
}
// open a new section
if (openIndex >= 0) {
// Create a new section
topSection = {
openChar: openChars[openIndex],
closeChar: closeChars[openIndex],
start: i,
depth: sectionStack.length + 1,
}
//console.log(tabSpacer(topSection.depth) + topSection.openChar + " new section ");
sectionStack.push(topSection);
if (settings.onOpenSection)
settings.onOpenSection(topSection);
}
}
}
}
}
for (var i = 0; i < sectionStack.length; i++) {
if (settings.onError) {
settings.onError("Unmatched " + inQuotes(sectionStack[i].openChar) + " at " + sectionStack[i].start);
}
}
if (settings.onEnd)
settings.onEnd(sectionStack.length);
}
// Wrap a string in double quotes for error messages (used by parseProtected)
function inQuotes(s) {
    return '"' + s + '"';
}
function isInQuotes(s) {
    return (s.charAt(0) === "'" && s.charAt(s.length - 1) === "'") || (s.charAt(0) === '"' && s.charAt(s.length - 1) === '"');
}
function isInCurlyBrackets(s) {
return (s.charAt(0) === "{" && s.charAt(s.length - 1) === "}")
}
function isString(s) {
return (typeof s === 'string' || s instanceof String);
}
If you are looking for great books to learn and understand Artificial Intelligence, then you have come to the right place. AI can seem like a galaxy of never-ending possibilities, and for gaining knowledge about it, what could be better than well-written, clearly explained books? Books are a great source of knowledge and are highly valued by those who prefer them. So let's not waste any more time and get to the list below.
Best Books to Learn Artificial Intelligence
This is one of the most interesting books written on artificial intelligence. You may be surprised to know that it is one of Elon Musk's favorite books, and one that motivated him to work on a new project focusing on the development of advanced AI. The book deals only lightly with coding, but it will still give you a solid grounding in the basics of AI and a sense of its future possibilities.
Artificial Intelligence: A Modern Approach, written by Stuart J. Russell and Peter Norvig, is a very popular book. It is widely used as a university textbook on Artificial Intelligence and is followed by more than 1,300 universities and colleges across the world. The content is composed mainly for undergraduate students but can also be used for graduate studies.
Here, I would like to mention a little about the approach this book takes. Initially, it demonstrates the action-taking ability of AI systems; after that, it focuses on the kind of thinking required for games like chess, followed by reasoning and decision-making in the presence of uncertainty in the environment.
Moving on, it discusses ways of generating the knowledge required by the decision-making components, and finally concludes with the past and future of AI, discussing what AI really is and why it has succeeded to some degree.
Those who know a little about deep learning will recognize that Deep Learning with R is a good place to dive in. It was written by François Chollet, a Google AI researcher, and is indeed a good book for anyone with enthusiasm for deep learning.
It serves as a very good instruction manual for Keras, and some of the topics covered include deep learning from first principles, setting up your own deep-learning environment, image classification and generation, and deep learning for text and sequences. To get the most out of the book, you should have an intermediate skill level in R programming.
AND HERE PANDORA'S BOX OPENS AND YOUR SINS WILL VANISH YOUR SOUL'S EXISTENCE. Nah, just joking; don't be put off by its scary title, it is genuinely a very good book to read. Written by James Barrat, it explains how governments and many private enterprises around the globe are researching Artificial General Intelligence (AGI) and investing heavily in it.
It is believed that if this goal is achieved then, as Elon Musk said in one of his interviews, these machines will have survival drives similar to ours. Isn't that scary? A machine might use or even kill us for its own survival. But don't worry: if we humans can make them, then we can also restrain them from doing so.
When it comes to AI and machine learning, there is no doubt that Python plays a crucial role in the development of algorithms and models. In fact, most developers prefer Python for building AI-powered applications because of its easy syntax. This book provides guidance on developing machine learning systems: transforming raw data into useful information, classifying objects, and performing regression analysis. It also contains plenty of examples for clear demonstration, and by the end you will be able to build your own machine learning system.
So, these were the books I recommend to increase your arsenal in Artificial Intelligence. If you think I missed something, don't hesitate to drop your valuable suggestions below.
import math
from math import atan2, pi
from random import Random
import turtle
from reportlab.platypus import SimpleDocTemplate, Paragraph
from reportlab.platypus.flowables import Flowable, Spacer
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib.units import inch
from maze import MazePage, generateMazePair
from pdf_turtle import PdfTurtle
class PdfMaze(object):
ROOT2 = math.sqrt(2)
def __init__(self, page):
self.page = page
    def draw(self, t):
        """ Draw this page of the maze using turtle graphics.

        @param t: a turtle graphics object
        """
screen_width = t.window_width()
screen_height = t.window_height()
width, height = self.page.size
cell_size = min(screen_width/width, screen_height*0.9/height)
t.penup()
t.goto(-cell_size*width/2, -cell_size*height/2)
for y in range(height):
for x in range(width):
self.drawCell(x, y, t, cell_size)
t.fd(cell_size)
t.back(cell_size*width)
t.left(90)
t.fd(cell_size)
t.right(90)
def drawCell(self, x, y, t, cell_size):
margin = cell_size/5
symbol_size = cell_size - 2 * margin
cell = self.page[x][y]
old_pen = t.width()
if (x, y) in (self.page.start, self.page.goal):
t.pendown()
if (x, y) == self.page.goal:
t.width(old_pen * 4)
for _ in range(4):
t.fd(cell_size)
t.left(90)
t.width(old_pen)
t.penup()
t.left(45)
t.fd(margin * PdfMaze.ROOT2)
t.right(45)
exits = cell.exits
if not exits:
self.drawX(t, symbol_size)
else:
self.drawArrow(t, symbol_size, exits)
t.left(45)
t.back(margin * PdfMaze.ROOT2)
t.right(45)
def drawX(self, t, size):
t.left(45)
t.pendown()
t.fd(size * PdfMaze.ROOT2)
t.penup()
t.right(45)
t.back(size)
t.right(45)
t.pendown()
t.fd(size * PdfMaze.ROOT2)
t.penup()
t.left(45)
t.back(size)
def drawArrow(self, t, size, exits):
xdir = ydir = 0
for dx, dy in exits:
xdir += dx
ydir += dy
if xdir and ydir:
arrow_size = size * PdfMaze.ROOT2
else:
arrow_size = size
arrow_dir = atan2(ydir, xdir)/pi*180
t.left(45)
t.fd(size/2 * PdfMaze.ROOT2)
t.right(45)
t.seth(arrow_dir)
t.back(arrow_size/2)
t.pendown()
t.fd(arrow_size)
t.penup()
t.begin_fill()
t.left(150)
t.fd(size/5)
t.left(120)
t.fd(size/5)
t.left(120)
t.fd(size/5)
t.right(30)
t.end_fill()
t.back(arrow_size/2)
t.seth(45)
t.back(size/2 * PdfMaze.ROOT2)
t.right(45)
def generatePage():
page = MazePage(name='Page 1', size=(3, 4), start=(0, 0), goal=(2, 3))
random = Random()
for _ in range(30):
page.mutate(random)
return page
class TurtleArt(Flowable):
    def __init__(self, page):
        Flowable.__init__(self)  # initialize the reportlab Flowable base class
        self.page = page
def wrap(self, availWidth, availHeight):
self.width = availWidth
self.height = availHeight
return (availWidth, availHeight)
def draw(self):
t = PdfTurtle(self.canv, self._frame)
PdfMaze(self.page).draw(t)
def main():
doc = SimpleDocTemplate("example.pdf")
styles = getSampleStyleSheet()
story = []
pages = generateMazePair()
for page in pages:
story.append(Paragraph(page.name, styles['Normal']))
story.append(Spacer(1,0.055*inch))
story.append(TurtleArt(page))
doc.build(story)
## Uncomment this to display the PDF after you generate it.
#from subprocess import call
#call(["evince", "example.pdf"])
if __name__ == '__main__':
main()
elif __name__ == '__live_coding__':
page = generatePage()
pdf = PdfMaze(page)
pdf.draw(turtle)
A security technician at a small business is worried about the Layer 2 switches in the network suffering from a DoS style attack caused by staff incorrectly cabling network connections between switches. Which of the following will BEST mitigate the risk if implemented on the switches?
A. Spanning tree B. Flood guards C. Access control lists D. Syn flood
Correct Answer: A

Explanation: Spanning Tree is designed to eliminate network 'loops' arising from incorrect cabling between switches. Imagine two switches, switch 1 and switch 2, with two network cables connecting them. This creates a network loop. A network loop between two switches can cause a 'broadcast storm': a broadcast packet is sent out of all ports on switch 1, including the two links to switch 2; the packet is then sent out of all ports on switch 2, including the links back to switch 1; switch 1 broadcasts it again, and so on, flooding the network with broadcast traffic.

The Spanning-Tree Protocol (STP) was created to overcome the problems of transparent bridging in redundant networks. The purpose of STP is to avoid and eliminate loops in the network by negotiating a loop-free path through a root bridge. This is done by determining where loops exist in the network and blocking links that are redundant. STP executes an algorithm called the Spanning-Tree Algorithm (STA). In order to find redundant links, STA chooses a reference point called a Root Bridge and then determines all the available paths to that reference point. If it finds a redundant path, it chooses the best path to forward on and blocks all other redundant paths. This effectively severs the redundant links within the network.

All switches participating in STP gather information on the other switches in the network through an exchange of data messages referred to as Bridge Protocol Data Units (BPDUs). The exchange of BPDUs in a switched environment results in the election of a root switch for the stable spanning-tree network topology, the election of a designated switch for every switched segment, and the removal of loops in the switched network by placing redundant switch ports in a backup state.
Which of the following allows lower level domains to access resources in a separate Public Key Infrastructure?
A. Trust Model B. Recovery Agent C. Public Key D. Private Key
Correct Answer: A

Explanation: A bridge trust model allows lower-level domains to access resources in a separate PKI through the root CA. A trust model is a collection of rules that informs an application on how to decide the legitimacy of a digital certificate. In a bridge trust model, a peer-to-peer relationship exists among the root CAs. The root CAs can communicate with one another, allowing cross-certification. This arrangement allows a certification process to be established between organizations or departments. Each intermediate CA trusts only the CAs above and below it, but the CA structure can be expanded without creating additional layers of CAs.
A company that was previously running on a wired network is performing office-wide upgrades. A department with older desktop PC’s that do not have wireless capabilities must be migrated to the new network, ensuring that all computers are operating on a single network. Assuming CAT5e cables are available, which of the following network devices should a network technician use to connect all the devices to the wireless network?
A. Wireless bridge B. VPN concentrator C. Default WAP D. Wireless router
A network technician is assisting the security team with some traffic captures. The security team wants to capture all traffic on a single subnet between the router and the core switch. To do so, the team must ensure there is only a single collision and broadcast domain between the router and the switch from which they will collect traffic. Which of the following should the technician install to BEST meet the goal?
A. Bridge B. Crossover cable C. Hub D. Media converter
A technician is setting up a new network and wants to create redundant paths through the network. Which of the following should be implemented to prevent performance degradation?
A. Port mirroring B. Spanning tree C. ARP inspection D. VLAN
Correct Answer: B
Explanation: The Spanning Tree Protocol (STP) is a network protocol that ensures a loop-free topology for any bridged Ethernet local area network. The basic function of STP is to prevent bridge loops and the broadcast radiation that results from them. Spanning tree also allows a network design to include spare (redundant) links to provide automatic backup paths if an active link fails, without the danger of bridge loops, or the need for manual enabling/disabling of these backup links.
A technician needs to limit the amount of broadcast traffic on a network and allow different segments to communicate with each other. Which of the following options would satisfy these requirements?
A. Add a router and enable OSPF. B. Add a layer 3 switch and create a VLAN. C. Add a bridge between two switches. D. Add a firewall and implement proper ACL.
Correct Answer: B
Explanation: We can limit the amount of broadcast traffic on a switched network by dividing the computers into logical network segments called VLANs. A virtual local area network (VLAN) is a logical group of computers that appear to be on the same LAN even if they are on separate IP subnets. These logical subnets are configured in the network switches. Each VLAN is a broadcast domain meaning that only computers within the same VLAN will receive broadcast traffic. To allow different segments (VLAN) to communicate with each other, a router is required to establish a connection between the systems. We can use a network router to route between the VLANs or we can use a ‘Layer 3’ switch. Unlike layer 2 switches that can only read the contents of the data-link layer protocol header in the packets they process, layer 3 switches can read the (IP) addresses in the network layer protocol header as well.
An administrator has a physical server with a single NIC. The server needs to deploy two virtual machines. Each virtual machine needs two NIC’s, one that connects to the network, and a second that is a server to server heartbeat connection between the two virtual machines. After deploying the virtual machines, which of the following should the administrator do to meet these requirements?
A. The administrator should create a virtual switch for each guest. The switches should be configured for inter-switch links and the primary NIC should have a NAT to the corporate network B. The administrator should create a virtual switch that is bridged to the corporate network and a second virtual switch that carries intra-VM communication only C. The administrator should create a virtual switch to bridge all of the connections to the network. The virtual heartbeat NICs should be set to addresses in an unused range D. The administrator should install a second physical NIC onto the host, and then connect each guest machine’s NICs to a dedicated physical NIC
A company would like to connect multiple departments into one network operations center (NOC), yet provide each department with autonomy from one another and enable them to share their high speed Internet connection. Which of the following devices would BEST enable the NOC to accomplish this?
I've searched the forum to see if there is an answer to the question of the Dial Up window coming up on an almost random basis. Typically, when my machine has been asleep for a while and I wake it up, the Dial Up window appears. I've looked in the Task Manager window and nothing spurious appears to be running. I've clicked the connect button and it connects, but goes nowhere!
Anybody got any ideas where to start to try and resolve this annoying little problem? Nothing has changed in the system and it has been happening for about the past three weeks. I've completed a full virus scan and everything is clean.
It suggests a Registry problem that is instructing the Dial Up window to open, but am not sure what to look for.
I run Win98 SE with all the updates. The latest version of ZoneAlarm Pro and Norton AV 2004. I did have ONSPEED on but thought the speed saving was miniscule so took that off.
Have you ticked the box in IE to "never dial a connection"? That will stop it coming up..
Onspeed may have left a footprint. Download this and run it:- click here
CCleaner - Crap Cleaner software download Regards.
Read this some very useful links click here
Thanks for all the above. I will give the tips a go and see what happens.
Magik ®©, I don't want to do that because it is good to go direct to the Dial Up window when firing up OE or IE. Thanks though.
Just as a matter of interest, it would go straight to the dial up; all you do is put a shortcut on the desktop, double click it, and it connects..
Magik ®©, not quite sure what you mean. If I enable the "Never dial a connection", and fire up IE, nothing happens. I have to go back a step, then get a connection, and then go in to IE. Are you saying to do that exercise easier, put a short cut of the Dial up window on the desktop and click that before opening IE?
If for argument's sake you are with BT or whoever, the fact that you have ticked "never dial a connection" will make no difference to the way it works. Just put a shortcut, from say BT, on the desktop and then double click it. Then, as you say, just click IE or Outlook Express, whichever one you use first.
you can have a read here...
the link looks dodgy..so try this..
This behavior occurs because the first DUN connection that is created is automatically set up as the default IE connection. IE dials this connection without being prompted. This behavior is by design, but it can be an issue if you connect to the Internet by using a LAN or if you have selected the Dial whenever a network connection is not present option.
To work around this behavior, either configure the account to connect by using a specific phone connection (DUN connection), or set Internet Explorer to never dial a connection.
Configuring the Account to Use a Specific Connection
In Outlook, on the Tools menu, click E-mail Accounts.
Click View or change existing e-mail accounts, and then click Next.
Double-click the POP3 e-mail account to gain access to its settings.
Click More Settings.
On the Connection tab, under Connection, click Connect using my phone line.
Under Use the following Dial-Up Networking connection, click the connection you want to use.
Setting Internet Explorer to Never Dial a Connection
In Control Panel, double-click Internet Options, and then click the Connections tab.
Click to select either the Never dial a connection check box or the Dial whenever a network connection is not present check box.
If you connect to the Internet on a LAN or by using a cable modem, select the Never dial a connection check box.
/* eslint-disable no-proto */
import { ApiResponseType } from './DefaultResponseProcessor';
export interface ApiExceptionConstructor<ResponseType> {
// eslint-disable-next-line no-use-before-define
new (response: ApiResponseType<ResponseType>, request: Request): ApiExceptionInterface<ResponseType>;
}
export interface ApiExceptionInterface<ResponseType> {
getRequest: () => Request;
getResponse: () => ApiResponseType<ResponseType>;
}
/**
* Default API Exception
*/
export default class DefaultApiException<ResponseType> extends Error implements ApiExceptionInterface<ResponseType> {
/**
* Response from the server that threw an error.
*/
private response: ApiResponseType<ResponseType>;
/**
* Constructor.
*
* @param response - Processed response from server.
*/
public constructor(response: ApiResponseType<ResponseType>) {
super(`Api exception: ${JSON.stringify(response.data)}`);
this.response = response;
}
public getResponse() {
return this.response;
}
public getRequest() {
return this.response.request;
}
}
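As a quick illustration, here is a sketch of how a response processor might throw this kind of exception and how a caller could catch it. The `ensureOk` helper and the simplified request/response shapes below are illustrative stand-ins, not part of the library:

```typescript
// Simplified stand-ins for the library's Request and ApiResponseType (illustrative only).
interface FakeRequest { url: string; }
interface FakeResponse<T> { data: T; request: FakeRequest; }

class ApiError<T> extends Error {
  constructor(private response: FakeResponse<T>) {
    super(`Api exception: ${JSON.stringify(response.data)}`);
    // Restore the prototype chain so `instanceof` works on ES5 targets.
    Object.setPrototypeOf(this, ApiError.prototype);
  }
  getResponse() { return this.response; }
  getRequest() { return this.response.request; }
}

// Hypothetical caller: reject any non-2xx response.
function ensureOk<T>(response: FakeResponse<T>, status: number): T {
  if (status < 200 || status >= 300) throw new ApiError(response);
  return response.data;
}

try {
  ensureOk({ data: { error: 'Not Found' }, request: { url: '/users/42' } }, 404);
} catch (e) {
  if (e instanceof ApiError) {
    console.log(e.message);          // Api exception: {"error":"Not Found"}
    console.log(e.getRequest().url); // /users/42
  }
}
```

Keeping the full processed response on the exception, rather than just a message string, lets error-handling code inspect the status and original request without re-fetching anything.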

AWS Cognito is a popular managed authentication service that provides support for integrated SAML 2.0-compliant identity providers (IdPs) such as Azure Active Directory, Okta, Auth0, OneLogin, and others.
One use case for Cognito is to serve as a middleware or proxy layer between an identity provider and a backend web application. Instead of implementing support for SAML directly into the application (and dealing with the proper security configuration and variety of standards), developers can use Cognito to do the heavy lifting.
Many IdPs also support using groups for user management. This allows a user to rely on their Active Directory, Okta, or other IdP groups for user RBAC rather than manually configuring access locally within your application.
Fortunately, these group mappings can be passed from the IdP, through Cognito, and to your backend application. AWS wrote a blog post to highlight how Cognito can be used to collect group mappings, but they stopped short of explaining how to actually pass the group mappings to a backend application via the “/userInfo” Cognito endpoint.
First, a user pool must be configured in Cognito with the correct settings to support collection of the user’s groups and passing of the profile information.
I won’t walk through the entire process of configuring a user pool, because it is well-documented and not the core focus of this post, but here are a couple things to note:
- You should set the user’s email address as a required attribute if you plan to map validated users to your backend database.
- Make note of the Pool ID because it will be used later when configuring the third-party IdP
- If you are creating a production application, be sure to set the SES Email provider and domain names, since the sandbox allotment in Cognito will not be sufficient for production use.
Creating Custom Attributes
To collect the groups from the IdP, you must configure and enable a custom attribute within Cognito. This can be done from the “Attributes” page:
- I recommend naming the attribute “groups” because let’s not get too fancy.
- I set the groups to the maximum size of 2048 characters because sometimes users coming from enterprise IdPs have a lot of groups. If you exceed this limit, some IdPs, like Okta, support using “starts with” or regex patterns to filter which group names are sent in the SAML response to Cognito.
- Ensure the “type” is “string” and “Mutable” is checked.
- Be sure to save!
- Once saved, note that the “Name” changes to “custom:groups”. This is important because it will be the name of the field used later when accessing the “userInfo” endpoint.
Enabling the Custom Attribute
Once you create the “custom:groups” attribute, you need to activate it for the app client. Navigate to “App Clients” and then click “Show Details” and then “Set attribute read and write permissions.”
Configure App Client Settings
On the “App Clients” settings page, ensure that “profile” is selected under “Allowed OAuth Scopes.” Without this setting, the additional custom attributes, including “groups,” passed by the IdP, will not be accessible to your backend via the “userInfo” endpoint.
Third-Party IdP Setup
Don’t sign out of the Cognito console just yet; we’ll come back to it shortly. But first we need to configure the third-party IdP. I will use Okta as an example for this post, but Active Directory, OneLogin, and Auth0 all have similar configuration options.
Create a New Application
Go through the application setup process to create a new application using the following settings (replace the bold parts with your settings):
- Single Sign On URL:
- The same value can be used for “Recipient URL,” “Destination URL,” “Sign Out URL,” and other similar fields.
- Audience Restriction:
- Name ID Format:
Email (note: this may be “User name” or another value depending on how you configured Cognito).
Be sure to configure any other required fields according to your IdP’s documentation.
For Cognito to recognize the user and pass the required fields to your backend application, the IdP must pass certain attributes. The names of these attributes are configurable, but they must match across Cognito and the third-party IdP.
Group Attribute Statements
Each IdP handles this a bit differently, but most allow you to pass the group names along using a similarly-named attribute definition. In Okta's case, this is done with a group attribute statement whose name matches the attribute Cognito expects.
In Okta, I added my user to two groups, “test-group-one” and “test-group-two” for testing purposes.
Now, you must get the IdP metadata to create an IdP mapping in Cognito. Most IdPs allow you to export an XML file or provide a configuration URL. Okta provides both options, which can be accessed from the “Sign On” tab of the application.
Back to Cognito
Add an Identity Provider
Now that you have the third-party IdP metadata URL, you can create an identity provider in Cognito. This is done via the “Federation” > “Identity Providers” page. From there, select “SAML” and create a new provider using the URL you obtained from Okta.
Define Attribute Mappings
You now need to tell Cognito which attributes from the provider should be collected and mapped to attributes in Cognito. This is done from the “Attribute mapping” page. Select your new provider from the list and then enter each of the attributes to collect. These should match the same names as entered earlier in Okta. For example, I am collecting the user’s email address under the attribute name “http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress”.
Be sure to also define the “groups” attribute and map it to the “custom:groups” user pool attribute.
Enable the Identity Provider in the App Client
The final step is to return to the “App client settings” page and enable the new third-party IdP.
Frontend + Backend Changes
Depending on how your frontend environment is configured (single-page app, jQuery, framework, etc.), you will need to configure the sign-in page to match the page defined in the "Callback URLs" portion of Cognito and then handle the "access_token" that is returned. You can also use the AWS Cognito JS SDK to handle this seamlessly in your application.
Once the access_token is retrieved, it can be sent to your backend API or used to fetch the user’s groups directly by passing the authorization header. This call looks like:
curl -X GET https://YOUR-DOMAIN.auth.us-east-1.amazoncognito.com/oauth2/userInfo -H 'authorization: Bearer <access_token>'
The response, if everything worked, will look like:
"custom:groups": "[test-group-two, test-group-one]"
These custom groups (which came from the groups we created in the Okta IdP), can be parsed and used for RBAC within your application.
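Note that Cognito returns `custom:groups` as a single string, not a JSON array, so the backend has to split it before doing RBAC checks. A minimal sketch (the helper names are mine, and it assumes group names never contain commas or brackets):

```python
def parse_cognito_groups(raw: str) -> list[str]:
    """Split Cognito's "[group-a, group-b]" string into a list of names.

    Assumes group names themselves contain no commas or brackets.
    """
    raw = raw.strip()
    if raw.startswith("[") and raw.endswith("]"):
        raw = raw[1:-1]
    return [name.strip() for name in raw.split(",") if name.strip()]


def user_in_group(userinfo: dict, group: str) -> bool:
    """RBAC check against the /oauth2/userInfo payload."""
    return group in parse_cognito_groups(userinfo.get("custom:groups", ""))


userinfo = {"custom:groups": "[test-group-two, test-group-one]"}
print(parse_cognito_groups(userinfo["custom:groups"]))
# → ['test-group-two', 'test-group-one']
```

A missing or empty attribute simply yields an empty group list, so `user_in_group` fails closed for users whose IdP sent no groups.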
|
OPCFW_CODE
|
Make a Cut The Rope Game using SpriteKit: Part 2
This tutorial covers how to create a game like Cut The Rope from start to finish using SpriteKit and LevelHelper 2. If you haven’t checked part 1 of the tutorial you should do so here.
As explained in the first installment, this series takes the form of several videos, the second of which you’ll find below. I’ve also included prerequisites, video chapters and other useful info below. Without further ado, let’s get started!
Note: After part 1 of the tutorial was released I updated the LevelHelper-API to support simpler collision handling and also the newest Box2D library.
If you intend to use the project created by yourself in part 1, you’ll need to update the LevelHelper2-API folder and the Box2d library by taking the latest version from the repository here.
Topics covered in Part 2
• Creating buttons
• Changing scenes
• Creating ropes that can be cut
• Collision handling
• Disabling collision contacts
• Creating a user guide
Download everything you need for this part of the tutorial:
LevelHelper 2 – The tool we will use throughout the creation of our game.
Xcode – Apple IDE that we will use to compile and build our game.
Game Assets – The image assets we will use in this part of the tutorial.
The Game From Part 1 – The game with the progress from part 1 of the tutorial.
This entire tutorial is available in the video above. For your convenience, I’ve split it into chapters below so that you can access each part with ease at a later time. Click on the links below to access the desired chapter.
- Introduction – Shows what we will create in this part.
- Adding Buttons – Shows how to use sprites as buttons.
- Adding Actions to Buttons – Shows how to capture the node where the user clicked and execute an action.
- Prepare the game area. Spritesheets and background – Shows the preparation for the game area.
- Prepare the game area. Layout sprites – Shows how the game area was made.
- Ropes and other interactive elements – Shows how to create ropes.
- Publishing and loading the level. Using debug drawing – Shows how to load the level and how to use debug drawing to further help with collision handling.
- Collision Handling – Shows how to detect collision and perform actions based on that.
- Disabling collision contacts – Shows how to disable forces when two objects collide.
- Helping your gamers. Prepare a basic guide – Shows how to guide your users.
- Programming the game guide logic – Shows how to remove the game guide if the user has already seen it.
- Final words – What we will do next.
The entire project for this part of the tutorial can be downloaded from here.
I hope you’ve enjoyed this tutorial and, as always, if you have any questions or need further help, please write on my forum and I’ll do my best to reply ASAP.
This tutorial is also available on www.gamedevhelper.com
Author: Bogdan Vladu
|
OPCFW_CODE
|
Alright, my computer that I've had for roughly a month has been completely fine up until today, when it decided to randomly shut off while I was browsing (watching YouTube, not even using a lot of resources). I turned it back on, and 20 minutes of browsing later it shut off again. It had been on for probably a good 24 hours because I left it on last night to download something, but I've done that before. Everything was bought new and retail and appears to work fine.
Here are the temperatures currently according to CPUID HW Monitor:
TZ00 - stays solid 28 degrees celsius
TZ01 - stays 30
SYSTIN - stays 30
TMPIN3 - 25 min 39 max
29 min 49 max stays at around 35
stays solid 35
31 min 37 max
Keep in mind these are temps while just doing browsing or watching youtube videos.
Here are my specs:
GPU: Zotac GTX 680
CPU: i7 3770k w/ Hyper 212 Evo
Mobo: ASUS sabertooth z77
RAM: G.Skill 8GB @ 1866MHz
Hdrive: slow crappy 300gb (don't even know RPM or brand)
Monitor: just bought three days ago, only new part: BenQ XL2420T
Tower: HAF 912
PSU: 850W gold certified corsair professional edition (going to SLI soon)
2x 200mm fans: one in front one on top
2x 140mm fans: one back one side
2x motherboard assist fans
I'm pretty sure it's not my power supply because not only is it plenty of power but it's a great brand and it's gold certified. The only thing that I can think of is maybe it's my new monitor? But why would it make it shut off? Please help me I'm really worried.
I ran a tdss killer to check for rootkits and did two quick scans using norton and malwarebytes, nothing was found.
I'm skimming through that link of yours, and it said that if a computer or hard drive is over 3 years old (which mine is) that could be the problem. Could it be that when I got my new monitor it's been stressing my hard drive to perform better and it can't keep up? Or does that not make any sense.
I'll do a full scan and read the rest later. Also, I should mention that not too long ago Chrome had to close because it had an error or something, which has never happened before.
It wouldn't make sense for it to be over heating because I was playing Crysis 2 on max everything on 1920 x 1080 and it even prompted me to use Windows Basic theme because my "computer is slow" that's how much resources it was using, it would have made sense for it to turn off during my Crysis 2 session. Instead it turns off while I'm watching youtube, and the temps all seem fairly low (at least they're the same for like a month).
Today when I woke up my computer wouldn't even turn on, and I was very upset. I knew it wasn't my hard drive because if my hard drive died it would still turn on. As I'm unplugging everything to look inside, I notice that my power cord was very loose. I plug it in fully and bam it turns on no problem.
My guess is that I nudged the power cord a few times with my foot causing it to turn off.
I'll let you know if it turns off again. In the meantime, I'm looking for a new hard drive just in case. I was thinking a Western Digital Caviar Black 500GB since it's cheap and reliable. Are there any other hard drive brands you would recommend other than WD?
Ok I'm here through my other computer. When I woke up and tried to turn on my computer it wouldn't. I checked to make sure the power cord was firmly in and even tried other power cords. It would turn on for a split second then go back off.
I opened up my case and took off the PWR GROUND and tried to turn it on using a screw driver, that didn't work. The motherboard has a green light indicating that it's on and possibly working.
I took out my power supply and put it in the one I'm using now and exact same results. However I didn't have a 4 pin connector thing that is right beside the main power connector (the mobo is an old lanparty nf4) not sure if that wouldn't allow it to turn on.
So my guess is that the power supply has gone. HOW is this even possible?! I spent extra to make sure I got the best power supply I possibly could, gold certified and even modular, and yet this happens?
Now what happens? Do I have to ship it to Corsair and have them replace it? I've never done this before. Urgh this is so frustrating.
|
OPCFW_CODE
|
Bugzilla – Bug 41452: Clean Fedora 15 install and PackageKit can't install git.

Installing via yum/PackageKit fails with "Failed to install packages: user declined simulation":

[root@localhost ~]# yum -y install system-config-selinux
Loaded plugins: presto, refresh-packagekit
Setting up Install Process
No package system-config-selinux available.
Failed to install packages: user declined simulation

# yum install yumex
Loaded plugins: auto-update-debuginfo, langpacks, presto, refresh-packagekit
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package yumex.noarch 0:3.0.1-2.fc15 will be installed
The following packages have to be installed:
pexpect-2.3-6.fc15.noarch - Pure Python Expect-like module
Proceed with changes? [N/y]
The transaction did not proceed.

Actual results:
# yumex
bash: yumex: command not found...
$ docbook2pdf
bash: docbook2pdf: command not found...

Comment 6, Dave Hodgins, 2011-08-19 03:50:05 CEST: Still a problem.
Comment 10, Dave Hodgins, 2011-09-03 02:41:20 CEST: I think I understand what's happening now. Having packagekit install systemd-sysvinit would likely break my system.
Comment 3, Richard Hughes, 2012-02-02 04:28:47 EST: Is this fixed on F16?

A separate thread concerns an iOS simulator crash when running an Appcelerator Titanium app:

System.AggregateException: One or more errors occurred. ---> System.ArgumentNullException: Value cannot be null.
Parameter name: src
at (wrapper managed-to-native) System.Runtime.InteropServices.Marshal:copy_from_unmanaged (intptr,int,System.Array,int)
at System.Runtime.InteropServices.Marshal.Copy (IntPtr source, System.Byte[] destination, Int32 startIndex, Int32 length)
at Xamarin.Simulator.Server.ScreenManager+c__AnonStorey0.<>m__0 (Int32 i)

Answer (Fokke Zandbergen): You mention that you use a relatively old Titanium SDK, from before the Xcode and iOS versions you target. Otherwise, the path from the VS iOS emulator doesn't match (it takes the full name).
Follow-up (Nirmal Patel): I upgraded back to SDK 5.1.2.GA and am now able to run the app in the simulator.
|
OPCFW_CODE
|
Bulk File Transfer with Compression Measurements
Bulk throughput measurements | Bulk throughput simulation | Windows vs. streams | Effect of load on RTT and loss | Bulk file transfer measurements | QBSS measurements
We verified that bbcp was indeed setting the window sizes correctly by using the Solaris snoop command on pharlap to capture packets and looking at the stream initiation SYN and SYN/ACK packets.
For each copy, we noted the transfer rate reported by bbcp, the file size, the window size, the number of streams, the compression factor and compression achieved, the Unix time user, system/kernel, real times, the bbcp source and target host cpu usage reported by bbcp. We also noted the loaded (when bbcp was running) and unloaded ping times (when bbcp was not running). Between each measurement we slept for the duration of the previous bbcp measurement to limit the load imposed by bbcpload.pl and to allow unloaded ping measurements to be made. We also noted the operating system and version, together with the number of cpus and their MHz, for the remote host. The maximum number of streams allowed by bbcp was 64. The host at SLAC (pharlap) was a Sun E4500 with 4 cpus running at 336 MHz, a Gbps Ethernet interface and running Solaris 5.8.
The source file was read from /tmp. On pharlap /tmp is stored in swap space. The source file used was a 60Mbyte BaBar Objectivity file. The destination file was always written to /dev/null.
All measurements were made for a duration of approximately 10 seconds (see Measurement Duration for more details).
The average MHz-seconds used for a compression factor of 0 (no compression) was 8.3 +- 0.91 MHz-secs, and for a compression factor of 1 (compression = 6.9) it was 57.3 +- 0.46 MHz-secs.
Next we made measurements of bbcp throughput to 22 remote hosts with compression factors of 0 and 1, and with an optimal TCP window size and number of streams selected for each host. The results are shown to the right, with the compression factor of 0 shown in blue and the compression factor of 1 shown in red diagonal hatching. It can be seen that the maximum compressed throughput is about 50 Mbits/s. If the uncompressed throughput exceeds this rate (as in the case of NERSC2, ANL, LANL, Caltech, SDSC, Stanford, NERSC, Mich, Wisc, and LBL) then there is no improvement from using compression. If the uncompressed throughput is < ~50 Mbits/s, then compression can help (by more than a factor of 4 in the case of KEK, which has only a 10 Mbit/s bottleneck bandwidth between it and SLAC). When using a compression factor of 1 (or compression of 6.7), the average compressed bbcp throughput is 58 +- 0.46 Mbits/s.
The consistency of the compressed throughput indicates that there is a common cause. To ascertain whether this common cause was the measuring host (pharlap), we repeated the measurements from antonia, a host with 2 x 532 MHz cpus running Linux 2.4 and a Gig Ethernet NIC, and from hercules, a host with 2 x 1131 MHz cpus running Linux 2.4 and 2 Gig Ethernet NICs. Comparing the antonia results with pharlap's, it is apparent that the maximum uncompressed throughput is reduced from about 400 Mbits/s to about 165 Mbits/s. This is believed to be because the pharlap source file is read from memory (/tmp is in swap space) whereas on antonia and hercules it is read from disk (/dev/sda2 for antonia and hercules and /dev/sda9 on testlnx05).
For the 1131MHz cpu (hercules) it can be seen that uncompressed throughputs of over 400Mbits/s are achievable, and the median compressed throughput is over 140Mbits/s. To understand these compressed throughput values better we measured the system time gzip took to compress the 380MByte Objectivity file on the measurement hosts and reported this as Mbits/s. The table below compares the results from all the measuring hosts. It can be seen that there is reasonable agreement between the median bbcp compressed throughput and the gzip throughputs, with gzip typically being 10-17% lower. This reduction may be due to the gzip source and destination being on the same host, whereas the bbcp measurements used separate source and destination hosts. To pursue this further we used bbcp to compress and copy the above file from and to the same host and measured the source process cpu seconds and Mbits/s. The graph to the right of the table shows the median bbcp compressed (compression factor 1, compression 6.9) throughput from the measuring host versus the MHz of the measuring host's cpu:
|Host||OS||# cpus||MHz||NIC Mbps||Median compressed MBits/s||Stdev||Gzip Mbits/s||Bbcp sce=tgt Mbits/s||c/x|
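The gzip cross-check above can be reproduced in a few lines: time how long compressing a buffer takes and report the input rate in Mbits/s. This sketch (my own, not from the measurement scripts) uses Python's zlib, which implements the same DEFLATE algorithm as gzip; absolute numbers will of course differ from the hosts measured here.

```python
import time
import zlib


def compression_throughput_mbps(data: bytes, level: int = 6) -> float:
    """Mbits/s of *input* data that this CPU can compress at the given level."""
    start = time.perf_counter()
    zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    return len(data) * 8 / elapsed / 1e6


# Repetitive data stands in for the BaBar Objectivity file used above.
sample = b"event header 0123456789 " * 500_000   # ~12 MB
rate = compression_throughput_mbps(sample)
```

Timing only the compress call (not I/O) mirrors the intent of reporting gzip's system time as a throughput: it isolates the CPU cost of compression from disk and network effects.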
For an uncompressed copy, the average ratio of source_host MHz-seconds / target_host MHz-seconds was 1.2 +- 0.3. The average target host MHz-seconds for Solaris (5.6, 5.7 and 5.8, a total of 5 hosts) was 5460 +- 367 MHz-seconds, and for Linux (2.2 & 2.4, a total of 20 hosts) was 7805 +- 745. There was little variation within the various Solaris versions or the various Linux versions. We also looked for a correlation between target MHz-seconds and the throughput in Mbits/s or the number of streams, but could find little evidence for any correlation. See the plots below:
Looking at the bbcp compression throughput graph for Hercules above, we see that the Stanford compressed throughput rate (96 Mbits/s) is much lower than the median (~ 143 Mbits/s), and its uncompressed bbcp throughput is about 90Mbits/s so there is no lack of network bandwidth. The Stanford cpu is a single 299MHz cpu running Linux 2.2. Thus if we take the c/x ratio to be ~ 3.32, the best compressed throughput is limited by its cpu to about:
By fixing the window size at 16Kbytes and varying the number of streams with no compression for bbcp copies from SLAC to ANL, we were able to increase the file copy throughput in a fairly linear-with-streams fashion from about 1.3Mbits/s to over 115 Mbits/s. Using the same window sizes and varying the numbers of streams for compression factors of 1 through 9 we were able to visualize the effectiveness of compression on throughput for varying uncompressed file copy rates. This is seen on the graph to the right. It is seen that for uncompressed copies the throughput is less than about 50Mbits/sec for fewer than 18 streams. For 18 or more streams the uncompressed throughput is greater than 50 Mbits/s. Thus, as can be seen from the graph for a window size of 16KBytes between SLAC and ANL, compression is effective in increasing throughput for fewer than 18 parallel streams. It can also be seen that a compression factor of 1 is most effective.
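The pattern in these measurements can be summarized with a simple back-of-envelope model (my own approximation, not from the report): the network moves compressed bytes, so it can deliver up to bottleneck x compression-ratio of original data per second, but never faster than the CPU can compress, which on pharlap was observed to be about 50 Mbits/s.

```python
CPU_COMPRESS_LIMIT_MBPS = 50.0   # pharlap's observed ceiling (approximate)


def compressed_rate_mbps(bottleneck_mbps: float, ratio: float = 6.9,
                         cpu_limit_mbps: float = CPU_COMPRESS_LIMIT_MBPS) -> float:
    """Deliverable rate of original data when compressing on the fly:
    capped by the network carrying compressed bytes and by the CPU."""
    return min(cpu_limit_mbps, bottleneck_mbps * ratio)


def compression_helps(bottleneck_mbps: float, ratio: float = 6.9) -> bool:
    """Compression pays off only when it beats the raw network rate."""
    return compressed_rate_mbps(bottleneck_mbps, ratio) > bottleneck_mbps


# KEK (10 Mbit/s bottleneck): compression lifts the rate to ~50 Mbit/s.
print(compressed_rate_mbps(10.0), compression_helps(10.0))    # 50.0 True
# ANL-like path (>100 Mbit/s uncompressed): the CPU cap makes it a loss.
print(compressed_rate_mbps(115.0), compression_helps(115.0))  # 50.0 False
```

This reproduces the qualitative finding above: hosts whose uncompressed throughput already exceeds the CPU compression ceiling gain nothing, while KEK's 10 Mbit/s bottleneck gains roughly a factor of 5.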
|
OPCFW_CODE
|
My application is not using the current RadEditor but the version one release back. The problem is that once a letter has been generated, extra HTML is placed in the generated letter, causing unexpected blank lines to appear.
The template (letter) shown before the letter is generated is stored in a SQL Server 2016 database, in a column called 'stringtemplate' set up as varchar(max), null. I edit the template values directly using SQL Server Management Studio.
Here are 2 examples of problems that I am having:
1. In one template where the value is set up as '<br/><br/>', the value ends up being '<p></p><p></p><p></p>' when the letter is actually generated.
2. in another template where the value is setup as:
<span style="font-size: 13px;font-family: arial,sans-serif; color: black;"><br />
with the final </span> at the end of the template,
the letter that is generated actually has the following html:
<span style="color: black; font-family: arial,sans-serif; font-size: 13px;"><br />
<span style="color: black; font-family: arial,sans-serif; font-size: 13px;">
with the final </span> at the end of the letter.
Thus would you tell me and/or show me what I can do to solve the problem?
7 Answers, 1 is accepted
Could you confirm that both HTML snippets are valid HTML before being given to RadEditor? This is important to ensure proper functionality. If the HTML string is invalid, the browser may change it unexpectedly, and RadEditor's ConvertToXhtml filter will also attempt to fix it.
What I can suggest that you test is the following demo: https://demos.telerik.com/aspnet-ajax/editor/examples/builtincontentfilters/defaultcs.aspx. You can paste the original HTML strings you have in the HTML mode of the editor, flip to Design and back to HTML to see how the content changes depending on the various filter combinations.
You can also run this project locally so you can change the NewLineMode property of the control to Br to see if this provides results closer to your preference. Note that doing so will degrade the end-user experience, mostly around working with lists, creating paragraphs of text, and their alignment.
On the second snippet - this behavior is expected because a paragraph (a block element) cannot be nested in a span (an inline element). Disabling the ConvertToXhtml filter should help you avoid this change, but it can also cause other issues like increased chance of invalid HTML and other combinations that may not be well understood by an HTML parser.
On a side note - if you only want to generate emails from that HTML, and not have end users edit them, perhaps you do not need a RadEditor at all, and string operations on the server can let you do that generation.
Thank you for your response!
I have the following additional items to mention to you:
1. I have users who have said that within the last 6 months, after the latest Telerik tool was installed, more extra blank lines have been generated. They have to remove all these extra blank lines when they start to work with these generic letters (templates). The SQL in the database has not changed. I am going to watch this situation, but I don't think this is possible, correct? If this is possible, can you tell me how this occurs and what I can do to solve the problem?
2. You had a side note about generating email messages. In this application, the user edits letters. That is why the application uses the RadEditor.
I was not sure what the setup you had is, so I had to mention that. It seems you will be needing the editor.
There haven't been recent changes in the control and such an upgrade is not expected to cause more or less whitespace. In my personal experience, users often add empty paragraphs while trying to space out something, they tend to do that in MS Word as well. Could you check whether this is the user behavior or the editor that adds the empty paragraphs?
Also, what happens if you set the NewLineMode to Br (I'd do that with a subset of users who you can talk to, if possible, because it will cause some issues with the control behavior, see here for more details)?
I would also suggest you play around with the filters the control offers to see if something there is causing this (e.g., ConvertToXhtml may be trying to fix some malformed content).
1. The users say they are not adding the extra spaces. Given that, would you tell me how the editor can be adding extra paragraphs? Would you show me how to make the editor not add the extra paragraphs?
2. Would you tell me where I can set the NewLineMode to Br ? What is the default value for the NewLineMode when upgrading to a newer version of the Telerik tool?
I am assuming this is added to the web.config file by default or somewhere else? I am using whatever the defaults are set for the Telerik controls whenever I upgrade to a new version of the Telerik tool.
The editor does not have a setting that instructs it to add or not to add empty paragraphs. Generally, this should not happen, and the only case I've seen similar behavior is when the initial HTML passed to it is invalid XHTML. With this in mind, to help further I will need to be able to reproduce this problem in order to debug it.
On NewLineMode - this is a property on the control tag, and it defaults to P (as in paragraph). So, you set it like this: <telerik:RadEditor runat="server" ID="RadEditor1" NewLineMode="Br"></telerik:RadEditor>
The P is the default value since Q2 2014, before that it was Br.
If you need to get back to this, please also post an MCVE so we can investigate.
|
OPCFW_CODE
|
Michael Palin was appointed President of the Royal Geographical Society at the beginning of the summer, and in a recent online travel survey he was named as 'Britain’s Favourite Travel Guide’ – ahead of Graham Greene, Bill Bryson and Colin Thubron. For the former Python and beloved presenter of numerous epic BBC travelogues it was further proof of his status as a national treasure.
These diaries cover the years immediately preceding Palin’s circumlocution of the globe for the successful television series Around the World in 80 Days. In 1980, after his first foray into reportage on the move as a presenter for one of the BBC’s Great Railway Journeys, he records critical acclaim for the programme: 'I really seem to have tapped the ageing, middle class audience.’ Even they, the loyal fan base of his delightfully anodyne journeys and immensely likeable on-screen persona, may find these diaries stretching their goodwill. Depending on age and stamina, Palin’s fans will find this book a useful companion in the wee small hours as they contemplate their own mortality. I only hope sleep will rescue them long before they reach page 587.
By calling this volume Halfway to Hollywood the publishers may have hoped for some glitz and glamour from the years when Palin starred in films like Terry Gilliam’s Time Bandits (1981) and Brazil (1985), Alan Bennett’s A Private Function (1984) and Charles Crichton’s A Fish Called Wanda (1988). These are quite simply some of the most memorable films of the decade, and Palin’s acting showed him to be more versatile than his reputation as one of the Monty Python team might have suggested. On some of these projects Palin contributed his skills as a writer, too.
In this weighty volume of diaries we learn very little about the actors and directors with whom he worked, and too much about his chiropodist, his dentists and even the man who ran his local corner shop in Gospel Oak. When actors enter the narrative his revelations are rarely surprising: Maggie Smith is publicity shy but a remarkable talent; Jamie Lee Curtis is good tempered and bursts with childlike enthusiasm; while Kevin Kline is a method actor who probably takes himself too seriously. Trevor Howard drank too much.
Palin’s editors have done him a great disservice. The contents of these diaries do not merit this lengthy treatment. They are repetitive and suck the lifeblood from the sections of his writing which hint at a far greater talent. There are an interminable number of entries concerning meetings with publicists, film financiers or directors with whom he might collaborate as a writer or actor. When Palin is close to a historic or memorable event his treatment of it too often leaves us desperate for more detail. While working closely with George Harrison, for example, Palin tells us that on hearing of John Lennon’s murder he rings Harrison to send condolences. Harrison isn’t answering the phone and Palin says limply that he 'left a message’. Frustratingly, when he next meets Harrison only a few weeks later, he doesn’t allude to Lennon at all.
Diaries can, of course, be much more revealing than the simple sum of their daily entries. Palin has time to muse on world events; the IRA in Northern Ireland, the Israeli bombardment of West Beirut or Mugabe’s victory in the 1980 Zimbabwean elections which 'must give great heart to guerrilla movements in other countries’. Palin sets himself up as a fairly standard issue Left-wing thespian with such comments, although he manages to retain an affectionate regard for the Royal Family and acknowledges that he is materially well off without pretending to be guilty about it.
Tiny pieces of showbiz gossip occasionally intrude. Janet Street-Porter, with whom Palin appears on Jonathan Ross’s show 'evidently rates herself rather highly’ and 'pisses Palin off’ by leaving a tray of half-eaten food outside his dressing-room door rather than her own. Appearing in the ill-conceived It’s a Royal Knockout in 1987, we are told that John Travolta 'refuses to take part in two of the games in case he gets his hair wet’. When it comes to showbiz bitchiness, none of this is exactly Noël Coward or Kenneth Williams. At one stage he reveals that a piece he wrote for the New Yorker is rejected as too dull and conventional: 'a warning for all my writing’. Sadly, a lesson not learned as a diarist.
Through the tedious mire of dental appointments, train journeys, taxi rides and minor family events – such as being woken when his wife takes an early flight to go on a skiing holiday with friends – there are brief glimpses of pathos and charm. Palin writes best about his family, teaching his youngest daughter to ride a bicycle, taking his son to dinner for a birthday treat or Mary (his 80-year-old mother) to New York on Concorde. He also reveals that his sister committed suicide and treats sensitively the issue of his own helplessness in the face of her recurring fits of depression. After a memorial service he writes 'the memory of her will recede slowly but gently into the past. The tears will flow less easily (although they are pouring down my face as I write).’ This is the Michael Palin with whom the public has fallen in love. A man whose ordinary likeability makes us feel we know him, and that he is incapable of nastiness or an outburst of bad temper. It is a pity that the diaries weren’t edited to make the most of those qualities.
Halfway to Hollywood: Diaries 1980-1988
By Michael Palin
WEIDENFELD & NICOLSON, £20, 621pp
Available from Telegraph Books 0844 871 1516
Reincarnation Of The Strongest Sword God
Chapter 2733 – Garrisoning Twin Towers
“Now that Zero Wing has officially garrisoned Sky Spring City, can we activate Sky Spring City’s City Teleportation Array?” Shi Feng asked.
“No, no, no. The materials the Twin Towers Kingdom uses are the best. No issues will arise even after a thousand years of use,” Vico assured sincerely while shaking his head.
Sky Spring City was bound to become one of the crucial hubs of the eastern continent. The money and resources it could rake in would substantially surpass those of many royal capitals. If developed properly, the city could even rival imperial capitals.
“How much will it cost to construct a single large-scale teleportation array?” Shi Feng asked.
Complete Legacies for Tier 3 classes!
“Fine, so be it. However, I want everything done within two hours,” Shi Feng said after checking the time.
If players did not possess overwhelming strength, then the moment they left the confines of NPC cities, they would have to fight with their lives at stake. That was because both NPCs and monsters had gained sentience after the major update, and they would no longer let players kill them like idiots.
Sure enough, the Guild still can’t garrison the capital. Shi Feng was not particularly surprised by Vico’s refusal. “Since that’s the case, Zero Wing will garrison Sky Spring City.” Although the War God’s Temple now recognized Zero Wing as an official Guild in God’s Domain, that was only acknowledgement and nothing more. Initially, Shi Feng had thought that, given the damage the Twin Towers Kingdom had suffered from the abyssal monsters, Zero Wing might have a chance to garrison its capital.
Thus, grinding monsters would no longer be simple, repetitive fights for EXP. Instead, it would become a battle in which players bet their lives for EXP. In situations where the two sides were roughly equal in strength, every fight would be incredibly taxing. In fact, after the major update, most players would no longer dare to fight monsters that were significantly higher-leveled than they were.
Although he had managed to execute his plan in time, its success still depended heavily on how much the NPC population grew after the first population expansion. If the increase in NPC numbers was too drastic, even Zero Wing would be overwhelmed. After all, the Guild simply had too few Tier 3 combatants right now. Meanwhile, Tier 3 combatants were the minimum requirement for maintaining public order in Guild Towns.
This was because the Secret Covenant Tower was a holy land for grinding levels and obtaining Guild resources.
Yet, despite their lack of intelligence, the monsters inside awarded the same amount of EXP as monsters in the outside world. The only problem was that entering the Secret Covenant Tower cost Magic Crystals. Moreover, the monsters inside did not drop Coins or Magic Crystals, and the value of the materials they did drop was negligible.
Compared to the Tier 3 Legacies scattered across the outside world, a complete Legacy could reduce the difficulty of players’ Tier 3 Promotion by a significant margin. A complete Tier 3 Legacy could allow a Guild to gain many Tier 3 players within a short period. It wasn’t an exaggeration to say that, for Guilds, a complete Tier 3 Legacy was a hundred times more valuable than a Fragmented Legendary item.
Next, I’ll have to see how much the NPC population increases during the first wave. Shi Feng’s gaze turned solemn after Vico left the room. Hopefully, the number isn’t too significant.
Obtaining the resources Sky Spring City had to offer was only one of his goals in having Zero Wing garrison the city. What was more important for Zero Wing was the City Teleportation Array that came with garrisoning a city. This was also the main reason the various superpowers had fought frantically to secure a reserved seat.
“Although it is a little far, as long as Milord makes the full payment, we can act immediately,” Vico said, gritting his teeth as he looked at the map.
However, it was a different story inside the Secret Covenant Tower.
“That won’t be a problem,” Vico said confidently. “May I know which Guild City you wish to build the teleportation array in?”
“If it is just one, the materials fee is 100,000 Gold Coins and 30,000 Magic Crystals. The labor cost is an additional 5,000 Gold,” Vico said. “In total, it will cost 105,000 Gold and 30,000 Magic Crystals.”
Fortunately, Shi Feng didn’t have to concern himself with such a problem.
The numerous monsters kept in the Secret Covenant Tower’s special space had already gone mad and did not possess any consciousness or intellect. The only thing on their minds was slaughter.
“Silverwing City!” Shi Feng answered after giving the question some thought. Silverwing City was pivotal to Zero Wing’s future development and its resistance against Saint’s Hand. Thus, it was crucial that Silverwing City developed as quickly as possible.
After thinking it through, however, he found the situation reasonable. In his previous life, the Twin Towers Kingdom had shone brightly on the continent of God’s Domain, thanks to the Tower of Time and the Secret Covenant Tower it housed. It was a kingdom that stood far above other kingdoms.
Even so, it would seem that he had still underestimated the Twin Towers Kingdom.
“You guys sure are ruthless. It’s just a bunch of ordinary materials, yet you’re actually charging me 100,000 Gold.” Shi Feng was flabbergasted at Vico’s quotation.
Before God’s Domain’s first major update took place, grinding for levels was a daily chore for players. After the first major update, however, grinding for levels would become life-threatening.
“That won’t be a problem. Please follow me. I will carry out the procedures for you,” Vico said, and led Shi Feng to the third-floor hall.
Hi all -
This is a progress report on my efforts to equip a WebKit application with
a restricted (and configurable) number of concurrent logins. First I give a
summary, then the details are given below my signature.
Summary: The solution is based on a modification of the
SecurePage/LoginPage example to make use of UserManager, User, and
UserGroup classes. A SecureApplication subclass of Application creates a
UserManager instance for itself with access method
SecureApplication.userManager(), which is needed by SecurePage. I had to
hack AppServer slightly so that its createApplication() method creates a
SecureApplication instead of a plain Application. Doing this by specifying
an ApplicationClass configuration option for AppServer would be insecure,
to say the least, and I couldn't think of a non-hacking approach.
The Session class is used unmodified, but I wish that it had an
isTimedOut() method so that I could test session.isTimedOut() instead of
time.time() - session.lastAccessTime() > session.timeout()
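The wished-for helper could be sketched as below; the Session class here is a minimal stand-in (not WebKit's actual implementation), assuming lastAccessTime() and timeout() behave as used in the expression above:

```python
import time

class Session:
    """Minimal stand-in for WebKit's Session; only the pieces used here."""

    def __init__(self, timeout=3600):
        self._timeout = timeout
        self._lastAccess = time.time()

    def lastAccessTime(self):
        return self._lastAccess

    def timeout(self):
        return self._timeout

    def isTimedOut(self):
        # The helper the text wishes Session provided, wrapping the
        # time.time() - lastAccessTime() > timeout() test in one place.
        return time.time() - self.lastAccessTime() > self.timeout()
```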
Suggestions and criticisms on any of this are welcome.
The scheme seems to work -- I attempt to view an app page, get presented
with a Login Page, log in as one of my "authorized users", and get to the
application page (subclass of my SecurePage, which is a subclass of my
BasePage) in the expected way.
I notice that if, while viewing the app page, I choose "Refresh" in the
browser's menu or toolbar, the question comes up "Re-post form data?" If I
respond "Yes" then the login page appears, and I have to log in once or
twice to get back to the app page. If the session times out, WebKit
generously allows me to bypass all the security and proceed without a
session. I think both of these problems will be solved with a little more
fiddling with the LoginPage and SecurePage logic.
The long version of what I did and found out (so far) is below. This is
only a progress report, so the details may change considerably in the next
few days as I try to get this ready for release.
Victoria BC Canada
ADDING LOGINS TO A WEBKIT APPLICATION (IN PROGRESS)
My home-made application launcher fixes up sys.path and launches
ThreadedAppServer, which is a subclass of AppServer.
AppServer has been hacked so its createApplication() method creates a
SecureApplication instead of a plain old Application.
SecureApplication is a subclass of Application. Its __init__ method creates
a UserManager for itself, with access method userManager().
SecureApplication does not use the UserManager -- it just makes it
available for the SecurePage applet.
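The subclass described above could look roughly like this; Application and UserManager are hypothetical stubs standing in for the real classes:

```python
class Application:
    """Stand-in for WebKit's Application class (hypothetical stub)."""

class UserManager:
    """Stand-in for the user manager described below (hypothetical stub)."""

class SecureApplication(Application):
    """Application subclass that owns a UserManager, as described above."""

    def __init__(self):
        super().__init__()
        # Created once per application; not used by the application itself.
        self._userManager = UserManager()

    def userManager(self):
        # Accessor used by the SecurePage applet to authenticate logins.
        return self._userManager
```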
UserKit's UserManager and other classes could be used, but my project needs
stable classes with different features. For project deadline reasons, I
created three very simple classes that do exactly what I need and can be
brought to a final releasable state quickly. These are:
UserManager - loads and saves the users file, doles out Users and
Groups on demand. The authenticate(username, password) method returns a
User object if the login info is correct, None otherwise.
User - has a username, a password (encrypted upon creation, unencrypted
original discarded), an integer security level which allows the app to
decide what the user is allowed to do, and a groupname, which makes him a
member of a particular group or "pool" of concurrent users.
Group - currently has only a groupname and a maxConcurrentUsers
attribute limiting the number of users of that group who can be logged in
All three of these reside in a package (usr) in my application. The "users"
file containing user ID's, encrypted passwords, security levels, and
groupnames, is also kept in a web-inaccessible directory.
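The three classes above could be sketched as follows. This is an illustrative reconstruction, not the released code; in particular, the _encrypt helper is a stand-in for whatever one-way encryption the real classes use:

```python
import hashlib

def _encrypt(password):
    # Stand-in for the real one-way password encryption.
    return hashlib.sha256(password.encode()).hexdigest()

class User:
    def __init__(self, username, password, securityLevel, groupname):
        self.username = username
        self.encryptedPassword = _encrypt(password)  # plaintext discarded
        self.securityLevel = securityLevel           # e.g. 2 (clerk) .. 5 (admin)
        self.groupname = groupname                   # concurrent-login pool

class Group:
    def __init__(self, groupname, maxConcurrentUsers):
        self.groupname = groupname
        self.maxConcurrentUsers = maxConcurrentUsers

class UserManager:
    """Doles out Users and Groups on demand; authenticates logins."""

    def __init__(self, users, groups):
        self._users = {u.username: u for u in users}
        self._groups = {g.groupname: g for g in groups}

    def authenticate(self, username, password):
        # Returns a User object if the login info is correct, None otherwise.
        user = self._users.get(username)
        if user is not None and user.encryptedPassword == _encrypt(password):
            return user
        return None

    def group(self, groupname):
        return self._groups.get(groupname)
```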
My app's license module specifies an overall limit on the total number of
allowed concurrent logins from all groups combined.
Usage example: A site which has five authorized concurrent logins could set
up two groups - say, 'clerks' and 'wizards', where the clerks group has a
maximum of 3 concurrent logins and the wizards group has 2. Ten users
(say) are registered - eight clerks and two wizards. One of the wizards is
given the highest security level, 5, giving him full admin privileges. The
other has level 4. Two clerks have level 3, and all the rest have security
level 2. With this scheme, both of the wizards are always able to log in
any time, but only 3 of the clerks can be logged in at one time.
My version of the SecurePage class relies on the UserManager, User, and
Group classes described above, but a very similar approach would work with
UserKit. When someone tries to log in,
application.userManager().authenticate(username, password) is called. If a
User is returned, all the sessions in application.sessions() are checked to
see whether all of the slots for that user's group are in use. If not, the
User object is stored in the current session as authenticatedUser, and the
user's groupname is stored in the session as groupname. While searching
all the sessions to determine the availability of logins, sessions which
are timed out are not counted. Whenever a secure page attempts to use a
timed out session, it is (supposed to be) immediately logged out. [One
gotcha: application.sessions() is (sort of) dict-like, not list-like, so
you have to iterate through application.sessions().keys()].
The base page class for all the protected app pages is the modified
SecurePage. The original LoginPage is used pretty much as-is, except for
Victoria BC Canada
import { test } from 'qunit';
import sinon from 'sinon';
import _ from 'npm:lodash';
// Shared test helper: runs each test case through an optional synchronous
// asserter and through an assertion pusher, verifying with sinon spies that
// the pusher forwards the expected arguments (with and without a user message).
export default function testAssertion (testCases, asserter, assertionPusher, message, appContext) {
testCases.forEach((testCase) => {
const testName = testCase.args.map(arg => `"${JSON.stringify(arg)}"`).join(', ');
test(testName, function(assert) {
if (asserter) {
const m = "Testing the asserter";
assert.equal(asserter.apply(null, testCase.args), testCase.result, m);
}
// Testing assertion pusher, no user message
let spy = sinon.spy();
const obj = { push: spy };
const firstArg = appContext ? testCase.args[1] : testCase.args[0];
const secondArg = appContext ? testCase.args[2] : testCase.args[1];
// Padding the args array using the testCase.argsLength value
let args = [].concat(testCase.args);
if (testCase.argsLength) {
args = _.merge( Array.apply(null, Array(testCase.argsLength)), args );
}
assertionPusher.call(obj, appContext, ...args);
sinon.assert.calledOnce(spy);
sinon.assert.calledWithExactly(spy, testCase.result, firstArg, secondArg, message);
// If individual test case description is provided, repeat the previous assertion
// with the description (it's impossible to pass a message into sinon.assert)
if (testCase.desc) {
const m = `Description of the previous assertion: ${testCase.desc}`;
assert.ok(spy.calledWithExactly(testCase.result, firstArg, secondArg, message), m);
}
// Testing assertion pusher, with user message
spy = obj.push = sinon.spy();
assertionPusher.call(obj, appContext, ...args, 'Foo');
sinon.assert.calledOnce(spy);
sinon.assert.calledWithExactly(spy, testCase.result, firstArg, secondArg, `Foo: ${message}`);
// If individual test case description is provided, repeat the previous assertion
// with the description (it's impossible to pass a message into sinon.assert)
if (testCase.desc) {
const m = `Description of the previous assertion: ${testCase.desc}`;
assert.ok(spy.calledWithExactly(testCase.result, firstArg, secondArg, `Foo: ${message}`), m);
}
});
});
}
The Best 10 Wordpress security plugins to protect your Wordpress sites with ease. We did the research for you!
A WordPress plugin will automatically send you an email whenever an administrator logs in to your website. Enjoy.
A WordPress plugin that can be used to disable, enable, and remove certain functions of XML-RPC on your site.
An easy-to-config WordPress plugin that allows you to secure HTTP headers and add cookie flags you prefer.
An easy and lightweight reCAPTCHA plugin that automatically integrates Google's reCAPTCHA into any forms and protects your site from bots, brute-force attacks, spam, and abuse.
The Change My Login provides an easy way to change the default WordPress login URL to protect your website against brute-force attack. How to use it: 1. Download the plugin and upload the zip on the Add Plugins page. 2.…
Login Security Recaptcha is a Wordpress security plugin which displays Google reCaptcha (v2 or v3) on the Login page and comment form to prevent spam and brute-force attack.
A dead simple anti-spam captcha Wordpress plugin that adds a random security code to the login page. Only users who have entered the security code correctly will be allowed to log in to the website.
Better Headers is a Wordpress security plugin created for securing your Wordpress website by setting HTTP response headers without any server-side technology.
Hide WP Login is a Wordpress plugin that protects your Wordpress site from hackers by hiding the login page (wp-login.php).
Login Secure is a Wordpress security plugin used to protect the login page with a unique query string you specify.
A Wordpress plugin for Random File Upload Names that assigns an alternative random name to your file after upload.
A dead simple Wordpress plugin that automatically removes the meta generator tag with WordPress version to protect your website from the hacker.
An easy and useful Wordpress security plugin to protect your WordPress admin area using IP Whitelist and Unique Secure Link.
A minimal Malware Removal plugin that checks the file Integrity in your Wordpress to block malicious files and prevent file modification.
A Wordpress plugin that protects your Wordpress website's login page with a simple PIN. The login page will only appear if the correct PIN is provided.
The WP Tweaks plugin provides a collection of useful tools to tweak your Wordpress with additional performance and security options.
SSL Fixit is a WordPress plugin that quickly resolves the mixed content issue in your SSL encrypted WordPress website by replacing all the http with https. Mixed content issue occurs when some external resources (such as images, stylesheets, scripts) are…
Word Security is a small and easy WordPress security plugin that enable you to secure your WordPress files, folders, and login page. Main Features: Directory Browsing: Enable/disable directory browsing. Protect .htaccess: Enable/disable HTTP Access to .htaccess. Protect wp-config.php file: Enable/disable HTTP…
A WordPress security and performance plugin that can be used to block Brute Force Attacks and DDoS by disabling frontend access to the admin-ajax.php file. How to use it: Download the Admin-AJAX WordPress Plugin. Login to your WordPress admin panel. Upload the plugin.…
He should have chosen DotNetNuke.
The Obama White House has contributed code back to the Drupal community, six months after it made headlines by adopting the open source CMS. Dave Cole, a senior advisor to the CIO of the Executive Office of the President, announced the code release this afternoon during a keynote at the DrupalCon trade show in downtown San …
Proprietary vendors are not going to understand the specialised scaling requirements of individual users as well as the users themselves, assuming the latter have a clue. Contributing patches upstream and maintaining them there reduces the duplication of effort involved in maintaining a local fork.
What to learn about scalability in closed-source or proprietary (yes, there can be a difference) situations?
Licensing models are to be considered, not just ability to handle large databases, user counts, etc. How much did the old CMS system cost per-processor? How much for the clustering add-on? Add-on for database connectivity of your choice? Cost of (probably) running their will-only-run-on-this database? Per-processor and clustering costs for said database? OS licensing, since it only runs in Windows on IIS6 or somesuch restrictions?
Contrast that to an open source solution that can be ran on various platforms and databases of your choosing. I think that scales quite well actually, and we haven't even started talking about capabilities for massive data and user counts....
...any nation that the U.S. has an argument with, and trade barriers. For instance Cuba.
I anticipate that the White House will regret it if the Cuban government web sites are found to be running on software that the White House contributed.
Or, "truth" being what it is in political discourse in the U.S., if it is merely alleged that that is the case. Or if it is pointed out - I suppose correctly - that that -could- be the case.
I myself, indeed, am not taking the trouble to check whether Cuba even respects "intellectual property", and in particular from overseas, or whether all their computers run on one cracked version of Windows 98. (If that's what their hardware supports.)
It's rumour that counts.
I don't think that White House programmers necessarily would feel bad about their software being used by Cuban sysadmins, who probably aren't particularly evil people. But they'd feel bad about embarrassing their boss.
I also wonder whether FOSS going around trade sanctions will at some point be made a reason to close FOSS down. For instance, suppose IBM was told to stop contributing to Linux because their work was supporting enemy states.
Then again, have I just caused a lot of trouble for causes that I like and respect?
Or... suppose that software was published with an open source licence that says you can't use it in a country whose government supports, practises, or uses torture...
...where in the world would you have to take your laptop to in order to run it?
If you are not trolling I think you have fairly rogue problems.
And firstly I would like to point out that the only reason Cuba has remained a communist state is the US trade embargo.
Also most of the state owned cars are US made, must be embarrassing too, but bye now, I think I lost your plot.
Cuba uses Linux, Iran uses Linux, Syria uses Linux... why is it legal for anyone in the U.S. to contribute to Linux, or acceptable to use it?
I use Linux, although not all the time because I have special requirements that it doesn't cover. But I can see "associating with the enemy" being the next campaign after patents against GPL and other free-like-beer software.
The guide will walk you through the steps necessary to build a clean virtual environment for OCW work.
The login information for the VM is always username: vagrant, password: vagrant.
To start the VM build, change to the ocw-vm directory and run vagrant up.
Vagrant will now build your VM and provide you with a simple GUI. Log in using the usual credentials (username: vagrant, password: vagrant) and install the necessary dependencies. After you log in you will need to copy the dependency installation file from /vagrant to your home directory and start the installation.
This process can take a while depending on the quality of your internet connection. Be patient, it will get there eventually.
VM GUI Tips
When you're in the VM's GUI window, you may find that you're unable to use your mouse on your main desktop. To "escape" your mouse from the VM window press Left-Cmd (OS X) or Left-Ctrl (Windows/Linux). If your keybindings are different you may need to check the settings for additional information. Similarly, if the VM window goes to sleep, you can wake it up by putting the window into focus and pressing the Escape key.
The VM build has switched over to using Miniconda instead of the full Anaconda. This helps speed up the build. The below screenshots may look slightly different given that, but all the inputs should be the same. If you run into problems please email the dev@ list.
About half way through the installation you will be prompted for input. The VM build relies on the Continuum Analytics Anaconda Python distribution which requires you to agree to their license and provide some minor configuration parameters.
You should answer yes to their license and use the default answers for the other prompts. After the install is complete, you will be prompted to add the new Anaconda path to your shell's configuration file. Answer yes to this as well. This step is important! If you forget to do this your build will not be able to find the Python dependencies that you installed!
After the installation has finished, restart the VM by going to the terminal where you ran vagrant up and run vagrant halt. We need to restart the VM so the GUI that we installed will kick in. Restart it by running vagrant up again.
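The restart cycle just described, run from the host terminal in the ocw-vm directory, looks like this (a sketch of the session, not literal output):

```shell
cd ocw-vm       # the directory containing the Vagrantfile, per this guide
vagrant halt    # shut the VM down so the newly installed GUI can take over
vagrant up      # boot it again; the login splash screen should now appear
```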
You should now be greeted with a splash screen asking you to log in. Select the vagrant user and log in with vagrant as the password.
Note, if your login simply redirects you back to the login page, click the small Ubuntu icon by the username and select the Xfce Session.
When prompted, select to use the Default Config
Congratulations. The VM is now configured!
Testing the Build
Open a Terminal by clicking Applications Menu > Terminal Emulator. Then run the following:
Once the evaluation finished, you should find a .png in the examples directory if everything ran successfully.
Exporting the Virtual Machine from VirtualBox
Ensure that the VM is shutdown by running vagrant halt or closing the GUI and selecting the Shut Down option.
Open VirtualBox. You should see a VM with a name similar to ocw-vm_default_1400605262050_94035. You can rename it in the settings for the VM.
Delete the default shared folder by Right Clicking the VM > Settings > Shared Folders > Select the vagrant folder and select the delete icon (red minus sign on a folder)
From VirtualBox's settings, select Export Appliance. Select the VM and click Continue. Select somewhere to save it and click Continue, then click Export.
Once this is done, your VM is ready!
Adding a Shared Folder
If you've received an exported version of the OCW VM, you may need to update the shared folder settings so you can access local data on your computer inside of the VM. This section assumes that you're using VirtualBox to host your VMs. If you're not, you'll need to check the documentation for the application that you're using.
You should see something similar to the below image when you've imported your VM.
Right click the VM and select Settings.
Select the Shared Folders Tab.
Most likely your folder list will be empty. Add a new one by clicking the icon with a "+" on a folder. You should see the following window pop up.
Navigate to the desired folder and give it a name. You'll most likely want to leave Read-only unselected so you can read and write to the directory from the VM. You'll also want to select Auto-mount so you don't have to manually mount the folder in your VM every time you start it up.
As a professional blogger, you probably spend most of your days writing new posts and optimizing old ones. This can be pretty exhausting, especially if you have to produce large amounts of content in short amounts of time.
Fortunately, an AI-powered writing assistant can make your job a lot easier. Although machine-generated content isn't a replacement for human ingenuity, tools like Jasper (previously known as Jarvis) can help you write better content, come up with blog post ideas, and much more.
In this post, we'll take a closer look at Jasper and how it works. We'll also help you decide if this is the right tool for you. Let's get started!
What Is Jasper?
Jasper is a service that enables you to generate blog posts, articles, and other types of content. This tool utilizes neural networks to understand what you want to write about, then produces sentences, full paragraphs, and even entire articles based on this information.
If you spend a lot of time working on blog posts, using a service such as Jasper can be a game-changer. To be fair, you shouldn't simply let the tool generate full blog posts and publish them without any changes. Instead, you can use its features to help you improve sentences, come up with ideas on what to say, and answer questions that users might have about specific topics.
On paper, Jasper sounds amazing. However, the best way to judge a tool is by trying it out. That's precisely what we're going to do in the next section.
How to Use Jasper
There are two ways to use Jasper. The first approach involves using the tool's editor, which enables you to write blog posts within the platform. Once you start writing, you can tell Jasper to generate further content automatically. Here's what that looks like in action:
Those first sentences are all Jasper's work. To get the ball rolling, we set a title and a description for our blog post. We also added a couple of keywords, which Jasper has tried to work into the content.
Moreover, you can generate short, long, or medium-sized paragraphs. Jasper will even ask you to make some edits before you can produce further content:
In our experience, Jasper works best if you use it sparingly. If you ever feel stuck while writing, you can have the tool compose the next few sentences and use those as a starting point. In a way, Jasper can act very much like a writing partner, if you don't mind the fact that that partner is a neural network.
If you don't want to start with a blank slate, Jasper also offers content templates. You can choose from a broad range of content types, such as long-form articles, social media posts, and ads:
For example, if you choose the Product Description template, Jasper will ask you for the product's name and what it does. Then, it will generate one or more product descriptions that you can use as starting points:
Templates can be more hit and miss than using Jasper to help you write content on your own. If you look at the examples above, they definitely read like they were written by a computer. Therefore, it's always a good idea to edit and tweak this AI-generated content in order to make it sound a bit more natural.
Should You Use Jasper?
If you only write a couple of blog posts per month, a service such as Jasper can be overkill. It's not cost-efficient to spend $29 per month if you're not going to take advantage of the 20,000 words this service offers you.
On the other hand, if you publish blog posts regularly and you struggle to finish articles or come up with topic ideas, Jasper is more than worth its cost. For example, if you run an affiliate site, Jasper can help you publish content more often and spend less time working on it. Ultimately, that can translate to an increase in traffic and revenue.
Keep in mind, if you really want to get the most out of Jasper, the Starter plan doesn't offer the best value. To get access to unlimited generated content, you'll need to opt for the Pro plan or beyond:
Spending over $100 per month on any tool for your website is a big ask. If you're just launching your site, we recommend that you take some time to create your own content before using machine-generation services.
Machine-generated content has come a long way in the last few years. In fact, many AI writing assistants like Jasper can produce content that reads naturally and looks professional.
In a way, using Jasper feels very much like having a writing partner. You can use it to generate different types of content which you can later edit and build on. Moreover, you can use this tool to explore new topic ideas for your blog.
Do you have any questions about how to use Jasper to generate better content for your website? Let's talk about them in the comments section below!
The Behavior Template demonstrates how you can use the Behavior script to create an interactive Lens without writing code. It comes with several examples, including how to change an image on tap, start particles based on your facial expression, run Tweens, and even call your own custom code.
The Behavior template itself is composed of Helpers that can be added individually to your project. Take a look at each of their own documentation to learn more!
- Behavior: Use a dropdown to choose different triggers, like screen touches and face events, to call responses like enabling/disabling objects, playing animations, and more
- Tween: Animate an object's transform (position, rotation, scale) and other object properties through the Inspector panel
- World Object Controller: Display an object only in the world-facing camera, and allow the object to be manipulated by screen touches
The template comes with examples of how you can create different experiences using Behavior. Select each child of Behavior Examples in the Objects panel, then see its properties in the Inspector panel to learn about each one.
Tip: You can add the Behavior script into your project by selecting the + button in the Objects or Resources panel, and choosing Behavior.
Delay Fade In Image
This example shows how you can use Behavior to display a hint. There are two Behaviors:
- Fade in the hint after 6 seconds
- Hide the hint if the user taps.
Tip: Behavior is just a script that can be added to any object like any other script. Select the object, then in the Inspector panel, choose Add Component, then Script. Then select Add Script and choose Behavior.
In the first example, the hint is faded in using the Tween system. When working with Behavior or other components with references, a quick way to see how it works is to right click a field with a reference and choose Select in the popup menu. For example, right click the Target Object field containing Tap Hint and choose Select; Lens Studio will automatically select the referenced object. We'll input an object with the Tween script attached into the Target Object slot. Make sure the Tween script on the target object has the same name as the Tween Name in this Behavior script.
On Tap: Change Sticker
In this example, you can see how multiple Behavior scripts can be linked together to create more interactive experiences, even a list of them!
The main Behavior in this example is the one found on the On Tap: Change Sticker object. This Behavior responds to Tap Events by calling other Behavior scripts. To communicate with other scripts, this Behavior uses the Send Custom Trigger option.
In addition, the Next In List option is selected so that every time this response is called, the next Custom Trigger in the list is triggered. This is what allows every tap to modify, enable, or disable a different object.
Tip: Behavior scripts can communicate with other Behavior scripts using Custom Triggers. In addition, you can use Custom Triggers to communicate between your own scripts and Behavior.
The response for each Custom Trigger is set in the child objects. In each child object you can see two Behavior scripts (similar to the previous example), both responding to the same Custom Trigger listed in the On Tap: Change Sticker object. One changes the texture of the logo, and the other animates the logo using Tween.
Thus, every time the user taps the screen, the logo's texture changes and a bounce animation plays.
Tip: It is not important that they are set as child objects, as Custom Triggers function globally. In fact, you can have multiple Behaviors respond to the same Custom Trigger, and arrange them anywhere in the scene.
It is important, however, that if you respond to a Custom Trigger in your own script, the Behavior script itself is above your custom script in the Objects panel (as the Behavior script sets up the Custom Trigger system).
On Tap: Show Ring
The On Tap: Show Ring example is very similar to the On Tap: Change Sticker example. The main difference here is that instead of changing the texture of an Image Component, it sets a parameter on a material. Try using what you learned in the previous example to play with this one.
On Camera Flip: Show Hint
This example shows how you can use multiple Behaviors to modify how interactivity should work based on the user's actions.
In this case, you may want to show the user a hint to switch their camera if they haven't swapped it yet. So there are two objects, one showing a hint for each side of the camera.
In addition, there is a second behavior script in each hint that disables that object if the corresponding camera has already been triggered. When an object containing Behavior is disabled, the Behavior itself will no longer run.
Tip: This is a double-edged sword, as you may accidentally disable interactivity without taking this into account.
For example, if you wanted an object to appear and disappear when a head is in the scene, you should NOT put the Behavior on the appearing/disappearing object itself, because when the object disappears, so does the Behavior that would bring it back.
On Tap: Swap through face objects
Each object under Face Interactions is an example of how you can connect your own script with Behavior. This is useful as Behavior can take care of some of the common Lens Studio-specific code, while you focus on the specific logic of the interactivity you want.
For example, in On Tap: Swap through face objects, we wrote a script that enables only one object in a list and disables everything else. Then, we exposed this function through the api property. Finally, we called this script from Behavior using the name of that function.
Warning: make sure the Function Name input field on Behavior is an exact match of the api function in the custom script!
script.api.activateNextItem = activateNextItem;
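As a rough, self-contained sketch of how such a script might be wired up. The item list and names here are assumptions, not taken from the template; in Lens Studio the `script` object and its inputs come from the scripting environment, which we stub out here so the logic stands alone:

```javascript
// Stub of Lens Studio's script object: `api` is where exposed functions go,
// and `items` stands in for a SceneObject[] input. Both are assumptions.
var script = {
    api: {},
    items: [{ enabled: true }, { enabled: false }, { enabled: false }]
};
var currentIndex = 0;

// Enable only the next object in the list, disabling everything else.
function activateNextItem() {
    currentIndex = (currentIndex + 1) % script.items.length;
    for (var i = 0; i < script.items.length; i++) {
        script.items[i].enabled = (i === currentIndex);
    }
}

// Expose the function through the api property so Behavior can call it by name.
script.api.activateNextItem = activateNextItem;
```

Behavior can then invoke `activateNextItem` by entering that exact name in its Function Name field, as the warning above notes.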
Tip: Using this technique you can also quickly change how you want your interactivity to be triggered. Switching between tap and face triggers is as quick as choosing from a dropdown. This allows you to develop your AR experience much faster and iterate through testing!
On Face Event: Throw Particles
This example is similar to the previous. Try taking what you’ve learned previously and playing with this example!
On Tap: Swap through face objects
This example demonstrates another way you might want to use Behavior. In this case, the Behavior deals with the helper parts of your Lens so you can focus on your own Interactivity:
- On front camera show hint: Like before, you can use Behavior to show a hint so users know how to interact with your Lens
- On Lens start enable eye effects: You can also use Behavior to enable the interactivity you want only when a condition is met
- Change liquify based on touch position: This is our actual interactivity logic; in this case it's not connected to Behavior at all!
- Prevent touches from going beyond Lens: Since the interactivity we've defined asks the user to touch the screen, we might want to disable some of Snapchat's default touch behavior so that two things don't happen at the same time
The last example contains interactivity that you might want to do in a Lens facing the world side.
Tip: The world side objects use the World Object Controller, which will disable any child object when you are in the front-facing camera and enable it when the camera faces the rear
On Tap: Swap through world objects
Like On Tap: Swap through face objects, this example demonstrates a Behavior interfacing with a custom script.
On Look At: Change Owl animation
This example demonstrates the use of two Behavior scripts to modify the animation of a 3D object: the first runs when the user is looking at the object, and the second when the user is not.
On Distance Change: Change Kitty animation
This example is similar to the previous one. In addition, however, it responds to the end of one of the animations by playing another animation!
There’s a lot of things you can do with Behavior. If you’re new to building interactivity, it can allow you to put unique interactions together without code. If you’ve been building interactivity, it can allow you to test and modify them faster and more efficiently. Try remixing what you’ve learned in these examples, as well as use them in your next project!
Previewing Your Lens
You’re now ready to preview your Lens. To preview your lens in Snapchat, follow the Pairing to Snapchat guide.
Please refer to the guides below for additional information:
July 2004 - Posts
Well blogged now, but I wanted to mention that Jim has released IronPython and the PPT for his talk at OSCON. I learnt from Sam's post that Jim has joined Microsoft; wondering where Sam got that from, I soon found answers in Jason's post. It's great news that the CLR team has Jim on board, and I will be very interested to see what his work will mean for dynamic languages (such as Python and Perl) on the CLR. My hope is that Jim can now work with the Perl, Ruby and other communities to get these languages running full featured and fast on the CLR. Indeed I really hope Jim will continue his work with the Mono community, as I feel sure the folks there could help in the effort. I would love to see Jim on Channel 9 (hope you're listening Scoble :)
Check out Miguel's comments on the subject (no shame in that cheese burger Miguel ;-) and Edd's cool Mono, IronPython and GTK demo
Jason is looking for other devs, PMs and testers for his team. If I could follow Gudge's lead in living in the UK (Manchester like me) but working for a top class team like the CLR team (in Gudge's case for Mr Box's team), I would be right on it.
Past 2am here; I've been trying out SQL Server 2005 beta 2 for the last couple of hours and wondering (as I did at the start of the year) about a merger of Mono's CLR and MySQL.
Rob blogs about his day-before-the-first-day at OSCON. Great to see OSCON getting underway; really looking forward to seeing some of the stuff come out of the show. A few sessions I wish I could see.
Would have been great to have seen a few more Mono sessions and maybe a session or two from the P.Net and Rotor folks.
Mr Box has news that ICE is now running on the CLR. I will admit I was not really aware of ICE; having grown up on RMI/CORBA-IIOP/DCOM and then web services and .NET Remoting, other brokers and services I didn't cover. It's interesting to read about ICE and how it's compared to CORBA (comparing it to IIOP seems a little odd to me, as CORBA can be used over RMI/sockets and other delivery means). Will be interesting to see what this all means for Indigo.
I have yet to go through the spec in much detail, but the following you may have heard before:
native XML datatypes to the ECMAScript language, extends the semantics of familiar ECMAScript operators for manipulating XML data and adds a small set of new operators for common XML operations, such as searching and filtering. It also adds support for XML literals, namespaces, qualified names and other mechanisms to facilitate XML processing.
Sounds a lot like C-Omega? Maybe that's why Herman Venter is one of the Microsoft folks on the TC. Maybe the work in C-Omega will find its way into the CLI?
The technology that drives what you're reading right now, .Text, has a new name, Community Server :: Blogs, and its creator Scott Watermasysk has a new job. Scott has joined Rob Howard's startup; I think they need another for the fold.
Coding all week on a mission critical app, the kind that can't, just can't go wrong. Tomorrow we go live. Nerves? The sheer man hours mean sleep won't be a problem... and very soon me and my bed will meet :)
Sean has the scoop on a preview release of Blue Dragon's CFMX for .NET. Well done to the team; it now means CFML is running on the CLR and JVM... next stop maybe Parrot. As much as I know (and correct me if I am wrong), it's the first mainstream web app technology to run on both. I will hopefully find some time over the weekend to play with this and C-Omega.
Dare has the scoop: the C-Omega compiler is available for download!! What's C-Omega? XML and SQL query handling all native to C# (remember all the talk of X# and Xen? early projects that led to C-Omega).
Dan has some notes on requests for someone to help lead the Perl 6 development; that is, development of the language and of it running on Parrot. Dan is writing Parrot as the runtime, Larry is designing Perl 6, but someone needs to help Larry's designs happen on Parrot and lead the Perl devs into making it happen. As one of the folks I have muchos respect for, and a thought leader, I think John should do it
Mono's mighty man has some notes on Flex; it's great that he notes the markup for Flex is simpler and the cross-platform abilities of Flex in the Flash player. Great to see. Flex for .NET is on its way Miguel!
Got to admire Scott; he's written a lot of books and articles in his time and is now working towards a fiction title. Like Scott, I wrote a lot when I was young, and it was a dream come true to get my own book out there, non-fiction as it was. The flame to write fiction also, like Scott, still burns away, so hats off to him for giving it a serious shot. There is an old saying of "write what you know" and I am not sure personally I would write about computers. I am sure Scott's idea will be a corker, but the writer in me would fancy a challenge: to write about my other interests, to find drama and excitement in a topic I don't live and breathe 24/7. Maybe that way it could stay fresh, maybe. The biggest challenge facing Scott will be finding a publisher; with his publisher contacts he should find it easier than most new authors of fiction, and his subject material will mean a potential captive audience. However it's still far from easy, and some authors write 10 books before one gets published. As I say, however, you have to admire Scott's ambition to do this; maybe some day I will have a serious go too.
More Posts Next page »
Life at Hashnode Edition: #4
Life at Hashnode is a weekly sneak-peak of literally "Life" at Hashnode.
Hey there! 👋🏻
So it's been a month since I started writing this series. What started as a fun experiment is now the most frequent subject I get DMs about. And yeah, it really amuses me to know that you all wait for new editions of Life at Hashnode every week. 🥺
If any of you are reading an edition of Life at Hashnode for the first time, then a little backstory: Life at Hashnode is an article series that I write every week to share some insider stories of Hashnode. ✨
You can read all the past editions.
What this week looked like?
This week was FUN! (short answer).
We got a really amazing response for the Bootcamp.
And to be honest, this week's sessions by Colby, Sultan, and Annie blew our minds too. All of us are eagerly waiting for Samson's session that will be happening today.
We are working on some super exciting features
The whole Hashnode dev team had been busy this week working on some amazing features that we are shipping out starting next week. The goal is to support dev creators on the platform and also enhance the overall user experience with Hashnode.
More details about it will be out soon.
Also, it was Sandeep Panda's birthday yesterday!
Drop wishes for him in the comments after you are done reading this whole article! 😌
📌 Sandeep Panda reminder for you to charge your laptop and send us your birthday treat.
Annnnnd, this week Victoria Lo came for the Happy Hours session 🍺 , and it was a blast 🔥
Those who don't know - Hashnode Happy Hours is Hashnode's new initiative. We talk to the Hashnode community every Thursday about things around Tech, Careers, Blogging, and simply ANYTHING and EVERYTHING the community is interested in talking about over beverages and Snacks.
Our main aim behind doing these sessions is to understand our community better and chill with them.
If you want to chip in upcoming sessions, you can join Hashnode's Discord channel - discord.gg/HNwmsB84S6
We do these every Thursday.
Coming back to the theme of this edition - Side Projects and Hobbies.
For this edition, I thought of sharing with you all the Side Projects and Hobbies of all the Hashnode team members.
Starting off with Catalin Pit
Well, he owns a Quad bike. 🥺🥺
I personally don't know a cooler hobby than this. 🥲
Sandeep and his Jawa bike
Sandeep loves to travel over the weekends.
And hiring for Hashnode is his side project. 😂
Well, answering what side projects and hobbies I am involved with is always tough for me.
It's like I have this Metaphorical stove in my head. Where I always keep 4-5 burners ON. Some projects are boiling, some are on simmer, and some have their burner switched off so that they can settle.
(I have no idea how much these lines made sense to you. 😂)
If it did, you can google the term Multipotentialite and read more about it. In a nutshell, it's someone having multiple creative pursuits.
But yeah, I tried to pick my top 3 side projects and hobbies that I am most involved with these days.
- I help in running a Product community called The Product Folks.
- I learn to make animation movies and Games using Unreal Engine over the weekends. This is the biggest project I have pulled off so far in it👇🏻
- And I PAINT! I learned how to paint before I learned to write. So this is my longest-lived hobby.
Meme Mirza's Hobbies and Side projects
Mohd Shad Mirza shares some hobbies in common with me, like his love for Art and Literature.
He sketches really well. ✨
You can check out more on his Instagram page.
Also, he loves to Skate and Read. 📚
He also has a mentorship program as a side project named TheNextBigWriter.
Edidiong Asikpo loves 🗺️✈️🧳
Edidiong, like every rich person, loves to travel and fine dine. 😂
Also, she loves watching movies 🍿 and hitting the gym.
Vamsi Rao's planokay
As a side project, Vamsi is working on Planokay, a collection of mini side projects he has worked on with his friend. And in parallel, he writes about Indie Hacking and random observations on Vamsirao.com.
Hobbies: Watching TV Series, Playing tennis, listening to podcasts
Girish Patil's hobbies aka Adventures of Girish.
- Girish, of course, loves Travelling.
- He used to sketch a lot, now he tries to paint too.
- And he loves to 🏊♂️
Eleftheria Batsou's hobbies and side projects both revolve around Content Creation
She runs her Instagram page named elef_in_tech and posts value bombs there consistently.
And her YouTube channel is SUPER awesome too.
Syed Fazle Rahman reads 📚
So Fazle's hobbies revolve around reading candidates for hiring and reading books.
He's currently reading Rework by David Heinemeier Hansson and Jason Fried and recommends everyone to read it.
He tries to relax and get some sleep in his leisure hours (working across so many different time zones is not that easy 🙇♀️).
Woh! this took time to write.
But yeah, it was fun.
See you all next week! 👋🏻
Meanwhile, let me know for sure what you wish to read next under this series.
A Scalable Vector Graphic (SVG) is a unique type of image format. Unlike other varieties, SVGs don't rely on individual pixels to make up the images you see. Instead, they use 'vector' data.
By using SVGs, you get images that can scale up to any resolution, which comes in handy for web design among plenty of other use cases. In this article, we’ll ask the question: What is an SVG file? We’ll then teach you how to use the format.
Let’s get to it!
What is an SVG File?
SVGs are graphics built using vectors. For the uninitiated, a vector is an element with a specific magnitude and direction. In theory, you can generate almost any type of graphics you want using a collection of vectors. Take this image of a blue rectangle with a black border and shadow, for example:
This is another kind of image file called a Portable Network Graphic (PNG), used for illustrations and drawings. If you wanted to replicate something similar using vector graphics, you'd need to generate it with XML code (the same format used for sitemaps). The following code could achieve a similar result:
<?xml version="1.0" standalone="no"?>
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200" version="1.1" baseProfile="full">
  <rect x="0" y="0" width="60" height="60" style="stroke: blue;"/>
  <rect id="myRect" x="25" y="25" rx="0.6" ry="0.6" width="150" height="150" fill="blue" stroke="black" stroke-width="8"/>
</svg>
In theory, if you take this code and drop it into an HTML file, you’ll see a similar set of rectangles to the PNG – that is, as long as the browser you use supports SVG files. Although both images look the same, SVG files offer a whole host of benefits that other formats don’t. For example, SVGs are capable of retaining image quality as they scale up or down.
If you keep zooming in on the PNG rectangle, you’ll notice its quality begins to downgrade at some point. With more complex pixel-based graphics, the degradation becomes evident much faster. However, SVGs look good at practically any resolution.
Why Use an SVG File?
Many websites use formats such as PNG and JPEG almost interchangeably. SVGs aren’t quite as versatile, though. If you try to re-create a complex photograph using vectors, you’ll usually end up with massive and unusable SVG files.
The SVG format is a fantastic option for a whole set of other scenarios, though:
- Logo design. Since you’ll probably re-use logos across websites and social media, using SVG resolves any potential scalability issues.
- Diagrams. SVGs are a perfect match for diagrams and any other kind of illustration that relies on plain lines.
- Animated elements. You can use CSS to animate SVGs, which makes them a useful component in website design, particularly for microinteractions.
- Charts and graphs. You can use SVGs to create scalable graphs and charts that support animations.
Since SVGs use the XML format, this also makes them both searchable and indexable. Screen readers can interpret SVG files as long as you use the correct accessibility tags.
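For instance, here is a minimal sketch of those accessibility tags (the ids and descriptive text are invented for illustration): a `role="img"` attribute plus `aria-labelledby` pointing at `<title>` and `<desc>` elements gives screen readers something meaningful to announce.

```
<svg xmlns="http://www.w3.org/2000/svg" role="img"
     aria-labelledby="logoTitle logoDesc" viewBox="0 0 100 100">
  <title id="logoTitle">Example logo</title>
  <desc id="logoDesc">A blue square with rounded corners.</desc>
  <rect x="10" y="10" rx="8" ry="8" width="80" height="80" fill="blue"/>
</svg>
```

Without the title and description, most screen readers would skip the graphic entirely or announce only the filename.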
Finally, SVG files tend to be much smaller than high-resolution equivalents in other formats. On paper, this means you may be able to cut down some of your page sizes and decrease loading times. However, unless you plan to convert most of your images to SVGs, the performance increase will probably be minimal.
How to Create an SVG File (2 Ways)
There are two approaches you can take when it comes to SVG files. You can create them from scratch or take an existing image and convert it. Let’s start with the manual method.
1. Create an SVG File Manually
Creating an SVG file usually doesn’t involve you typing out vector information as we did earlier. That was just an example to show the concept. Instead, you create SVGs like any other graphics – by using a design program and saving the file out as an SVG. Many modern graphic design tools support SVGs out of the box. Some top options include:
The last two options in this list are open-source solutions. This makes them a great option for experimenting with creating SVGs without paying for premium software. In fact, they may be all you need.
If you don’t have any experience with graphic design, creating your own logos or other elements for your website will be a challenge. In this case, your best bet will be to take existing images and convert them into SVGs.
2. Convert Existing Images Into SVGs
There are a lot of free tools you can use to convert images from other formats into SVGs. Most of the software we mentioned in the last section enables you to open your images and save them as SVG files.
If you don’t want to download any software, you can also use online conversion tools – and there are plenty of services you can turn to. One example is Vector Magic, which you can use to convert all manner of filetypes into SVGs:
We like this particular tool because it shows you a preview of your SVG file before you download it. You can also use a built-in editor to make small changes and corrections before downloading the file:
In our experience, most SVG converters offer results of similar quality. For the best possible results, the converter you use doesn’t matter as much as the images you select.
As a rule of thumb, it only makes sense to use the SVG format for ‘simple’ images – that is, images with defined borders and clean lines. The more complex the image, the more likely it is you’ll end up with a massive SVG file that’s a chore to edit manually or animate.
How to Use an SVG File (In and Out of WordPress)
SVGs aren’t all that hard to use. Adding an SVG file to your website is as easy as taking its code and pasting it within an HTML document wherever you want the image to go.
If you and your site’s visitors use browsers that support SVG files (and most do these days), they’ll be able to see the element. Animating SVGs is, of course, trickier since it requires the use of CSS.
The process changes if you’re using WordPress, though. The Content Management System (CMS) doesn’t support SVGs out of the box. If you want to enable SVG support so you can upload files directly into your website, you’ll want to use a plugin such as Safe SVG:
It’s also possible to enable SVG support in WordPress manually, but the process is much more involved. In this case, using a plugin is the safer option.
Adapting your website to use SVG files is much easier than you might imagine. The real challenge lies in designing SVGs from scratch or choosing the right images to convert to the format. Fortunately, there are plenty of tools you can use to do both.
Some great options include Adobe Illustrator, InDesign, and GIMP. Using those tools, you can create and convert existing images into SVGs. If you’re using WordPress, you can upload those SVGs using the Safe SVG plugin, then have fun animating them.
Do you have any questions about how to use SVG files? Let’s talk about them in the comments section below!
Article image thumbnail by VectorsMarket / shutterstock.com
Some time ago, I wrote an introductory post about bitwise operations in SQL Server. I had fully intended on writing a follow-up to that. Alas the opportunity has passed for the idea I was working on back then.
As luck would have it though, I encountered a new opportunity to share something on this topic. This one came to me by once again helping out in the forums. And, since I worked it out, I will be using the same problem posed in the forum and the solution I proposed.
First we need a little setup. Let’s create a simple table and populate that table with some data.
CREATE TABLE ColorPlate (
    ColorID INT PRIMARY KEY IDENTITY(1,1),
    ColorPlate VARCHAR(10),
    ColorType INT
);

INSERT INTO ColorPlate (ColorPlate, ColorType)
SELECT 'Red', 1 UNION ALL
SELECT 'Blue', 2 UNION ALL
SELECT 'Yellow', 4;
As I said, this setup is rather simple. The solution is not much more complex. However, before we get to the solution, we need to know what we need the solution to do. From this table, I need to be able to determine the primary colors that make up a different color based on input of an ID relating to that color. I know. I know. We don’t have all of the colors and their ColorTypes presented to us at this point – but let’s just go with it for a bit. I would imagine that the other colors and the number assigned to their colortype would be populated at some other time.
For now, we are only working with seven color variations – so any number from 1-7 is a valid input. How do we find all of the colors that are required for the number that we input? Well, we use some smoke and mirrors. Just kidding. Seriously though, we use bitwise operations as well as a neat trick called “cross apply.”
DECLARE @ColorType INT = 3;

SELECT cp1.*
FROM ColorPlate cp1
CROSS APPLY ColorPlate cp2
CROSS APPLY ColorPlate cp3
WHERE cp1.ColorType & cp2.ColorType & cp3.ColorType & @ColorType <> 0
ORDER BY cp1.ColorID;
Do you see what is being done there? I have known values in this table of 1, 2, and 4. I know that 7 is the max number I am allowing for input at this time. Because of that, I know that I need three values in order to arrive at a value of 7. Due to this requirement, I know I must CROSS APPLY the ColorPlate table twice beyond the first SELECT from it. That will permit me to combine three values from the ColorPlate table.
Now that I have access to three possible values, I need to compare them using the bitwise AND operator, denoted by an ampersand (&). Note that the WHERE clause checks each of the three table references as well as the variable, and makes sure that their bitwise AND is not 0. Pretty slick, eh?
Let’s put it to action. If I run the above query with a value of 6 for the @ColorType variable, I will get a two-record result set. The results returned would be the primary colors for green (which are Blue and Yellow). If I use 7 for that same variable, I will get a three-record result set, which would include Red, Blue, and Yellow.
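If you want to sanity-check the decomposition outside of SQL Server, the same membership test can be sketched in a few lines of Python. The color flags here mirror the ColorPlate table above; this is just an illustration of the bitwise logic, not part of the T-SQL solution:

```python
# Primary colors and their bit flags, mirroring the ColorPlate table above.
color_plate = {"Red": 1, "Blue": 2, "Yellow": 4}

def primary_colors(color_type):
    """Return the primary colors whose flag is set in color_type."""
    return [name for name, flag in color_plate.items() if flag & color_type]
```

Calling `primary_colors(6)` keeps only Blue (2) and Yellow (4), since `1 & 6` is 0, which matches the result set the query returns for @ColorType = 6.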
This was a rather simple solution and scenario for a bitwise operation. There are plenty of other examples out there of how to use these types of solutions. Some more elaborate than others – but many good examples nonetheless.
I am interested in finding more solutions that involve these types of operations. Who knows, maybe I will even be able to remember the neat stuff I learned while writing the last article on the topic and be able to put that up before too long.
/**
 * Book Actions
 *
 * These control the navigation through a book, through pages and at different orientations.
 */
import _ from 'lodash';
import { addImageSource, resetWorld, goHome, IMAGE_LOADING_STATE } from './viewerActions';

export const BOOK_ADD = 'BOOK_ADD';
export const BOOK_TO_PAGE = 'BOOK_TO_PAGE';
export const BOOK_TO_START = 'BOOK_TO_START';

// The values of these constants are quite important!
// Amount of pages to move by when flipping.
// Amount of pages that are on screen.
// Page number for first page.
// etc..
export const ORIENTATION_LANDSCAPE = 2;
export const ORIENTATION_PORTRAIT = 1;

export const PAGE_NEXT = 'PAGE_NEXT';
export const PAGE_PREV = 'PAGE_PREV';
export const PAGE_RANGE = 'PAGE_RANGE';

let orientation = ORIENTATION_PORTRAIT;

export function getOrientation() {
  return orientation;
  // @todo fix bugs with loading of thumbnails side by side before re-enabling this.
  // return Math.abs(window.orientation) === 90 ? ORIENTATION_LANDSCAPE : ORIENTATION_PORTRAIT;
}

export function getBookFromState(state) {
  return state.book.currentBook;
}

/**
 * State machine
 *
 * This is a performance tweak that will stop the PAGE_RANGE event
 * being fired more than once per second. This was for GA event
 * tracking.
 *
 * It is not the most elegant design, and breaks the rules. Could use
 * some TLC to improve.
 */
class State {
  static from;
}

const setSourcePage = _.debounce((from) => {
  State.from = from;
}, 1000, {
  leading: true,
  trailing: false
});

const setTargetPage = _.debounce((dispatch, to) => {
  if (State.from !== parseInt(to, 10)) {
    dispatch({ type: PAGE_RANGE, meta: { from: State.from, to } });
  }
}, 900, {
  leading: false,
  trailing: true
});

export const toPage = (page) => (dispatch, state, viewer) => {
  if (
    page !== state().book.startPage &&
    state().book.currentPage === page
  ) return;
  // Set `from` with debounce of 1s
  setSourcePage(state().book.currentPage);
  // Set `to` with debounce of 0.9s
  setTargetPage(dispatch, page);
  // Add loading state.
  dispatch({ type: IMAGE_LOADING_STATE });
  // Reset world to begin with
  dispatch(resetWorld());
  // Grab orientation and book object.
  const orientation = getOrientation();
  const book = getBookFromState(state());
  // Add first page (imageSource).
  const firstPage = book.getPageImage(page);
  dispatch(addImageSource(firstPage.getSource(), 0, 0, firstPage.getWidth()));
  // If we are in landscape, and not on the cover page...
  if (orientation === ORIENTATION_LANDSCAPE && page !== 0) {
    // ...add second page (imageSource with offset).
    const secondPage = firstPage.fitTo(book.getPageImage(page + 1));
    dispatch(addImageSource(secondPage.getSource(), firstPage.getWidth(), 0, secondPage.getWidth()));
  }
  // Dispatch updated page number.
  dispatch({ type: BOOK_TO_PAGE, payload: { page } }); // Reducer will then update annotation based on this event.
};

export const nextPage = () => (dispatch, state) => {
  dispatch({ type: PAGE_NEXT });
  // We only move one page regardless if we are on the cover.
  if (state().book.currentPage === 0) {
    dispatch(toPage(1));
  } else {
    // Move one or two pages depending on orientation.
    dispatch(toPage(state().book.currentPage + getOrientation()));
  }
};

export const prevPage = () => (dispatch, state) => {
  dispatch({ type: PAGE_PREV });
  // If we are on the second page, or first page...
  if (state().book.currentPage <= 1) {
    // ...go to first page.
    dispatch(toPage(0));
  } else {
    // Go to previous page.
    dispatch(toPage(state().book.currentPage - getOrientation()));
  }
};

// Go back to start.
export const resetToStart = () => (dispatch, state) => {
  dispatch({ type: BOOK_TO_START });
  dispatch(toPage(state().book.startPage));
};

// Add book, wrapped up in a promise.
export const addBook = (bookPro, startPage = 0, coverPages = 1) => (dispatch) => {
  Promise.resolve(bookPro)
    .then((book) => {
      dispatch({ type: BOOK_ADD, payload: { book, startPage, coverPages }, meta: book.__META__ });
    })
    .then(() => {
      dispatch(toPage(startPage));
      // Remove once reset is done.
      // dispatch(resetAfterSeconds(10));
    });
};

export function turn() {
  orientation = orientation === ORIENTATION_LANDSCAPE ? ORIENTATION_PORTRAIT : ORIENTATION_LANDSCAPE;
  // Remove the below for device..
  // @todo see orientation above for explanation.
  // const orientation = getOrientation();
  return (dispatch, state) => {
    if (state().book.currentPage === 0) {
      dispatch(toPage(0));
    } else {
      // This will figure out which pages are to be loaded based on orientation.
      // Can't remember how it works!
      dispatch(toPage((Math.ceil(state().book.currentPage / orientation) * orientation) - (orientation - 1)));
    }
  };
}

// @todo Again another use-case for global state, this should be moved to a redux-friendly approach.
let timeout;
export function resetAfterSeconds(seconds) {
  // Clear timeout
  // Set timeout
  // Reset in timeout
  // Create new resetAfterSeconds(x) in timeout fn.
  return (dispatch, state) => {
    clearTimeout(timeout);
    timeout = setTimeout(() => {
      dispatch(resetToStart());
      dispatch(resetAfterSeconds(seconds));
    }, seconds * 1000);
  };
}
JEgg is a framework designed to reduce the complexity and cost of developing robust, multithreaded Java applications. Simpler code is faster to get into production, and saves development dollars.
Effective use of the Active Object design pattern, in which each active object has its own logical thread of execution, can facilitate the development of simpler, more robust multithreaded applications. JEgg provides a framework that supports this pattern and takes it a step further by supporting asynchronous, message-based communication between active objects.
The message-oriented aspect is essential to writing better code. Passing a message to an object is much safer than calling a method on it, and message-oriented objects are intrinsically loosely coupled and highly cohesive. Moreover, the restriction to message-passing active objects is really no restriction at all; any Java object can be a message. The central difference is that an object's public interface is really the set of message types that it can "handle" instead of its public methods.
Multithreaded development in terms of JEgg active objects is much simpler than "straight Java" because the framework allows each active object to be implemented as if it were executing on its own distinct thread, without using Java's low-level synchronization primitives directly.
Design of a complex application is never easy, but the execution independence of JEgg active objects, together with one-at-a-time message handling semantics, dramatically simplifies multithreaded application development without reducing the sophistication that you require. In fact, higher levels of sophistication become cost effective because your code is more focused.
The essential steps in constructing a JEgg-based application are:
Designing an application in terms of messaging active objects may require you to think a little differently, but it should quickly feel natural since it's completely object-oriented. Additionally, the JEgg framework includes facilities to make the implementation easy so that you spend more time writing application code and much less time writing "infrastructure" and error-handling code. Error handling is always necessary - no framework can avoid that - but the error handling that's left for you to implement should be at the application level.
JEgg objects are called eggs because their external aspect is featureless, like an egg. They have essentially no program API (public methods) because they aren't needed - eggs only communicate by sending and receiving messages. Each message is delivered by the framework to the egg it's addressed to and handled by your application logic. One-at-a-time message semantics is enforced by the framework which ensures that a given egg is not delivered its next message until it has finished handling the current message. The framework provides a way for an egg to send a message in response to the current message without having to explicitly reference the egg that sent the current message being handled. Also, a mechanism is supported to allow message "broadcast" to a group of eggs.
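The one-at-a-time delivery semantics described above can be sketched with a queue and a dispatcher thread. The sketch below uses Python rather than Java for brevity, and the names (`Egg`, `send`) are illustrative only, not JEgg's actual API:

```python
import queue
import threading

class Egg:
    """Minimal active object: messages are handled one at a time
    on the egg's own dispatcher thread."""
    def __init__(self):
        self._inbox = queue.Queue()
        self.handled = []
        threading.Thread(target=self._dispatch, daemon=True).start()

    def send(self, message):
        self._inbox.put(message)   # non-blocking for the sender

    def _dispatch(self):
        while True:
            message = self._inbox.get()   # next message, one at a time
            self.handled.append(message)  # "handle" the message
            self._inbox.task_done()

egg = Egg()
for n in range(3):
    egg.send(n)
egg._inbox.join()  # wait until every queued message has been handled
```

Because a single dispatcher thread drains the queue, the egg never handles two messages concurrently and receives them in the order they were sent, which is the property that makes this style of design simpler to reason about.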
The physical artifacts of a JEgg application are the application eggs, the message types they exchange, and whatever other passive classes your application needs. Relationships between the eggs are expressed not in code but in the JEgg application descriptor which is a simple XML file that essentially enumerates the application eggs and describes which eggs will exchange messages with which other eggs. The framework also allows these relationships to be specified dynamically, or not at all. Eggs can register themselves with a built-in naming service that other eggs can use to look them up in order to send them messages.
The XML descriptor also allows you to assign eggs to one or more message dispatchers. In JEgg parlance, a message dispatcher is called a "basket" (really, what else would you call it?). Sometimes you can put all of your eggs in one basket, in which case they will all execute on the same physical thread, having their messages delivered by the same message dispatcher. Eggs that can take a long time to handle a message should be assigned to their own individual message dispatcher, or basket, but this assignment is made in the application descriptor, not code, so it's easy to change. Often, though, the application will have eggs that execute relatively infrequently, and those eggs can certainly all go in the same basket.
During execution, the message dispatcher assigned to an egg queues incoming messages for that egg and delivers them one at a time, using reflection to invoke the egg's overloaded message-handling methods.
So, where are the threads? They are the message dispatchers that deliver the messages to your eggs, one at a time. Conceptually, each egg executes completely independently of all other eggs, and this dramatically simplifies application design. Behind the scenes, Java threads are assigned to the message dispatchers that deliver messages to the eggs, and the application descriptor allows you to assign eggs to the same dispatcher or a different one. Consequently, the application's threading model can be tuned entirely through configuration, without changing code.
- [Feature] #51: Add option to specify target key in
- [Feature] #52: Better support of
- [Bug]: Prevent Twisted’s log.err from quoting strings rendered by
- [Bug]: Allow empty lists of processors. This is a valid use case since #26 has been merged. Before, supplying an empty list resulted in the defaults being used.
- [Bug]: Tolerate frames without a
- [Feature] #26: Allow final processor to return a dictionary. See Adapting and Rendering.
- [Feature]: Officially support Python 3.4.
- [Feature]: Drop support for Python 3.2. There is no justification to add complexity for a Python version that nobody uses. If you are one of the 0.350% that use Python 3.2, please stick to the 0.4 branch; critical bugs will still be fixed.
- [Feature]: Test Twisted-related code on Python 3 (with some caveats).
- structlog.PrintLogger is now thread-safe.
- [Feature] #22: Add
- [Feature] #28: structlog is now dually licensed under the Apache License, Version 2 and the MIT license. Therefore it is now legal to use structlog with GPLv2-licensed projects.
- [Feature] #19: Pass positional arguments to stdlib wrapped loggers that use string formatting.
- [Feature] #42: Add
- [Feature] #44: Add
- from structlog import * works now (but you still shouldn’t use it).
- [Bug] #8: Fixed a memory leak in greenlet code that emulates thread locals. It shouldn’t matter in practice unless you use multiple wrapped dicts within one program, which is rather unlikely.
- [Bug]: Various doc fixes.
- [Bug]: Don’t cache proxied methods in structlog.threadlocal._ThreadLocalDictWrapper. This doesn’t affect regular users.
- [Feature] #5: Add metadata (e.g. function names, line numbers) extraction for wrapped stdlib loggers.
- [Feature]: Allow the standard library name guesser to ignore certain frame names. This is useful together with frameworks.
- [Feature]: Add structlog.processors.ExceptionPrettyPrinter for development and testing, when multiline log entries aren’t just acceptable but even helpful.
- [Feature] #12: Allow optional positional arguments for structlog.get_logger() that are passed to logger factories. The standard library factory uses this for explicit logger naming.
- [Feature] #6: Add structlog.processors.StackInfoRenderer for adding stack information to log entries without involving exceptions. Also added it to the default processor chain.
- [Bug]: Fix stdlib’s name guessing.
- [Feature]: Extract a common base class for loggers that does nothing except keep the context state. This makes writing custom loggers much easier and more straightforward.
- [Feature]: Allow logger proxies that are returned by structlog.wrap_logger() to cache the BoundLogger they assemble according to configuration on first use. See Performance and the cache_logger_on_first_use of
- [Feature]: Add Twisted-specific BoundLogger that has an explicit API instead of intercepting unknown method calls.
- structlog.ReturnLogger now allows arbitrary positional and keyword arguments.
- [Feature]: Add Python Standard Library-specific BoundLogger that has an explicit API instead of intercepting unknown method calls.
- [Support]: Greatly enhanced and polished the documentation and added a new theme based on Write The Docs, requests, and Flask. See License and Hall of Fame.
- [Feature]: Allow for custom serialization in
- [Feature]: Enhance Twisted support by offering JSONification of non-structlog log entries.
- structlog.PrintLogger now uses proper I/O routines and is thus viable not only for examples but also for production.
- [Feature]: Add key_order option to structlog.processors.KeyValueRenderer for more predictable log entries with any dict class.
- [Feature]: Promote to stable; henceforth a strict backward-compatibility policy is in effect. See How To Contribute.
- [Feature]: Initial work.
Welcome to our Learning CentreUse our online documentation as a reference book to answer your questions.
Home automation requires, first and foremost, a stable system. Unfortunately, writing software and the various changes made to it over time can generate unpredictable errors. For this reason we have divided the server into two parts: a base component that is modified as little as possible, so as to keep the system very stable, and a second component that expands the functionality of the system.
We have therefore created plugins (modules) that let us add the functionality a customer requires only when it is actually needed. Only when a plugin is considered sufficiently stable is it moved into the main module of the software, thus minimizing the possibility of errors when additions or updates are made.
The plugins are components that can be used in a project but must be installed separately on the server using the dedicated function. There are two types of plugins: those configured through a web page and those used within the EVE Manager configuration software. To make either type usable, you need to install a single plugin called EVE Logic on the server; the others can then be installed one by one according to the needs of each project.
This distinction allows the end user, via web pages, to configure aspects of the project (such as managing the irrigation zones of a garden) from any browser, without needing access to the entire project through the EVE Manager configuration software. The plugins that are part of the main EVE Logic module (And-Or, If…Then, Calculator, Charts, Linker, Signal, Logger, Sunrise-Sunset, etc.) are available to the installer, who can choose to use them only within EVE Manager. EVE Manager is designed to be modular and highly customizable. Plugins were also born to allow other companies to independently create specific control algorithms or handle proprietary protocols, using EVE Suite as the basis for developing their own applications.
EVE Logic installation
In order to actually use the plugins present in EVE Manager, and the plugins configured via web page, you first need to download the main EVE Logic module to your PC and then carry out the installation.
Once you have downloaded and installed the EVE Logic module, the plugin components available in the EVE Manager component library will be up and running. Similarly, the plugins configured via web page are activated only after the installation of the EVE Logic module and, in addition, their own specific installation.
Follow the EVE Logic installation tutorial in order to use the available plugins of the EVE system.
Web Pages configuration
Universal Gateway acts as a real bridge, allowing you to combine two devices using different protocols so that executing a command on one device also executes it on the other, and vice versa (unidirectional or bidirectional logic).
Light Saver has been developed to avoid energy waste. In particular, this plugin concerns lights, but also any device which needs to be turned off. Light Saver allows you to set an activity timer for each switch in your project, as well as fading values.
Command Shortcut has been developed to simplify and speed up the way you interact with the devices that control your system. You can now act on them without opening the remote controller app, just by using a safe link to a web page.
Color Sequencer allows you to easily and quickly set the perfect color sequences for your RGB lights. Set unlimited color sequence configurations for different RGB lights and choose which actions will turn the RGB light color sequences on.
[Releasing] Meeting minutes: Qt release team meeting 29.09.2015
jani.heikkinen at theqtcompany.com
Wed Sep 30 11:39:41 CEST 2015
Meeting minutes from Qt Release Team meeting 29th September 2015
Qt 5.5.1 status
- New blocker reported. Fix already available
-> New packages needed.
- Will be tight to get release out during this week, let's see (we are still trying)
Qt 5.6 Beta status
- First enterprise binary snapshot already created
* Linux one useless, mac seems to work somehow, windows ones under work
- LGPL binary snapshot coming soon as well
Next meeting Tue 13th Oct 2015 16:00 CET
Irc log below:
[17:04:10] <jaheikki3> akseli: iieklund: kkoehne: thiago: fkleint: ZapB: tronical: vladimirM: aholza: peter-h: mapaaso: ankokko: fkleint: carewolf: fregl: ablasche: ping
[17:04:19] <ankokko_> jaheikki3: pong
[17:04:28] <fkleint> aheikki3: pong
[17:04:32] <akseli> jaheikki3: pong
[17:05:18] <jaheikki3> Time to start qt release team meeting
[17:05:23] <jaheikki3> On agenda today:
[17:05:28] <jaheikki3> qt 5.5.1 status
[17:05:34] <jaheikki3> qt 5.6 beta status
[17:05:45] <jaheikki3> Any additional item to the agenda?
[17:06:57] <carewolf> pong
[17:07:18] <jaheikki3> Let's start from Qt 5.5.1 status
[17:07:35] <jaheikki3> New 'rc' under testing
[17:08:28] <jaheikki3> Unfortunately it isn't final one: New blocker reported (QTBUG-38481), fix under integration atm
[17:08:40] <jaheikki3> When integration succeefd
[17:08:59] <jaheikki3> new qt5.git integration + new packaging round needed
[17:09:27] <jaheikki3> --> Will be really hard to get qt5.5.1 release during this week
[17:09:43] <jaheikki3> But we are still trying, let's see
[17:09:58] <jaheikki3> Any comments / questions?
[17:12:26] <jaheikki3> Ok, then 5.6 beta status
[17:12:55] <jaheikki3> first enterprise binary packages created for linux & mac
[17:13:16] <jaheikki3> binaries are from new CI
[17:13:43] <fkleint> *suspense*
[17:13:46] <fkleint> WINdows upcoming?
[17:14:00] <jaheikki3> Linux one is useless, new ci has too old icu
[17:14:08] <jaheikki3> mac one seems to work somehow
[17:14:36] <jaheikki3> Windows ones under work but some work still needed
[17:15:16] <jaheikki3> LGPL packages will be created as soon as possible as well, hoping we could get first ones quite soon
[17:15:44] <fregl> jaheikki3: I have win 10 / msvc 2015 machines now
[17:16:01] <fkleint> Yippie..packages?
[17:16:02] <fregl> so I hope in one or two days we'll have all needed windows configs running, maybe even today
[17:16:22] <jaheikki3> fregl: Geat!
[17:16:30] <fregl> fkleint: the packages follow automatically, so yes, as soon as the builds are through iieklund_ can run his tool to upload the binaries
[17:16:39] <fregl> fingers crossed of course
[17:17:11] <fregl> we also added the missing android configs, so there is only one rhel config not up to date when it comes to ICU, that will be done this week too, so right now I'm cautiously optimistic
[17:17:55] <fregl> this time we'll really need to do a good testing job on 5.6, so we compile with the right DB drivers and such everywhere
[17:18:11] <fregl> but we'll only have to fix it once and hopefully won't have regressions there again
[17:19:41] <jaheikki3> fregl: Yeah, my intention is to emphasize that testing need when sending information about first snapshot (when available)
[17:21:15] <jaheikki3> Any other comments / questions?
[17:22:22] <fkleint> 5.6 will hopefully have RTA in the loop?
[17:24:00] <jaheikki3> fkleint: Yes, first run already done
[17:26:43] <fregl> jaheikki3: sweet :)
[17:27:05] <jaheikki3> Ok, that was all at this time. let's end this meeting now and have new one next Tue at this same time
[17:27:15] <fkleint> Qt WS
[17:27:19] <fkleint> no point, I guess?
[17:27:54] <jaheikki3> fkleint: ahh, true. Lets have new one ather two weeks then
[17:29:30] <jaheikki3> OK, that was all. Thanks for your participation. Bye!
[17:29:42] <fkleint> bye
[17:29:45] <ankokko_> bye
Registering Concrete Types That Implement Variant Generic Interfaces With Autofac
Consider the following structure as a registration subject with Autofac 3.0.0:
class Something
{
    public int Result { get; set; }
}

class SomethingGood : Something
{
    private int _good;

    public int GoodResult {
        get { return _good + Result; }
        set { _good = value; }
    }
}

interface IDo<in T> where T : Something
{
    int Calculate( T input );
}

class MakeSomethingGood : IDo<SomethingGood>
{
    public int Calculate( SomethingGood input ) {
        return input.GoodResult;
    }
}

class ControlSomething
{
    private readonly IDo<Something> _doer;

    public ControlSomething( IDo<Something> doer ) {
        _doer = doer;
    }

    public void Show() {
        Console.WriteLine( _doer.Calculate( new Something { Result = 5 } ) );
    }
}
I'm trying to register the concrete type MakeSomethingGood and then resolve it via the contravariant interface.
var builder = new ContainerBuilder();
builder.Register( c => new MakeSomethingGood() ).As<IDo<SomethingGood>>();
builder.Register( c => new ControlSomething( c.Resolve<IDo<Something>>() ) ).AsSelf();
var container = builder.Build();
var controller = container.Resolve<ControlSomething>();
... and Resolve fails because no components were found for IDo<Something>.
What am I doing wrong?
Thank you
possible duplicate of Customizing Autofac's component resolution / Issue with generic co-/contravariance
The ContravariantRegistrationSource as mentioned in the other question is still available in 3.0.0.
@Steven - it's quite similar, however I'm trying to ask different question here.
@Steven - In my example, Resolve fails because Autofac didn't manage to find services by contravariant interface, i.e. finding IDo<SomethingGood> by IDo<Something>
You register an IDo<SomethingGood> and try to resolve an IDo<Something>. How is that ever supposed to work? For this to work, IDo<T> should be defined as covariant: IDo<out T>.
Since IDo<in T> is defined as contravariant (using the in keyword), you can't simply assign an IDo<SomethingGood> to IDo<Something>. This won't compile in C#:
IDo<SomethingGood> good = new MakeSomethingGood();
// Won't compile
IDo<Something> some = good;
And that's why Autofac can't resolve it, even with the ContravariantRegistrationSource.
Can you enter the Schengen area with a residency visa from another country?
The Schengen area has an Annex II list of countries, which grants passport holders the right to enter the Schengen area visa free. In addition, this right is present for citizens of non-Schengen members of the EU (Ireland, Romania, Croatia, Bulgaria). Does this right also extend to any residents of these countries?
In other words, is there a residency permit issued by any non-Schengen state that would enable its holder to enter the Schengen area visa free?
Definition of "residency permit/visa" for the purposes of this question: any identity document issued by a given nation to non-nationals who reside in its territory. I.e. an I-571 refugee document might not strictly speaking be a "residency permit" but lets consider it such for the purposes of a canonical answer.
NB: this is intended as a canonical question to cover all possible variations of this question
I think it's unlikely. For non-Schengen EU states, even EU family members residing in those countries technically require a visa to visit Schengen (though if they are travelling or joining their EU family member, a visa is issued on the spot free of charge if they manage to reach the Schengen border, e.g. road or rail crossings, and prove the relationship).
I’m aware that US refugee document holders can enter Germany and a few other Schengen states so it’s possible there are exceptions.
If you accept territorially restricted visa-waivers within Schengen, then there are a few other exceptions in some countries IIRC, e.g. organized school trips for pupils resident of non-Schengen EU states.
Sure, a canonical answer should include all edge cases.
@xngtg that is incorrect. Someone with a residence card issued under article 10 or article 20 of directive 2004/38/EC is explicitly exempted from the requirement to have a visa by article 5(2).
A US refugee document is not a "residence permit" as such.
@phoog thanks, updated question.
@phoog but only if they are traveling with or joining the EU-Spouse. When traveling alone they may require a visa.
@MarkJohnson in my opinion, the directive is ambiguous on that question, but that is certainly the interpretation of immigration authorities.
@phoog Practical Handbook for Border Guards, Point 2.8, Page 22: A Slovak citizen resides with his Chinese spouse in Ireland. The Chinese spouse holding a residence card, issued by Ireland under Article 20 of the Directive, travels alone to France. As she travels alone, she needs to apply for a visa to enter France. Applies to Member states not yet fully applying the Schengen acquis;
@phoog I checked again and I think this is another case where the differences among Schengen states/EU states/bilateral treaties with Switzerland are a mess. For a Chinese EU family member in Croatia, Timatic shows visa required for Switzerland (I knew this because someone got denied boarding from Croatia) but for France it explicitly says Union citizen families are exempt.
@MarkJohnson as I said, I know that interpretation is broadly accepted, but a close reading of the directive shows that either the interpretation is incorrect or the directive is poorly drafted.
@xngtng Switzerland being a Schengen state, it cannot require a visa of a Chinese national who has an article 10 card from Croatia and who travels to Switzerland with, or to join, the EU citizen with whom he or she resides in Croatia. If TIMATIC fails to recognize that, then either Switzerland is wrong or TIMATIC is wrong. In the latter case, it wouldn't be the first time.
@phoog But that is then your personal opinion, so saying that xngtng statement is incorrect is a false statement. The Practical Handbook for Border Guards also states on the same page that Article 2(16)(a) of the Schengen Borders Code defines a residence permit as all residence permits issued by the Member States according to the uniform format laid down by Council Regulation (EC) No 1030/2002 (1) and residence cards issued in accordance with Directive 2004/38/EC;...
@phoog ...Ireland and Croatia are not Member States of the Schengen acquis, so their Article 19/20 resident cards are not considered residence permits, but only as proof that they are a family member of an EU Citizen and can cross an external border with or to join the EU-Spouse. Travelling alone, they require a visa.
@MarkJohnson there are two different issues here. My personal opinion concerns an unorthodox interpretation of the directive, which I fully accept is not shared by any border authority. However, the statement by xngtng is incorrect even under the most restrictive and uncontroversial interpretation, for example a Romanian citizen and Chinese spouse, holding an Art. 10 card, traveling together from Croatia to Switzerland. In that case, a visa is not required, and no visa is issued at the border, contrary to the statement in xngtng's first comment, so that statement is, as I said, incorrect.
@phoog Sorry, I misread that part of xngtng comment. A visa, free of charge, will be issued at a consulate if the family member wants to travel alone. When traveling with the spouse they are allowed to enter without the need of a visa.
No, since Ireland, Romania, Croatia, Bulgaria and Cyprus do not fully implement the Schengen acquis and are therefore not considered a Contracting Party.
The Schengen Border Code uses the term Member States, but doesn't define exactly what that means.
Article 2(16)(a) of the Schengen Borders Code defines a residence permit as
all residence permits issued by the Member States according to the uniform format laid down by Council Regulation (EC) No 1030/2002 (1) and residence cards issued in accordance with Directive 2004/38/EC;
The Practical Handbook for Border Guards, Point 2.8, Page 22 gives samples when a visa is needed or not.
A Slovak citizen resides with his Chinese spouse in Ireland. The Chinese spouse holding a residence card, issued by Ireland under Article 20 of the Directive, travels alone to France.
As she travels alone, she needs to apply for a visa to enter France.
Article 19/20 residence cards that are not issued by Member States [of the Schengen acquis] are not considered residence permits, but only proof that the holder is a family member of an EU citizen and can cross an external border with, or to join, the EU spouse. Travelling alone, they require a visa.
Since family members of EU citizens may require a Schengen visa to enter the Schengen Area when they reside in a non-Schengen EU member state, one can assume that Member States of the Schengen acquis is meant.
Article 21(1) of the The Schengen acquis - Convention implementing the Schengen Agreement of 14 June 1985...: Aliens who hold valid residence permits issued by one of the Contracting Parties may, on the basis of that permit and a valid travel document, move freely for up to three months within the territories of the other Contracting Parties, provided that they fulfil the entry conditions referred to in Article 5(1)(a), (c) and (e) and are not on the national list of alerts of the Contracting Party concerned.
Sources:
Practical Handbook for Border Guards - European Commission PDF
|
STACK_EXCHANGE
|
LogDiver is a new Mac OS X tool to help you view, filter and diagnose problems with your log files.
LogDiver supports the following features:
The Import Dialog allows you to select which log files you would like to import. Upon file selection, LogDiver will attempt to determine what type of log file it is and helpfully suggests which Extractor should be used.
By default, each extractor comes with a default date format that you can override if your log file has localised dates.
Note that you can multi-select log files, but they must all use the same extractor.
There are three main ways that you can filter events from the main view. The first and best way to filter by timestamp is to use the Timeline Bar to quickly set a starting and ending time filter.
The second, more granular, way to filter events out is to use the Filter View to build a predicate on a field-by-field basis.
And the third way to control what columns you can see is to Control-Click on the column headings to choose which columns are visible.
If you are unable to find a pre-built extractor to suit your log file, you can always add your own using the Extractor Editor.
The Extractor Editor allows you to define your own log file formats using regular expressions.
Using a group-based regular expression that matches your log file format, the Groups table will display as many rows as there are groups in your pattern. You can then allocate each group to the built-in LogDiver field names, and give the column an optional description.
You can also define whether the column is visible by default.
The Severity Map table allows you to enter specific strings that, when assigned to the Severity column, will be mapped to a particular severity type. For example, in some log files an error may be signified by an
E in the log file. Any strings that do not have an explicit mapping will use the Unknown severity type.
You can also provide a regular expression that will be used to help the Import Dialog determine whether your newly created extractor should be presented as a suggested extractor. If the currently selected file matches this regular expression, your extractor will be included in the suggested extractor list.
And lastly, you can optionally provide a regular expression that is used to assist LogDiver to handle log lines that have spilled over onto multiple lines. As LogDiver reads each log line from the input file, if it matches the continuation regular expression, the line will be appended to the previous line instead of being parsed as a new log event.
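To make the pieces above concrete — a group-based extraction pattern, a severity map, and a continuation pattern — here is a rough sketch in Python. The log format, field names, and regular expressions are invented for the example; LogDiver's internals may differ:

```python
import re

# Hypothetical log format: "2024-01-02 12:00:00 E disk failure".
# Group 1 -> timestamp, group 2 -> severity, group 3 -> message.
EXTRACT = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (\w) (.*)$")
# Indented lines are continuations of the previous event.
CONTINUATION = re.compile(r"^\s")
# Severity map: unmapped strings fall back to "Unknown".
SEVERITY_MAP = {"E": "Error", "W": "Warning", "I": "Info"}

def parse(lines):
    events = []
    for line in lines:
        if events and CONTINUATION.match(line):
            # Continuation: append to the previous event instead of
            # parsing a new log event.
            events[-1]["message"] += "\n" + line.strip()
            continue
        m = EXTRACT.match(line)
        if m:
            ts, sev, msg = m.groups()
            events.append({"timestamp": ts,
                           "severity": SEVERITY_MAP.get(sev, "Unknown"),
                           "message": msg})
    return events

logs = ["2024-01-02 12:00:00 E disk failure",
        "    at /dev/sda1",
        "2024-01-02 12:00:05 I retry scheduled"]
```

Running `parse(logs)` yields two events, with the indented line folded into the first event's message.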
LogDiver will be available soon in the Mac App Store, however, I am currently seeking feedback from interested parties for:
Please contact me at email@example.com if you would like to be involved, or to get a copy for beta testing.
|
OPCFW_CODE
|
Soon ... (ish)
And so to the New Year, which began with great productivity (once the hangover wore off) and "Dry January" commenced. More on that later.
First up was going back over some old ground and changing things ... which, to be honest, appears to be 90% of what consumes my development time. I had used a small number of open source/creative commons/CC0/CC-BY images. I ended up stripping these out and replacing them with custom imagery. The most used one was of a cat, which was changed to be a tiger.
I dropped the sole icon I was using from http://game-icons.net/ (a really useful resource) and replaced it with my own custom one. This had been for the "Tough Skin" powerup, which during playtesting I had thought was underpowered for a rare item and so doubled the amount of damage it stopped from 10 to 20 percent.
"Tough Skin" Rare Powerup, Now With Less Gauntlet And More Catgirl
The overall layout of the player's interface/HUD went through another iteration, and a skull was added to the kill/gore gauge, which fills up and changes colour until it completes level 4 and a Memento Mori is triggered. Upon this the player becomes invincible for 60 seconds, scoring multiplies by 8 (that's the old 4/8, death/wealth for the Far East), and all attacks now deal critical damage to enemies.
Object which mounts to the player to signify that the Memento Mori has been activated, and they are now Shinigami, God of Death.
Kinda like this guy but you have to make your own atatatatatatatata sound
I also altered the textures for the Gradius style followers which can be picked up and upgraded via powerups, giving them each a little sheen which passes over them. They are little weapon platforms which follow the player around and increase firepower. I spent a while in the depths of C++ changing their start/stop movement to make them move much more smoothly than they had before.
I am most proud about getting this gif to loop seamlessly :P
One thing I noticed about collecting offensive and defensive powerups is that there could be a disconnect between the information displayed to the player and the actual effect. This is because special attack and defend abilities are triggered using an RNG. The more abilities the player has, the greater the chance of an activation. However this doesn't mean that the last powerup will trigger next, so I changed it so that the newest powerup always triggers on the next attack or defence - thus giving the player a clear idea of the ability of the object which they just picked up.
Collect "Magic Bullet" offensive powerup, immediately see it used on your next attack
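The change can be sketched like this (illustrative Python rather than the game's actual C++; the class, names, and trigger chance are all made up for the example):

```python
import random

# A just-collected powerup is stored as "pending" and guaranteed to fire
# on the very next attack; after that, the usual RNG takes over, with more
# owned abilities meaning a higher chance that one of them triggers.
class Abilities:
    def __init__(self):
        self.owned = []
        self.pending = None  # ability guaranteed to fire on the next attack

    def collect(self, name):
        self.owned.append(name)
        self.pending = name

    def on_attack(self):
        if self.pending is not None:
            used, self.pending = self.pending, None
            return used
        # Invented trigger chance: 10% per owned ability.
        if self.owned and random.random() < 0.1 * len(self.owned):
            return random.choice(self.owned)
        return None  # plain attack, no special ability
```

The key point is only the `pending` slot: everything after the first post-pickup attack behaves exactly as before.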
And so onto FUNBALLS. If the esoteric words "RoR, /v/, Git Gud and Host When?" mean anything to you, then this should give an inkling as to what comes next.
This is a FunBall. You can tell it's a FunBall due to the word "FUN" written on it and its murderous grin
FunBalls are fun - okay, they're not really, they're bombs which some enemies will drop. I also intend to create an unlockable challenge mode where everything will drop FunBalls upon death.
I am using a slightly basic collision and physics system (hello rigidBody) for speed and convenience. FunBalls didn't look very good as standard objects due to sudden rotation changes when they bounced. To combat this I made the object a billboard so that it always faces the camera and used a scrolling texture to give the illusion of smooth, spinning movement. I then created four variations, leaving one central and rotating the others (off-left, off-right and upside down) so that when a number of FunBalls burst from a deceased enemy, they have plenty of variation but the player can always see the grin at some angle.
FunBalls in action! Audio has since been changed.
And upon all of this, there have been numerous little tweaks and changes to how things work, as well as much planning on the data of enemy types as the levels progress (though everyone is still a placeholder cube right now).
Did I mention "Dry January"? My performance at darts really suffered so halfway through the month this kinda happened ...
Next up, more sorting out data and attack types for enemies, and some considerable reading up on this new fangled thing called PBR.
|
OPCFW_CODE
|
Reads encrypted data from a protected surface.
HRESULT EncryptionBlt(
  [in] ID3D11CryptoSession *pCryptoSession,
  [in] ID3D11Texture2D     *pSrcSurface,
  [in] ID3D11Texture2D     *pDstSurface,
  [in] void                *pIV,
  [in] UINT                IVSize
);
- pCryptoSession [in]
A pointer to the ID3D11CryptoSession interface of the cryptographic session.
- pSrcSurface [in]
A pointer to the ID3D11Texture2D interface of the protected surface.
- pDstSurface [in]
A pointer to the ID3D11Texture2D interface of the surface that receives the encrypted data.
- pIV [in]
A pointer to a buffer that receives the initialization vector (IV). The caller allocates this buffer, but the driver generates the IV.
For 128-bit AES-CTR encryption, pIV points to a D3D11_AES_CTR_IV structure. When the driver generates the first IV, it initializes the structure to a random number. For each subsequent IV, the driver simply increments the IV member of the structure, ensuring that the value always increases. The application can validate that the same IV is never used more than once with the same key pair.
- IVSize [in]
The size of the pIV buffer, in bytes.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Not all drivers support this method. To query the driver capabilities, call ID3D11VideoDevice::GetContentProtectionCaps and check for the D3D11_CONTENT_PROTECTION_CAPS_ENCRYPTED_READ_BACK flag in the Caps member of the D3D11_VIDEO_CONTENT_PROTECTION_CAPS structure.
Some drivers might require a separate key to decrypt the data that is read back. To check for this requirement, call GetContentProtectionCaps and check for the D3D11_CONTENT_PROTECTION_CAPS_ENCRYPTED_READ_BACK_KEY flag. If this flag is present, call ID3D11VideoContext::GetEncryptionBltKey to get the decryption key.
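The note above that the driver always increments the IV can be checked by the application. A rough sketch of that validation (illustrative Python, not D3D11 code; real code would read the IV member of the D3D11_AES_CTR_IV structure as an unsigned integer):

```python
# Treat each IV as an unsigned integer (in D3D11 this would be the IV
# member of D3D11_AES_CTR_IV); the sequence must strictly increase, so
# no IV is ever reused with the same key pair.
def ivs_never_reused(ivs):
    return all(later > earlier for earlier, later in zip(ivs, ivs[1:]))
```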
This method has the following limitations:
- Reading back sub-rectangles is not supported.
- Reading back partially encrypted surfaces is not supported.
- The protected surface must be either an off-screen plain surface or a render target.
- The destination surface must be a D3D11_USAGE_STAGING resource.
- The protected surface cannot be multisampled.
- Stretching and colorspace conversion are not supported.
This function does not honor a D3D11 predicate that may have been set.
If the application uses D3D11 queries, this function may not be accounted for with D3D11_QUERY_EVENT and D3D11_QUERY_TIMESTAMP when using feature levels lower than 11. D3D11_QUERY_PIPELINE_STATISTICS will not include this function for any feature level.
Minimum supported client
|Windows 8 [desktop apps | Windows Store apps]|
Minimum supported server
|Windows Server 2012 [desktop apps | Windows Store apps]|
Minimum supported phone
|Windows Phone 8|
|
OPCFW_CODE
|
When one is trying to find a proof of a mathematical statement, it can be surprisingly helpful to think about the converse of that statement as well. The reason is that an understanding of the converse can give important information about what a proof of the original statement would have to be like, thereby speeding up the search for it.
This may seem a slightly artificial example, but it came up recently in a research problem, and thinking about the converse was an essential step in finding a solution.
Suppose that you have a norm ||.|| on R^n and you would like to prove that ||x|| <= 1 for every x belonging to some subset A. Suppose also that you want to do this by estimating the norms of at most k points. (This situation occurred because the norm in question was randomly defined, and it was not possible to ask for too many events to occur simultaneously – at least if one wanted to avoid understanding the very subtle dependencies between those events.) The obvious method is to choose some subset B of R^n consisting of at most k points, and to run an argument with the following general structure:
every element of A can be approximated (in a suitable sense) by an element of B;
if two elements of R^n are close (in that same sense) then their norms are close;
the norm of every point in B is smaller than 1.
Now let us think about whether this scheme of proof is necessary. That is, if B has the property that whenever every point in B has a small norm, so does every point in A, does it follow that every point in A can be approximated, in some suitable sense, by a point in B?
The answer is an emphatic no: one soon realizes that if the norm of every point in B is at most 1, say, then the norm of every point in the convex hull of B is also at most 1. Armed with that observation, we can go back to the original problem with a potentially much more flexible method of proof:
every element of A belongs to the convex hull of B;
every element of B has norm at most 1.
However, if we are sensible, we should learn our lesson and again investigate the converse. Suppose that A does not lie inside the convex hull of B. Is it still possible that the norm restricted to B could control the norm restricted to A?
The answer turns out to be no. If some point x of A lies outside the convex hull of B, then the Hahn-Banach separation theorem implies that there is a linear functional phi such that phi(x) > 1, but phi(y) <= 1 for every y in the convex hull of B. Thus, the seminorm |phi(.)| is at most 1 everywhere on B but greater than 1 somewhere on A. And provided A is bounded we can easily convert that into a norm with the same property.
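Writing A for the set whose norms we want to control, B for the finite set of points whose norms we estimate, and conv(B) for its convex hull (these symbols are generic placeholders; the original post's notation may differ), the separation step reads:

```latex
x \in A \setminus \operatorname{conv}(B)
\;\Longrightarrow\;
\exists\,\phi \text{ linear such that }\quad
\phi(x) > 1
\quad\text{and}\quad
\phi(y) \le 1 \ \text{ for all } y \in \operatorname{conv}(B).
```

The seminorm z -> |phi(z)| is then at most 1 everywhere on B but exceeds 1 at x; when A is bounded, adding a small multiple of any genuine norm yields a norm with the same separating property.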
What we learn from this is that if we want to find a set B and deduce, from the fact that the norm of every vector in B is small, that the norm of every vector in A is small, then, unless we know, and can use, further information about the norm (which in our problem we could not), we are forced to use the second method above. Thus, we can stop wasting time searching for alternative approaches.
|
OPCFW_CODE
|
I am a statistician because I enjoy tackling the questions of data-driven health research. I pursued a PhD because I love to teach. This is an exciting era in health research; new and better technologies are offering researchers opportunities to collect, explore, and engage biological, social, and economic data as never before. I enjoy collaborating with researchers in this new frontier.
- UVA DS 2006 - Computational Probability, 2023 link to current course
- UVA DS 6400 - Machine Learning I, 2022 - 2023 link to current course
- UVA DS 6300 - Probability and Stochastic Processes, 2022
- VU DS 5620 - Probability and Inference, 2019 - 2021
- VU MSCI 5015 - Biostatistics II, 2016 - 2022
- VU PUBH 5502 - Biostatistics I, 2018 - 2019
- UNC BIOS 600 - Principles of Statistical Inference, 2011
Conference sessions, Seminars, Workshops, JIT instruction
- [Invited session, 2019] Stewart TG and Spratt H. Biostatistics and Data Science: Identifying Their Complementary Roles in Clinical and Translational Research. Translational Science, Washington DC.
- [Invited workshop, 2019] Smith DK and Stewart TG. Understanding the Design of Systematic Reviews, Meta-analyses, and Clinical Studies in Supportive Care. MASCC/ISOO Annual Meeting, San Francisco, CA.
- [Short Course, 2021] Stewart T, Shotwell M, and Blume J. Principles of Prediction and Inference in Machine Learning. Conference on Statistical Practice, Online.
- Resources for new STATA users
- [HSR Biostatistics Seminar, Sept 2016] Propensity score matching in STATA
- [Vanderbilt/University of Duhok Linkages, Sept 2016] Reproducible research tools
- [CRC Workshop, March 2018] Ideas for reproducible manuscripts
- Alfresco for collaboration
- [CRC Workshop, March 2019] How to use regression to estimate & interpret non-linear associations
- [MASCC/ISOO 2019] Bias Assessment of Observational Studies
- [2019 Vanderbilt Mouse Kidney Injury Workshop] Power calculations for mouse research
- [2020 Epi Lunch and Learn] How to improve your study design with simulation(code)
- [CRC Workshop, January 2020] What to Say (and Not Say) When P > 0.05
- VU MSCI
|
OPCFW_CODE
|
Hi, I'm Evan
More about me
Outside of work, I almost exclusively develop open-source software, and it is all available here on this GitHub account. On personal projects, I like to iterate, experiment, and develop as fast as possible, which leads me to have a habit of cranking out projects every three-ish days. A lot of these projects are either developed out of necessity for use in another project, or as learning experiments. Feel free to use, fork, and contribute to any of my projects. I appreciate any feedback given in return.
Notable past projects
In the past, I have worked on many interesting projects of various sizes. I was once a (very) popular user over on devRant, and have worked on multiple bots and statistical tools for the site's community, including my first ever group project, devCredits, and a command-line client for the app, dr. Neither of those old projects were particularly well designed or written on my part, but I learned a lot from them, and that's what matters in my opinion.
More recently, I have been involved with Raider Robotics, a FIRST Robotics Competition team based out of my high school. On Raider Robotics, I was the leading force developing the software that powered our award-winning robots: Q*bert, MiniBot, HATCHField, and Darth Raider. I also developed some event management software, a parts management tool for the team shop, hardware debugging tools, and the team's core robotics library (including its documentation).
I keep my pinned repositories list fairly up to date with the best of my more recent projects.
If you haven't noticed yet, I have a lot of active repositories on this account.
To make it easier for people to dig around and see what I work on, the following are some quick links to the GitHub search tool.
- At the moment, Rust is my primary programming language. My goal is to become a Rust expert. I have been working hard to produce many useful crates to help fill out the selection of available libraries for other developers to benefit from.
- For the past few years, Java was my primary language. I picked it up in 10th grade, while working with Raider Robotics (@frc5024). My Java projects are split between robotics, Minecraft mods, and homework from my high school compsci classes.
- Python was the programming language I first learned (way back in 5th grade). It was also my primary language up until high school, so I have many, many Python projects. As is to be expected with new programmers, many of my old Python projects are not of great quality, but my Python abilities have since grown to the point where I can comfortably say I am an expert in the language. I also use Python professionally, although that work is closed source.
- C and C++ are both languages I picked up during my time working with Raider Robotics. The majority of my experience in these languages is in the robotics space, though I have a few small side projects in these languages as well. I also use C++ professionally.
I work in many other languages, but none have enough projects to warrant their own section here. Feel free to dig around my repositories page to find them.
|
OPCFW_CODE
|
What are commands?
A command is a way of describing our program’s behaviour. There are other things that affect how a program works, but commands are the primary one. Crochet’s commands are a bit similar to “functions”, “procedures”, “routines”, and “methods” in other programming languages, but they have a few unique things about them.
For example, consider the following command declarations:
command true and true = true;
command boolean and boolean = false;
This piece of code defines two commands, but both of them have the same name: _ and _. The underscores in the name indicate where arguments to this command would go. In the definition, these underscores indicate the places where the requirements for executing the command go.
So, for the first command, we can execute it whenever both of the arguments are an instance of the type true. For the second one, we can execute the command whenever both of the arguments are an instance of the type boolean.
Remember that types in Crochet create a hierarchy, so the boolean hierarchy looks like this:

+ any
|
`--+ boolean
   |
   |--o true
   `--o false
At the root of this hierarchy we have the type any. Then we have boolean descending from it. And both true and false descending from boolean.
When we use a command, Crochet will pick the one whose requirements are most
closely matched to the arguments.
For example, let’s say we have the following use of a command:
true and true
Crochet will find all of the commands that have been declared with the name _ and _ and then pick one to execute. First, we need to match all requirements. In this case both commands we've declared fulfill all of the requirements: the value true is both an instance of the type true and, transitively, an instance of the type boolean.
Then, since we have more than one candidate, Crochet needs to somehow disambiguate this. And the way it does so is by picking the closest matching one. That is, if we have to walk up the hierarchy towards any, the closest matching command is the one for which we have to take the least amount of steps. Here, the command true and true requires no steps on both sides, whereas the command boolean and boolean requires one step on each side. Thus, Crochet picks true and true, yielding the result true.
On the other hand, if we had the expression true and false, or false and true, or even false and false, the requirements for the true and true command wouldn't be matched, and we'd end up executing the boolean and boolean command, yielding the result false.
So, to answer the opening question: a Crochet command is like a function, it has a name and we can execute that function to do something. But multiple commands can share the same name, and when we execute one, Crochet will pick up the closest one that matches all requirements. Some languages call this a Multi-method.
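The closest-match rule above can be sketched outside Crochet. The following toy model (Python, with a class hierarchy standing in for Crochet's type hierarchy) is an illustration of the dispatch idea, not Crochet's actual implementation:

```python
# Toy model of closest-match dispatch: any > boolean > true/false.
class Any: pass
class Boolean(Any): pass
class TrueType(Boolean): pass
class FalseType(Boolean): pass

def steps(value_type, required):
    """How many steps up the hierarchy from value_type to required,
    or None if the requirement is not fulfilled."""
    mro = value_type.__mro__
    return mro.index(required) if required in mro else None

# The two `_ and _` commands from the example, as (requirements, body).
commands = [
    ((TrueType, TrueType), lambda a, b: True),   # command true and true = true
    ((Boolean, Boolean), lambda a, b: False),    # command boolean and boolean = false
]

def dispatch(*args):
    best_body, best_cost = None, None
    for requirements, body in commands:
        costs = [steps(type(a), r) for a, r in zip(args, requirements)]
        if any(c is None for c in costs):
            continue  # some requirement is not fulfilled
        total = sum(costs)
        if best_cost is None or total < best_cost:
            best_body, best_cost = body, total
    return best_body(*args)

t, f = TrueType(), FalseType()
```

With these definitions, `dispatch(t, t)` takes the zero-step candidate and returns True, while any argument pair involving `f` falls back to the boolean-and-boolean candidate and returns False.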
Commands are global
In Crochet, commands are always global. This might come as a surprise since
almost everything else in Crochet is qualified by the package they’re in.
So if you define a type like player, what you're really defining is some-package/player. But this is not the case with commands. If you define a command _ and _, its name will always be just _ and _, regardless of which package it's defined in.
Because commands can have the same name, Crochet needs a different way to refer to specific commands. To do this it has a concept called a Signature. A Signature combines the name of a command with the requirements to execute the command.
Signatures seem simple at the surface, but they’re actually a fairly complex topic, so they’re covered in depth in their own section: Signatures and their uses.
So, if a command describes a “program’s behaviour”, how exactly does it do that? Well, if the left side of the command declaration is its signature, then the right side is its behaviour. Here we use expressions to describe what the program does if the command is executed.
The distinction between “expression” and “signatures” isn’t always very
obvious. A signature may only contain types, and expressions may only
contain values. But because these are two different concepts, they may
(and in a lot of cases do!) look pretty much the same.
For example, true and true can either be a signature (concerning the type true) or an expression (concerning the value true).
Since the topic is vast, we cover expressions in their own chapter.
|
OPCFW_CODE
|
Accessing nested response object from previous backend in a sequential proxy
I'm having some trouble using a nested object in the response from a previous backend in a sequential proxy scenario.
What I want to achieve is set up an endpoint with two sequential backends where the second backend should read the response from the first one and use a field in that response, that happens to be nested inside some other object.
So for example, if the first endpoint returned:
{
"id": 1,
"title": "hello",
"user": {
"id": 1
}
}
The second backend would use the user id from the previous response. So essentially I want to access user.id in that response. As I understand it, I should be able to do that with {resp0_user.id}, but that doesn't seem to work, as I keep getting a 404 from the users endpoint, and I know for a fact that that user exists.
I've set up some json placeholder route to test this:
https://my-json-server.typicode.com/martskins/json-demo/posts/1
https://my-json-server.typicode.com/martskins/json-demo/users/1
And a basic krakend config that uses these endpoints to reproduce this issue:
{
"version": 2,
"timeout": "5s",
"name": "Demo",
"endpoints": [
{
"endpoint": "/v1/posts/{id}",
"method": "GET",
"extra_config": {
"github.com/devopsfaith/krakend/proxy": { "sequential": true }
},
"backend": [
{
"url_pattern": "/martskins/json-demo/posts/{id}",
"host": [ "https://my-json-server.typicode.com" ],
"extra_config": {
"github.com/devopsfaith/krakend/http": { "return_error_details": "posts" }
}
},
{
"url_pattern": "/martskins/json-demo/users/{resp0_user.id}",
"group": "user",
"host": [ "https://my-json-server.typicode.com" ],
"extra_config": {
"github.com/devopsfaith/krakend/http": { "return_error_details": "users" }
}
}
]
},
{
"endpoint": "/v2/posts/{id}",
"method": "GET",
"extra_config": {
"github.com/devopsfaith/krakend/proxy": { "sequential": true }
},
"backend": [
{
"url_pattern": "/martskins/json-demo/posts/{id}",
"host": [ "https://my-json-server.typicode.com" ],
"extra_config": {
"github.com/devopsfaith/krakend/http": { "return_error_details": "posts" }
}
},
{
"url_pattern": "/martskins/json-demo/users/{resp0_userId}",
"group": "user",
"host": [ "https://my-json-server.typicode.com" ],
"extra_config": {
"github.com/devopsfaith/krakend/http": { "return_error_details": "users" }
}
}
]
}
]
}
You'll notice that the posts endpoint returns both "userId": X and "user": { "id": X }, this was added just to test whether there was an issue with response parsing in general or just in the case of nested objects.
The first endpoint in that config (/v1/posts/{id}) will error on the second backend, while the second endpoint (/v2/posts/{id}) will succeed, and the only difference is that the first one tries to get a nested field, while the second one uses a field at the root level of the response object.
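To make the expected behaviour concrete, this is roughly the flattening I'd expect the gateway to perform on the first backend's response so that nested keys become addressable with dots (illustrative Python, not KrakenD's actual code):

```python
# Flatten nested keys with dots, then prefix with "resp0_" the way the
# sequential proxy exposes the first backend's response.
def flatten(obj, prefix=""):
    out = {}
    for key, value in obj.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, name))
        else:
            out[name] = value
    return out

resp = {"id": 1, "title": "hello", "user": {"id": 1}}
params = {f"resp0_{k}": v for k, v in flatten(resp).items()}
```

With that flattening, `params["resp0_user.id"]` resolves to 1, which is what the `{resp0_user.id}` placeholder in the url_pattern would need.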
this is a known and fixed bug: https://github.com/devopsfaith/krakend/issues/330
next monday we plan to release v1.1.1 with the fix included
|
GITHUB_ARCHIVE
|
With DojoExpert plugin you can add members list, member profiles and competition results list on your WordPress website. It’s easy, here are the steps:
1) Download DojoExpert plugin and install it in your WordPress website: Download.
If your website is older (WordPress version below 4.5), scroll lower to install an earlier version of the plugin.
2) Go to “settings” and click “DojoExpert” to enter plugin settings as shown on this picture:
Here you must set 3 parameters:
- Member page URL – it's the URL of the page in WordPress where you will put the member lists. This is important if you want your member list to be clickable and display a member profile page on click. In the above example on our test page the member page is http://dojoexpert.linklab.hr/members/ so the member page URL is /members/? (add a question mark at the end)
- User – this is your DojoExpert username
- Language ID – leave this to default 1.
3) To add a member list page, create a page or post with permalink URL you specified in step #2.
Click on add a block - "widgets" and notice "members" and "results" buttons:
In the block editor notice the parameters on the right. If you have more than one dojo defined in DojoExpert then with the “Dojo ID” (that you can find in your DojoExpert account) you can list only members of a particular dojo. If you want to list all members (or have just 1 dojo) leave this parameter blank.
Set the “Member ID” parameter if you want to create a page for just one member (enter the UID of that member), but if you want to list all members, leave this parameter blank. So in our example, both parameters should be blank and after clicking the “Update” button the list will appear in the page/post body.
Save the page and see how it looks:
As you can see, the list of your members will appear and if you have set the “member page URL” parameter in step #2 correctly, they will be clickable, leading to each member's profile page. Live example here: http://test.dojoexpert.com/members/
Important: the member list lists only members who have the “show” option set to true in DojoExpert. Find this “show” checkbox under the member picture in your DojoExpert manager. This way you can decide who you want to show publicly on the list and who will remain hidden.
4) To add a results page add a widget block for "result list".
In widget settings you can set the year for which you want to show results. Leave empty for current year, set "0" for all years. For example you can create a page that looks like this:
The plugin for older versions of WordPress looks a little different. You can download the older version of the plugin here. In versions before 4.5 there are no blocks in the page editor. In these cases the DojoExpert plugin adds two buttons in the editor to insert members and results lists:
Click the “M” button to add members list and a popup will appear asking you to enter parameters “Dojo ID” and “Member ID”.
You will see this code in HTML code of the page: [memberlist dojoid="" id="" /]. You can also enter the parameters manually in this code.
See the live example website here.
|
OPCFW_CODE
|
/**
* @file DetectorDuck.ino
*
* @brief Builds a Duck to get RSSI signal strength value.
*
* This example builds a duck using the preset DuckDetect to periodically send a ping message
* then provide the RSSI value of the response.
*
* @date 2020-11-10
*
* @copyright Copyright (c) 2020
* ClusterDuck Protocol
*/
#include <DuckDetect.h>
#include "timer.h"
// Needed if using a board with built-in USB, such as Arduino Zero
#ifdef SERIAL_PORT_USBVIRTUAL
#define Serial SERIAL_PORT_USBVIRTUAL
#endif
// We use the built-in duck detector with a given Device UID
DuckDetect duck = DuckDetect("DUCK-DETECTOR");
// Create a timer with default settings
auto timer = timer_create_default();
void setup() {
duck.setupWithDefaults();
Serial.println("DUCK-DETECTOR...READY!");
// Register a callback that provides RSSI value
duck.onReceiveRssi(handleReceiveRssi);
timer.every(30000, ping);
}
void handleReceiveRssi(const int rssi) {
Serial.println("[DUCK-DETECTOR] handleReceiveRssi()");
showSignalQuality(rssi);
}
void loop() {
timer.tick();
duck.run(); // use internal duck detect behavior
}
// Periodically sends a ping message
bool ping(void *) {
Serial.println("[DUCK-DETECTOR] Says ping!");
// This API is only available to Duck Detector
duck.sendPing(true);
return true;
}
// This uses the serial console to output the RSSI quality
// But you can use a display, sound or LEDs
void showSignalQuality(int incoming) {
int rssi = incoming;
Serial.print("[DUCK-DETECTOR] Rssi value: ");
Serial.print(rssi);
if(rssi > -95) {
Serial.println(" - GOOD");
}
else if(rssi <= -95 && rssi > -108) {
Serial.println(" - OKAY");
}
else if(rssi <= -108) {
Serial.println(" - BAD");
}
}
|
STACK_EDU
|
T1 decay for simulator
When using a simulator, noise is applied after each gate. Previously, in the t1 decay method, there's only one wait gate, which means that noise is not properly applied "over time" when using a simulator. I've updated the method to add a number of wait gates proportional to the length of the delay when using a simulator.
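To illustrate why the discretization matters, here is a toy model (plain Python, not Cirq code; the per-unit decay probability is made up):

```python
# With per-gate noise, a single wait gate decays the qubit once no matter
# how long the wait is; splitting the wait into unit-length gates makes
# the simulated survival probability depend on the delay, as real T1
# decay does.
GAMMA_PER_UNIT = 0.01  # assumed decay probability per unit of time

def survival_single_gate(duration):
    # One wait gate: noise applied exactly once, duration ignored.
    return 1 - GAMMA_PER_UNIT

def survival_discretized(duration, unit=1):
    # duration/unit wait gates: noise applied once per gate.
    n = int(duration / unit)
    return (1 - GAMMA_PER_UNIT) ** n
```

The single-gate model gives the same survival probability for any delay, while the discretized one decays exponentially with the delay, which is what the T1 fit expects.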
Related to Issue #4264
As always, any feedback is appreciated!
Can we leave the t1_decay code unchanged and still get the expected results on this test ?
Unfortunately, doing this right would require something along the lines of #2749, as our current noise models are agnostic to the duration of the gates involved. Making a model sensitive to WaitGate duration is easy enough, but it gets complicated when trying to make this work on a general level (how do you get the gate durations? is noise based on gate duration, or moment duration? etc.) which prompted this PR.
Essentially, the issue here is that some noise models (which may otherwise provide an accurate model of hardware) produce nonsense results here because the test assumes noise is continuous (rather than discretized). We'd like to be able to estimate T1 for these models, which requires discretizing the WaitGate.
Would changing the parameter name to something like discrete_noise be sufficient?
Essentially, the issue here is that some noise models (which may otherwise provide an accurate model of hardware) produce nonsense results here because the test assumes noise is continuous (rather than discretized). We'd like to be able to estimate T1 for these models, which requires discretizing the WaitGate.
Why can't we just modify these existing noise models to check if the operation is a wait gate and then act accordingly? Looking at the interfaces for cirq.NoiseModel it looks like it is definitely an option. Having a noise model do something different for 10 consecutive wait gates vs. one longer wait gate that amounts to the same wait time seems like something we don't want our users to have to worry about, and would be pretty confusing for them.
Why can't we just modify these existing noise models to check if the operation is a wait gate and then act accordingly?
We can - but again, doing so correctly amounts to solving #2749. If a noise model can recognize that wait(20) and wait(40) take different amounts of time, I would also hope it knows that X(q0) and measure(q0) take different amounts of time - but exactly what those times should be and where to get them remain open design questions.
I think I understand your concern with this PR, though - it essentially "canonicalizes" the per-gate noise as a correct use case for this experiment, which is misleading. If we want to avoid that, I would support a version of this PR that only adds tests: test_noise_model_constant to demonstrate how to simulate the experiment, and a separate test to demonstrate that existing "discrete" noise models will misbehave. What do you think?
Sure that makes a bit more sense to me.
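To make the duration-agnostic point concrete, here is a plain-Python sketch (illustrative only, not Cirq code; the fixed per-gate probability `p` stands in for whatever channel the noise model applies). A model that applies the same `p` after every gate decays one long wait gate exactly as much as one short wait gate, while splitting the delay into unit-length wait gates makes the decay compound with duration, which is what a T1 fit needs to see:

```python
def survival(n_gates, p=0.01):
    """Excited-state survival probability under a duration-agnostic noise
    model that applies the same decay probability p after every gate."""
    return (1 - p) ** n_gates

# One wait gate is hit by p exactly once, however long it "lasts",
# so a single wait(20) and a single wait(40) decay identically:
single_gate = survival(1)

# Discretized delays restore the time dependence: 40 unit-length waits
# compound the decay twice as much as 20 of them.
short_delay = survival(20)
long_delay = survival(40)
```

Under this model `long_delay` equals `short_delay` squared (up to rounding), so a fit of survival against delay recovers an exponential; with a single wait gate the survival would be flat at `single_gate` for every delay.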
|
GITHUB_ARCHIVE
|
package lifecycle

import (
	// "github.com/ngmoco/timber"
	"sync/atomic"
	"testing"
	"time"
)

var loggerInited bool = false

func requireLoggerInited() {
	if loggerInited {
		return
	}
	// timber.AddLogger(timber.ConfigLogger{
	//	LogWriter: new(timber.ConsoleWriter),
	//	Level:     timber.DEBUG,
	//	Formatter: timber.NewPatFormatter("[%D %T] [%L] %-10x %M"),
	// })
}

func TestShutdownRequest(t *testing.T) {
	requireLoggerInited()
	shutdownRequest := NewShutdownRequest()

	// Set up a channel that will stream 1's. This will be a workload for our pretend
	// server below.
	onesChan := make(chan int)
	go func() {
		for {
			onesChan <- 1
		}
	}()

	var sum int32 = 0
	lifeCycle := NewLifeCycle()

	// Set up a pretend "main loop" of a server and shut it down using ShutdownRequest
	go func() {
		lifeCycle.Transition(STATE_RUNNING)
		for !shutdownRequest.IsShutdownRequested() {
			select {
			case toAdd := <-onesChan:
				atomic.AddInt32(&sum, int32(toAdd))
			case <-shutdownRequest.GetShutdownRequestChan():
				// This break only exits the select; the loop condition above
				// sees the shutdown flag on the next iteration and stops.
				break
			}
		}
		lifeCycle.Transition(STATE_STOPPED)
	}()

	// Give the server time to get some work done before we shut it down
	lifeCycle.WaitForState(STATE_RUNNING)
	time.Sleep(500 * time.Millisecond)

	// Tricky: after we request shutdown, the server should execute its main loop at
	// *most* one more time. The shutdown flag should catch it before it loops again.
	// Therefore, we should expect sum to increase by at most 1 after we request shutdown.
	shutdownRequest.RequestShutdown()
	sumAfterShutdownRequest := atomic.LoadInt32(&sum)
	lifeCycle.WaitForState(STATE_STOPPED)
	sumAfterShutdownComplete := atomic.LoadInt32(&sum)
	reqsWhileShuttingDown := sumAfterShutdownComplete - sumAfterShutdownRequest
	if reqsWhileShuttingDown > 1 {
		t.Fail()
	}
}

func TestLifeCycle(t *testing.T) {
	requireLoggerInited()
	lifeCycle := NewLifeCycle()
	triggeredCh := make(chan interface{}, 1)

	// This goroutine waits for the lifeCycle to be RUNNING, then sends to channel
	go func() {
		state := lifeCycle.WaitForState(STATE_RUNNING)
		if state != STATE_RUNNING {
			t.Error("State should be 'running'")
		}
		triggeredCh <- true
	}()

	select {
	case <-triggeredCh:
		t.Error("Channel should not have been ready to read yet")
	default:
	}

	if lifeCycle.GetState() != STATE_NEW {
		t.Error("State should have been 'new'")
	}
	lifeCycle.Transition(STATE_RUNNING)
	if lifeCycle.GetState() != STATE_RUNNING {
		t.Error("State should have been 'running'")
	}

	select {
	case <-triggeredCh:
	case <-time.After(1 * time.Second):
		t.Error("goroutine waiting for lifeCycle startup didn't get triggered")
	}

	// This goroutine waits for the lifeCycle to be STOPPED, then sends to channel
	triggeredCh = make(chan interface{}, 1)
	go func() {
		state := lifeCycle.WaitForState(STATE_STOPPED)
		if state != STATE_STOPPED {
			t.Errorf("State should have been 'stopped' but was %v", state)
		}
		triggeredCh <- true
	}()

	time.Sleep(100 * time.Millisecond)
	lifeCycle.Transition(STATE_STOPPED)
	if lifeCycle.GetState() != STATE_STOPPED {
		t.Error("State should have been 'stopped'")
	}

	select {
	case <-triggeredCh:
	case <-time.After(1 * time.Second):
		t.Error("goroutine waiting for lifeCycle shutdown didn't get triggered")
	}

	time.Sleep(100 * time.Millisecond) // Let log lines have time to print
}
|
STACK_EDU
|
In some ways, the "best" approach to calculating the homotopy groups of spheres is to identify patterns in the homotopy groups of spheres, rather than trying to make a complete calculation of all homotopy groups up to a certain dimension. This "best" approach, led by Ravenel and collaborators, is to try to determine which families of elements in $ Ext = E_2 $ survive to $ E_\infty $ and therefore detect non-zero elements in homotopy. Often these families of elements in homotopy groups have an infinite number of elements, but the recent work of Hill-Hopkins-Ravenel on the Kervaire invariant is a significant example of a family of elements shown to be infinite in number in Ext but finite in number in homotopy (i.e., there are infinitely many non-zero differentials in the classical Adams spectral sequence on this family).
A complete calculation of the homotopy groups of spheres at any prime is difficult for many reasons. Since Ext calculations (using any suitable generalized homology theory) tend to be very large, neither humans nor machines can make a complete calculation up to a large dimension in a short amount of time. Also, calculating differentials often requires topological information not present in a single algebraic Ext object. It is common to compare an Ext calculation for one generalized homology theory (e.g., mod p homology to obtain the classical Adams spectral sequence) to an Ext calculation for another generalized homology theory (e.g., Brown-Peterson theory to obtain the Adams-Novikov spectral sequence) and see if any differentials are forced by comparing the two spectral sequences, but results using this approach are not guaranteed. Of course, there are other methods for calculating differentials, but they tend to be ad-hoc or context specific.
The best overall summary of results would be Doug Ravenel's book on the homotopy groups of spheres, and I would also recommend Kochman's book. Read works of Mark Mahowald for results using the Adams spectral sequence, and Doug Ravenel for the Adams-Novikov spectral sequence. Complete or nearly complete calculations for the homotopy groups of spheres that have been localized at a particular Morava K-theory have been made by Toda, Goerss-Henn-Mahowald-Rezk, and Mark Behrens. If you're interested in computer calculations of Ext, you should contact Robert Bruner or Christian Nassau. Many others have contributed to the calculation of homotopy groups of spheres and probably deserve to be mentioned (if I omitted someone, it was unintentional).
On an unrelated and personal note: I would like to publicly thank Torsten Ekedahl (who recently passed away) for everything he has done to help me.
|
OPCFW_CODE
|
Which country flags can you make in Tetris?
Your friend is playing Tetris. In her version of the game, the pieces use the standard colors and can drop in any possible order without restrictions. In the order shown below, the colors are light blue, dark blue, orange, yellow, green, purple, and red.
"Yes, I finally did it!" she exclaims. You worriedly glance over, afraid that she's finally beaten your high score. Much to your surprise, her score is still 0—she hasn't even cleared a line yet!
"What's the big deal?" you wonder aloud.
"I made my country's flag!" she cries triumphantly.
As you stare at the matrix of colored squares, your jaw drops in awe. Despite not having cleared a single line, she's managed to recreate her nation's flag!
Which countries could your friend be from?
Clarifications
No rows have been filled, and no lines have been cleared.
The playing field has ten columns.
The flag is a country's current official one (see Wikipedia for a good list).
All parts of the flag exactly match the shape and dimensions of their official design.
The flag is entirely colored using similarly colored pieces, without any empty space.
It's a bit unclear what colour the background is. If there is no tetris piece occupying a space, is that space considered to be black or white? I have seen tetris games with both white or black backgrounds.
@Alderath No background color is allowed per the last bullet point.
That first one looks more like cyan than light blue. And I think your "purple" one is actually magenta.
I know too much about tetris to solve this. Modern tetris uses a "bag" system in which each separate piece will fall before a repeated piece falls, so there's actually no solution with the current restrictions.
@RobinClower "In her version of the game, the pieces use the standard colors and can drop in any possible order without restrictions."
Ah, I was looking for that in the clarifications section :facepalm:
Must the country be a recognized country? If not, the flag of the micronation of Atlantium might also be possible.
@bta It has a similar issue to rot13(Fjrqra): because of its aspect ratio (6:9) and the fact that a stack of O-tetrominoes can never be an odd height, it would have to be 12 tiles wide by 18 tiles high, violating constraint 2
@bta Good call out, she is from a recognized country, although I did not clarify this. If she were from an unrecognized country, then this would be totally open-ended.
The friend is from Ukraine, as their flag (rotated 90 degrees in either direction) can be formed on the grid, preserving the 2:3 ratio of the flag (and the equally-sized stripes). This is the only flag that works, since based on the constraints of the puzzle, the flag must only consist of areas with square borders (ie. no symbols, circles, triangles, stars, etc.) and cannot contain white or black, which greatly reduces the number of valid flags. Additionally, since (I'm pretty sure) it is impossible to form a rectangular region with the S- and Z-tetrominoes, the flag must be made of blue, orange, yellow, and/or purple polygons with sides connected at right angles, leaving only Sweden and Ukraine to fit this description. Sweden's flag is an invalid option, as to "exactly match [its] dimensions", the flag (rotated 90 degrees in either direction) would have to be ten tiles wide, violating the first constraint (the "thicknesses" of each colour in the Swedish flag horizontally are 5, 2, and 9, respectively, which can't be scaled down to integer values).
I don't know if this is appropriate for this site, but consider donating to organizations that support those in Ukraine right now, like Save the Children.
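The Sweden-scaling step can be checked mechanically. This sketch (illustrative Python, with the band thicknesses 5, 2, 9 quoted above as its only input) confirms that no integer downscale fits a ten-column playfield:

```python
from math import gcd
from functools import reduce

# Horizontal band thicknesses of the Swedish flag, in flag units,
# as quoted in the answer above
bands = (5, 2, 9)

common_factor = reduce(gcd, bands)            # 1: no way to scale down
smallest_width = sum(bands) // common_factor  # 16 tiles across at minimum

# The playing field has ten columns, so even the smallest integer
# realization of the flag cannot fit.
fits = smallest_width <= 10
```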
Are there any other such countries?
@GregMartin I've added an explanation for why I think this is the only one
The puzzle says "countries" meaning that there are multiple answers
@SomeGuy does that necessarily mean OP knows of other solutions? It's possible they're just asking and used plural because they assumed at least two could be found?
@samm82 Nice job finding Sweden! It was really the only viable alternative, and I very narrowly ruled it out with my specific choice of rules (sorry, Sweden). Thanks for the great answer and for helping raise awareness! Let's help our friend :)
@BruceWayne I made sure that the answer is unique, but I chose to use the plural to avoid giving this away and to encourage the solver to discover this as well.
If you don't mind a pixelated version, Palau might be possible...
@samm82, argh, yes. Yes, of course you're right, scaling in integer sizes is the obvious first issue for scaling down. Well, would have been, if I wasn't a fool. Sorry. Fun fact: all the nordic cross flags have different proportions. And kind of convenient for the sake of this question that the flag of the correct answer does have such nice proportions.
@ilkkachu No worries!
|
STACK_EXCHANGE
|
LESS and CSS3 syntax highlighting
I found this post on SU, but it doesn't seem to address my needs exactly, so I'll ask a similar question. I normally use PSPad, and I'm also not too scared of Notepad++. They are both good editors, but their CSS highlighting seems to be lacking all the CSS3 goodies.
To top that, I recently started using LESS instead of plain CSS, and this is where both editors fail miserably. As soon as nested properties are encountered, PSPad gets completely lost and is unable to even show matching braces, not to mention bad syntax highlighting. Notepad++ is somewhat better, as matching braces are always shown correctly, but still, nesting makes Notepad++ lose its way around LESS.
So, do you happen to know how I can make either of these two cooperate with LESS correctly? A downloadable resource will be fine, or perhaps a plugin, if you know of one (I don't). Alternatively, if you know of any other good lightweight editor that can offer good LESS highlighting, please point me to it (and please, no Eclipse-based stuff, it's way too heavy for just a CSS highlighter).
[Edit, in case anyone finds it useful]:
Since writing the question, I have come across a great, albeit Java-based and thus slightly sluggish, IDE. It's called PhpStorm, created by JetBrains. As I use it now for most of my PHP coding, I also end up editing LESS files with it. And here comes the surprise: PhpStorm has built-in support for LESS out of the box! It's not perfect, as it sometimes forgets to display autocomplete suggestions, but overall it's really decent. Like I said, it's not a lightweight solution, and not free of charge either for that matter, but I use it for all of my coding nowadays and find it very recommendable.
The question you linked is about less, the unix file-viewer, where your question is about LESS, the stylesheet language. So not exactly similar questions. :)
Haha, indeed :D. I can't remember now (I wrote this question half a year ago...), I suppose I linked to a question I hadn't bothered to read carefully :D.
PSPad allows you to define "User highlighters". Although not quite as flexible as a full specification, you can at least define all the keywords (up to 3 categories) and reserved words you want highlighted. For LESS, you can get/edit a list of keywords from the existing CSS highlighter in the [Keywords] section from "CSS.DEF" in the "Context" directory inside the PSPad program folder -- you may also expand this list if you simply want support for the CSS3 keywords in the regular CSS highlighter.
You should then assign this user highlighter to one of the "<not assigned>" spots in the Highlighter Settings, following which you can select colours for the reserved words and 3 keyword categories.
With these user highlighters, nested brackets work just fine.
Note: since these files are stored in subfolders of the PSPad program folder, usually in C:\Program Files, Windows Vista and Windows 7 won't let you edit them unless you run PSPad as Administrator. Be sure to do this when you change any settings.
So, in summary:
Expand C:\Program Files\PSPad\Context\CSS.DEF for CSS3 properties
Create a user highlighter (with keywords from the above file) to be able to have syntax highlighting for LESS
Notepad++ also allows for user defined styles.
Here's a list of 50 languages that have already been made, as well as instructions for importing them. The link takes you to Less.js. And of course any other language you'd like to add/modify.
Cheers!
|
STACK_EXCHANGE
|
The operation of the brain is complex and is far from completely understood, so it can't be modeled precisely with equations. But we do know it operates very much like a computer in terms of sending digital signals between elements that do comparative operations on the digital inputs, like a complex version of the simple gates in CPUs. There is nothing known about a neuron that can't be implemented with NAND gates. For any mental activity we can precisely define, we can implement it on a computer and then give precise equations for cause and effect. For mental activity we can't define as precisely, we shouldn't assume that the ideas and equations we've learned (from precisely defined mental activity) are incapable of implementing it.
Our machines can see, smell, touch, and hear whatever elements of the environment we choose, then they can think deeply about the consequences and then react, moving whatever things they need to move in order to change the environment they have sensed in order to achieve, as best they can, the goal we've defined for them. This is the field of "controls" in mechanical and electrical engineering, being precisely determined by equations. Going even beyond this, there are A.I. programs that can "run amok" on their own with the programmer not knowing exactly how they were able to achieve their goals. There is a common fear that some of these machines will be let loose from a hacker's software or a future 3D-printer laboratory with the goal of reproducing themselves. Or that, if the goals are not defined, they will evolve in a more ethical laboratory until this goal is selected for, and one escapes.
These machines can be programmed to learn things the programmers can't model or copy unless he has access to all the changed memory bits inside the machine. They can learn to do things better than their programmers were able to program into them by watching the results of their own actions and improving their own programming. This was common in 1990 when I first learned about neural nets. Genetic algorithms can change even the design of the neural net after several generations.
A steam engine governor is a much earlier example of a machine sensing the environment and adjusting the environment being sensed in order to achieve a set goal. When the rotation of a shaft got too slow or too fast, it increased or decreased the amount of steam being let through. It could react faster than a person at much less cost than a person. This is the economic problem of our times, the time of the computer replacing the need for brains, even programming brains.
Many thoughtful students first learning about engineering controls are immediately struck by the sensation that the feedback loop in a control system is where consciousness lies. It is the difference between where the machine senses it is and where it wants to be. The "amount" of consciousness is called the "error" value, which is sent off to cause movement of the machine's "muscles", implying a philosophy on the part of the engineer of "consciousness is pain"; but this is because basic control systems are trying to regain a point of maximum profit that is known to be possible. It could also be called "opportunity" in machines that redesign their programming to gain more profit than their designers thought was possible.
You seem to be taking as an axiom that human thought is fundamentally different from the thinking of machines. My first paragraph explained why I would have to consider that a leap of faith. I was not speaking metaphorically when I assigned human thinking words to machines. The "want" of a thermostat appears to be only a quantitative difference from the "want" of a brain. Since it is only 1 or 2 comparative operations, its complexity is (as a very rough estimate) only 1 millionth the capability of 1 neuron, which is 100 billionth of a brain. So I do not think most people would feel insulted if I claimed their want is not fundamentally different from a thermostat....but only if I also stated their marvelous brains are 100 quintillion times more complicated than a thermostat, and that we will never be able to conceive of what a 100 quintillion difference is except by math. We can conceive that we want a room to be warmer, and we can instill not only that want but the necessary resultant action into a machine.
The reason I have not read the rest of the book is because it holds as a foundational axiom something I find not only a leap of faith, but well-nigh untenable given the preponderance of evidence to the contrary. A glance at the rest did not indicate the axiom was abandoned or that the book can stand without it.
Engineers and programmers have not invented different words to distinguish the activity of their machines from their brains. Programming is a transference of a precise set of "wants" from the programmer's mind to the machine. I do not know of any programmers or engineers who would insist that there is a qualitative difference.
The ability to reason logically was once considered to be the thing that separated humans from other animals, and the highest form of intellectual activity. I believe this was a primary motivation in Boole's invention (or formalization) of digital logic around 1850. Has the measuring stick of the mind been moved to more vague (imprecisely definable) areas in order to keep a mystical idea of mind alive?
When I make a numerical estimation of the difference in complexity of a thermostat and a brain, I am being literal. There are various estimates as to how many NAND or XOR logic gates are needed to implement a neuron. I believe a thermostat can be made as "universal" as NAND gates are said to be, since NAND gates (with wiring) are all that's needed to implement a complete Turing machine.
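The NAND-universality claim is easy to demonstrate concretely. A short Python sketch (illustrative, not from the original argument) builds NOT, AND, OR, and XOR out of NAND alone, which is the sense in which any Boolean circuit, and hence any digital comparator, reduces to NAND gates:

```python
def nand(a, b):
    # NAND outputs 0 only when both inputs are 1
    return 1 - (a & b)

# Every other basic gate built from NAND alone:
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))
```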
|
OPCFW_CODE
|
Swift enum and NSCoding
I have a 'Thing' object with a String property and an NSImage property; the Thing class has encodeWithCoder: and decodeWithCoder: methods, and I can archive and unarchive a [Thing] array using NSKeyedArchiver/Unarchiver.
So far, so good. Now I want to expand my Thing class by an array of directions, where 'Direction' is the following enum:
enum Direction {
    case North(direction: String)
    case East(direction: String)
    case South(direction: String)
    case West(direction: String)
}
In other words, the data I wish to store is
thing1.directions: [Direction] = [.North(direction: "thing2"), .South(direction: "thing3")]
(In a more perfect world, I'd be using direct references to my Things rather than just their names, but I realise that this will easily create reference cycles - can't set a reference to another Thing until that Thing has been created - so I'll refrain. I'm looking for a quick and dirty method to save my app data and move on.)
Since I will be needing directions elsewhere, this is a separate entity, not just an enum inside the Thing class. (Not sure whether that makes a difference.)
What is the best way to make my Direction enum conform to NSCoding?
The best workaround I can come up with involves creating a [String: String] dictionary with "North" and "South" as keys and "thing2" and "thing3" as values, and reconstruct my enum property from that, but is there a better way?
And for that matter, is there a way to make tuples conform to NSCoding because right now (String, String) gets me a 'not compatible to protocol "AnyObject"' error.
Many thanks.
What I do is give the enum a type and encode and decode its raw value, or else implement description for the enum and encode and decode that string. Either way (or if you use some other way), you obviously need a way to convert in both directions between an enumerator and an archivable type.
how do you encode / decode description? For instance, the following throws an initialization error: Direction(description: (aDecoder.decodeObjectForKey("direction") as! String)) ?? .North
@Anconia Ask it as a question, not as a comment, please! There's no room to talk about it here... :)
thanks for the answer, it's time I buy your book(s) :-)
@Anconia and in fact it covers this situation: http://www.apeth.com/swiftBook/ch04.html#_enum_initializers
Yes, you need to recreate the enum from the raw value. Full example and discussion here:
How do I encode enum using NSCoder in swift?
Note this change in Xcode 6.1: "move code from the old-style 'fromRaw()/toRaw()' enum APIs to the new style-initializer and 'rawValue' property".
https://developer.apple.com/library/ios/releasenotes/DeveloperTools/RN-Xcode/Chapters/Introduction.html
|
STACK_EXCHANGE
|
The assign event allows you to change the value of a variable from within an event chain. In traditional programming this would be the equivalent of writing:
x = "some new value"
The assign event does not update values in a database. It is only used to change the value of a variable.
The assign event can be set up in 5 quick and easy steps.
- Step 1. Create an assign endpoint
- Step 2. Create an assign action
- Step 3. Create a variable in the action
- Step 4. Add the assign event
- Step 5. Build and test
Go to the endpoints tab in the Dittofi Design Studio and click the "+ New Endpoint" button. Next, give your endpoint the name "Assign", the path "/v1/assign/", the description "Assign a variable to a new value", and set the request method to "Get". The configuration for this is shown below.
Next, set a query variable that will be passed into the assign endpoint. Give this variable the type "text" and the name "SomeStartingValue".
Save and close the assign endpoint.
Next, go to the actions tab and create a new action by clicking the "+ New Actions" button. Rename the action to "Assign" and link the trigger component of the action to the "Assign endpoint" that was configured in Step 1.
The assign event requires that we swap the value of the variable passed in from our trigger endpoint for another value. This can be done in many ways; however, the simplest way is to create a new global variable within the action. Let's do this now.
Create a new variable called "SomeNewValue" of type "text".
Note, the variable type must be equal to "text", since the value in our query variable is of type text.
Next add the assign event by (A) pressing "+ Add Event" and (B) selecting the assign event from the event drop down menu.
Next, map the variable "SomeStartingValue" (created in step 1) to the variable "SomeNewValue" (created in step 3).
This means that the contents of SomeStartingValue are now equal to the contents of SomeNewValue. In programming this would be:
// The value here is passed in from the query variable
SomeStartingValue = "Some starting value"
// The fixed value given to the variable created in Step 3
SomeNewValue = "Some new value"
// The assign event copies SomeNewValue into SomeStartingValue
SomeStartingValue = SomeNewValue
// Printing SomeStartingValue now outputs "Some new value"
Next, add a description for your event and press "Save".
Finally, set the response variable in your trigger component to your starting value. This will be used to check that the assign event has worked correctly in step 4.
To test the configuration, we first need to build the code. To do this, press "Build code". Once the code has been built, head back to the endpoints tab and open the "Assign endpoint" that was made in Step 1.
Click the "Run" button in the top left hand corner to test the endpoint and enter "some starting value" for the query variable that will be passed into the endpoint, as below.
If run successfully, you will see the Response code 200 and, in the body of the response, you will see the assign event returns the text "Some new value" which is the fixed text that you typed in Step 3.
|
OPCFW_CODE
|
How to make your own perfect game
Today, I’ll share my top ten tips to make a perfect game that everyone can enjoy.

Make a Perfect Game for All Seasons

1. Choose the Perfect Season. To make a game that’s perfect for every season, you need to choose the season for the game you want to make. You can choose to make an online game, a free-to-play game, or even a classic game.

2. Pick a Good Game Engine. You can use any game engine for a game. For example, you can use Unity for the Unreal engine, but Unity also comes with a bunch of extra features.

3. Create a Gameplay Tree. That’s a game tree that shows all the game play in a game: who is attacking, who is defending, and where the player moves.

4. Set Up a Game Object. In a game, all the objects in a scene have a certain amount of resources. In order to move, a player needs to have the ability to use an object that has that resource. This resource can be resources that are available to the player, or a resource that’s not available.

5. Make Your GameObjects. In order to make objects in your game, you’ll need to set up a game object that is an item in the scene. You’ll also need to assign the object to a player, and assign that player an object with that resource (or a resource with that object).

6. Create an Object with an Event. You can create an object and assign it an event. An event is a message sent to the object, such as a button click or a sound. For an object to be able to trigger an event, you have to set the event to an object.

7. Set the Event to an Object. The event will be sent to an item that has a certain resource in its property, such as health or a healthbar. The item will then change its behavior to include the event. For more information about events, check out the Unity tutorial on event handling.

8. Set an Event to a Player. An object that has an event is attached to an existing object. For this example, we’re using the health bar, but you can also use an item like the health object, or any other item.

9. Add an Event. To attach an event to a user object, use the AddEvent() method.

10. Add a GameObject to the Game. A game object is an object in a Unity scene that can be used by a game to do various things. Game objects can be created by users, or developers can create them themselves. In addition, the game object can be moved around in Unity, and its properties can be changed. In this example we’re creating a game where the healthbar is an enemy, and the health is an Item with an ItemSetter.
OPCFW_CODE
|
There are many possible ways one could choose to nest columns inside a data frame. nest() creates a list of data frames containing all the nested variables: this seems to be the most useful form in practice.
A data frame.
A selection of columns. If empty, all variables are selected. You can supply bare variable names, or select all variables between x and z with x:z.
The name of the new column, as a string or symbol.
This argument is passed by expression and supports
quasiquotation (you can unquote strings
and symbols). The name is captured from the expression with
Arguments for selecting columns are passed to
tidyselect::vars_select() and are treated specially. Unlike other
verbs, selecting functions make a strict distinction between data
expressions and context expressions.
A data expression is either a bare name like x or an expression like c(x, y). In a data expression, you can only refer to columns from the data frame.
Everything else is a context expression in which you can only
refer to objects that you have defined with
col1:col3 is a data expression that refers to data columns; seq(start, end) is a context expression that refers to objects from the contexts.
If you really need to refer to contextual objects from a data expression, you can unquote them with the tidy eval operator !!. This operator evaluates its argument in the context and inlines the result in the surrounding function call. For instance, c(x, !! x) selects the x column within the data frame and the column referred to by the object x defined in the context (which can contain either a column name as string or a column position).
See also: unnest() for the inverse operation.
Attaching package: 'dplyr'
The following objects are masked from 'package:stats':
    filter, lag
The following objects are masked from 'package:base':
    intersect, setdiff, setequal, union

# A tibble: 3 x 2
  Species    data
  <fctr>     <list>
1 setosa     <tibble [50 x 4]>
2 versicolor <tibble [50 x 4]>
3 virginica  <tibble [50 x 4]>

# A tibble: 6 x 2
  feed      data
  <fctr>    <list>
1 horsebean <tibble [10 x 1]>
2 linseed   <tibble [12 x 1]>
3 soybean   <tibble [14 x 1]>
4 sunflower <tibble [12 x 1]>
5 meatmeal  <tibble [11 x 1]>
6 casein    <tibble [12 x 1]>

Loading required package: gapminder
Warning message:
In library(package, lib.loc = lib.loc, character.only = TRUE, logical.return = TRUE, :
  there is no package called 'gapminder'
|
OPCFW_CODE
|
Can I convert an existing channel?
I'm not finding any way to convert an existing channel to a private channel, or to convert an existing private channel to a regular channel. Is either of those possible or planned for the future?
Also, up until now we've had to create separate teams to have private information. I'd love to move the channels from those teams back into our main team and make them private so I can then get rid of the extra team. Is there any way to do that or is it planned for the future?
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: ce34b12b-aa50-608b-bbfa-a1fb4bb79f8d
Version Independent ID: 9ddeb6f4-f30a-1e72-fc94-ae66def91f40
Content: Private channels in Microsoft Teams
Content Source: Teams/private-channels.md
Service: msteams
GitHub Login: @LanaChin
Microsoft Alias: v-lanac
The article above states that channel cannot be converted from private to standard or vice versa. But I would also like to know whether this will change in the future.
I don't understand the last row in the first table (when to create a private channel) which states "Create a new team or create a new private channel in an existing team" even though there is no team that includes all members of the intended private group. Shouldn't the recommendation for this last scenario be "Create a new team"? If that is too simplistic, I guess it would be "Add missing users as guests to the most relevant team and create a private channel or create a new team".
We need the capability to change an existing channel's type to public or private. Is this possible?
Very good comment. I also look forward to being able to move existing channels, created in a team intended to be private, into a common team and then make those channels private. This will be great to clean up unnecessary teams and make daily work much more productive.
This will also narrow down the number of members that we have to look in case of a disaster (that can be accidental deletion of the resource group etc, when multiple members are using same Login ID). Much needed feature.
MUCH needed feature!!!!
I'm not clear why Microsoft would introduce this feature without the ability to blend what everyone has done as a work around because of the lack of the feature. This should have been a no-brainer. At the very minimum, there should have been a way to switch a current channel to private (one way street). Now, all of us are going to waste more time and resources creating NEW teams or new channels and manually recreating those WIKIs and moving files. Seems counter productive. I sure hope Microsoft is on top of this one.
Also looking for the option to convert public channel into a private channel and vice versa. Take a look at the UserVoice topics: https://microsoftteams.uservoice.com/forums/555103-public/suggestions/38535709-convert-private-channel-to-public-channel / https://microsoftteams.uservoice.com/forums/555103-public/suggestions/38974249-make-public-channel-private
I'm also looking for this feature. Right now, our only option would be to move all the contents from the public channel into the newly created private channel...
It would be a pain to go through all of them, plus you'd lose the thread history!
@kelemvor33 Thank you for submitting feedback. We understand that this issue has been resolved.
Please feel free to re-open this issue if there is a specific area of the docs that we can improve or make better. Thank you.
@scanum, you noted that the issue has been resolved, but I don't see the possibility to convert existing channels into private ones and vice versa. Would you please reopen the issue, or point out the article stating it has been implemented?
|
GITHUB_ARCHIVE
|
Looking to edit photos of my wife by manipulating the images. The manipulation includes adding objects to make the photo give a different meaning. The objects added should look real and natural as if it was present in the actual image. Along with that in few pictures need to change the background of the image to a real outdoor public location. The background
I need a TensorFlow ([log in to view URL] would be better) project which detects a 3D object using an Android mobile phone camera. 1. Input: 2D vide...v=Q1f-s6_yHtw&t=20s) which helps to understand the expected output. It might be confusing, but there should be no lidar needed for this project. Only 2D images and a point cloud of objects for the input data.
Looking for a Unity 3D programmer who can program a game app. The game will be 3D and will be in a board-game-style format. Artwork, 3D objects, music, sounds, and other various things will be provided. Primary programming will be the AI, functionality, transitions, and various screens, resulting in a functional game. The 3D part is just 1 screen for
We are working in electrical construction business and we need to count devices and cables on technical drawings so we can estimate the cost for the work.
Need at least 20 different photos of interior (all displays, layouts, panoramic) and exterior (facades) of ...requirements: 1) horizontal video 2) at least 60 seconds (no shaking) 3) 3K or 4K quality, clear and stable 4) The video should show everything you see in-store 5) No people or other objects that block the view should be in front of the camera
I need 3-5 Oracle Apex developers in order to build so...freelancers needs to have a solid knowledge/experience about Oracle Apex 19.x and Oracle DB 18c and higher. Tasks will be delivered in a data model script to build the database objects at dev environment and forms/reports requirements will be deliver in pseudo code to ease up the communications.
We need at least 20 different photos of interior (all displays, layouts, panoramic) and exterior (facades) of Pandora...1) horizontal video 2) at least 60 seconds (no shaking) 3) 3K or 4K quality, clear and stable 4) The video should show everything you see in-store 5) No people or other objects that block the view should be in front of the camera
I need two dynamic high-quality photoshop templates/mockups using smart objects & masks etc.. One living room scene & one master bedroom scene with several 'views' per scene & the ability to crop to 'uncomprimised' close-ups. They must be modern/contemporary and photo realistic. The templates/mockups will be used to display my own artwork & patterns
Objects Involved * Product * Price Dimension - child of product (fields: price, discount, quantity etc) * quote * quotelineitem A visualforce page shows list of a wrapper class. A row contains, Product, related price dimensions name as picklist options, price and discount related to selected price dimension, quantity. VF shows, existing quote line
Name of company: HD Clean Sneaks Main colours: main colour Navy blue , secondary colours: Grey, silver baby blue Theme: Luxury, Cool feel objects or icons to include: Trainers , cleaning products be creative This company is a luxury shoe cleaning company. Please be creative! People are coming here to have their sneakers cleaned so incorporate maybe
This design is to serve for an ice show logo. Th...buildings, plane, palm trees, the world shape etc. Colors: May be any colors but must also work as black on a white background. I included an example, but would like the objects to be behind the letters and serve a more professional look. Thank you in advance and reach out if you need any details.
Need to identify object in a live camera in C#.Net Windows application. When template match then put status of that part square is green otherwise ...Net Windows application. When template match then put status of that part square is green otherwise put to red. In a single screen having multiple location to verify w.r.t objects are available or not...
BOBJ Admin...components Business Objects Enterprise Content Central Management Server and the System Database File Repository Servers Web Application Services Web Intelligence Servers Adaptive Job Server and the Adaptive Processing Server Should have experience of creating and modifying SAP Transport Requests eg releasing and adding new objects in TRs
...are looking for US and Canada-based freelancers for a quick project taking photos of daily objects with your smartphones. This is just a trial, after which we will have much more work related to this project. The goal is to collect photos of the following 6 objects: [log in to view URL] (opened or closed rain umbrellas - No parasol, No lace parasol, No cocktail
[Sims Need To Be From Austria] Let me explain what our company does Our company is a compliance company that monitors value added services (VAS), in other words services that you pay with your phone credit. We monitor these types of services/ads around the world to make sure they are following the correct rules and guidelines in each country. For
Needs: Create a formula that calculates what % of time is left. Calculate what % of tasks are done. Check whether tasks and time are matching. If not matching, it should show we are behind. And similar objects...
Create a 3D animation of a blank piece of paper animating and folding into a car, a house, a ...a 3D animation of a blank piece of paper animating and folding into a car, a house, a wedding gown. All of it working like origami folding and unfolding into the different objects. The final frame of each object may be used in a print ad as well as digital.
I'm looking for someone to develop using Scriptable Objects a (low)Polly auto renderable world using MapBox for Unity. PHASE 1 requirements are very simple, we need to agree in what's going to be rendered because I don't want everything to be present in the map so most likely we're focusing in terrain, water, trees, vegetation and other ground level
Hi, We are a Furniture company that produces Furniture, Toys and Decoration Products for Babies and Kids. We need to get the Design of different Types of Furniture, Toys and Decoration Products. We would like to get designed the products below: 10 Babies ( 0-2 years ) Room (Furniture of the whole Room as a set ) 10 Kids Room ( 2-6 years ) and ( 7-11 years ) (Types of Furniture of the whole Room ...
I want to make an intro for a YouTube channel. The video must be animated and it has to be 4 seconds long. The channel will be about destroying objects with dental equipment. So I am thinking to animate a drill handpiece, an explorer, and extraction forceps chasing other random objects down the road (laptops, fruits, phones, shoes).
We require an expert website d...symbols/notations to the front end. Integration to API tool for graph drawing, these graphs should be publishable to the front end for users to see. Integration to API tool for drawing objects such a pie chart, bar chart, histogram, etc Integration to a language translation tool such as google language translation API.
I am looking to have an application which can do the following: 1 Create a copy of AD (active directory) GPOs (group policy objects) 2 create a new AD GPO, link it, and enforce it 3 For the new created GPO, I would like to have a group of buttons where I can click one of them to do edit the new GPO with one configuration, and the other buttons do the
...other web based forums and platforms), handling FB and Instagram accounts. Pop-art and retro brand design products within everything from framed high end prints to art/design objects, clothes etc. It´s a big plus if you´re also skilled in graphic design and/or have experience in Wikipedia editing (but not required). We are creating a new virtual "artist
I want to train a model to detect specific types of elements: rectangles, circles, straight lines, squares, but all hand drawn. The following tasks will be handled: 1- image segmentation and data labeling 2- building an object detection model
...js/wiki/Getting-started-with-WebGL-in-p5 More examples: [log in to view URL] Most of the code will be in here * I need to add basic physics so the little human people can push the objects and jump, just like in that demo; we will use the [log in to view URL] library: [log in to view URL] Some reference notes: [log in to view URL]...
...dataset of images, I need to segment foreground objects from the background for each image. the dataset has groundtruth segmentation results. The output image should be a black and white image with foreground as white and background as black. So the work is basically segmentation of foreground objects from images. Want a conventional algorithm, not
...through innovative solutiuons and digital technologies. We are developing a platform which enables architects to share 3D library objects with each other. We are looking for a BIM architect who can develop 3D building objects (wall structure, roof structure, windows, doors etc.) for our platform. We look forward to hearing from you! Your "Companion"
...Developer Dynamics AX/365 We're looking for someone with development experience who wants to work directly with clients. A developer is tasked with building X++ development objects and support What you will do: Meeting regularly with client teams to understand their architecture needs Developing and debugging solutions between AX and other systems
...part#2). This includes dimensions calibration, we need to have a highly accurate understanding of human and environment dimensions. 4. Objects recognition. E.g. 3D Reconstruction with semantics (Understanding objects in the room). Desired tech stack: * Python * OpenCV * TensorFlow (or PyTorch) * (Optional) C/C++. Desired location: Ukraine (this
...part of a collection! We want to create a product collection in different colors (see the video link) STYLE: modern and original WHAT NOT TO DO: No lamps and electrical objects. Our design style: [log in to view URL] Packaging reuse ideas: [log in to view URL] [log in to view URL] [log in to view URL]
...that shape of island is similar with something - or.. the logo will be just a 3d object sweet and calling? You can use your own idea which doesn`t contain any of previous objects, choose any what you like, but don`t forget to think about the name itself. Legends of Bali. What is it about? Mystery? - Maybe. Fun days for whole family - maybe as well.
Are you confident at PHP? What I want is to render an mp4 video after editing it using PHP. [log in to view URL] As you can see, you can import video, images, text, and objects. After all that, I can export my edited video. That is what I want. Can you do this? If so, bid.
|
OPCFW_CODE
|
package trcc;
import processing.core.*;
import java.util.ArrayList;
/**
* trcc Propos
*/
public class Propos {
public float padding = 10;
public int poster_w;
public int poster_h;
public float stroke_weight = 1;
public float fg;
public float bg;
PApplet app;
public final static String VERSION = "##library.prettyVersion##";
public PGraphics buffer;
/**
* Constructor
*
* @example Hello
* @param theParent the parent PApplet
*/
public Propos(PApplet theParent) {
app = theParent;
fg = 0;
bg = 255;
poster_w = 586;
poster_h = 810;
buffer = app.createGraphics(poster_w, poster_h, PConstants.P2D);
}
/**
* The surface of the poster
*
* @return PGraphics
*/
public PGraphics ground() {
buffer.beginDraw();
buffer.background(bg);
buffer.endDraw();
return buffer;
}
/**
* calculateFontSize utility function
*
* @return float
*/
public float calculateFontSize(String headline, PFont font) {
// Known issue: the fitted headline comes out slightly too narrow, because
// textWidth() is measured before the increased size has been applied.
float val = 0;
buffer.beginDraw();
buffer.textFont(font); // apply the passed font before measuring
while (buffer.textWidth(headline) < poster_w) {
val += 1;
buffer.textSize(val);
}
buffer.endDraw();
return val;
}
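The grow-until-it-fits search in calculateFontSize() can be sketched in plain Java. The width model below (width proportional to font size and character count) is a stand-in assumption for PGraphics.textWidth(), not Processing's actual metrics:

```java
// Plain-Java sketch of the calculateFontSize() search. A simplified width
// model (average glyph width = 0.5 * fontSize) stands in for
// PGraphics.textWidth(); the loop mirrors the method above: grow the size
// until the measured width reaches the target poster width.
public class FontFit {
    // Hypothetical width model, NOT real font metrics.
    static float textWidth(String s, float fontSize) {
        return s.length() * fontSize * 0.5f;
    }

    static float fitSize(String headline, float targetWidth) {
        float size = 0;
        while (textWidth(headline, size) < targetWidth) {
            size += 1;
        }
        return size;
    }

    public static void main(String[] args) {
        // "HEADLINE" (8 chars, width 4*size) first reaches 586 px at size 147.
        System.out.println(fitSize("HEADLINE", 586f));
    }
}
```

A linear scan matches the original's behavior; since the modeled width is monotone in the size, a binary search would find the same value faster for large posters.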
/**
* Headline
*
* @return PGraphics
*/
public PGraphics headline(String txt, PFont font, char align, float fontSize, float lineHeight) {
buffer.beginDraw();
buffer.clear();
buffer.textMode(PConstants.SHAPE);
buffer.fill(fg);
buffer.noStroke();
buffer.textFont(font);
buffer.textSize(fontSize);
buffer.textLeading(fontSize * lineHeight);
if (align == 'L') {
double d = -fontSize * 0.2;
float y = (float) d;
buffer.textAlign(PConstants.LEFT, PConstants.TOP);
buffer.text(txt, 0, y);
} else if (align == 'C') {
double d = -fontSize * 0.2;
float y = (float) d;
buffer.textAlign(PConstants.CENTER, PConstants.TOP);
buffer.text(txt, buffer.width / 2, y);
}
buffer.endDraw();
return buffer;
}
/**
* Display a grid of elements
*
* @return PGraphics
*/
public PGraphics grid(float cols, float rows) {
float tile_w = poster_w / cols;
float tile_h = poster_h / rows;
buffer.beginDraw();
buffer.clear();
buffer.noFill();
buffer.stroke(fg);
buffer.strokeWeight(stroke_weight);
for (int x = 1; x < cols; x++) {
buffer.line(x * tile_w, 0, x * tile_w, buffer.height);
}
for (int y = 1; y < rows; y++) {
buffer.line(0, y * tile_h, buffer.width, y * tile_h);
}
buffer.endDraw();
return buffer;
}
/**
* Display an image
*
* @return PGraphics
*/
public PGraphics img(PImage image, float x, float y, int w, int h) {
if (app.frameCount == 1) {
image.resize(w, h);
}
buffer.beginDraw();
buffer.clear();
buffer.imageMode(PConstants.CENTER);
buffer.push();
buffer.translate(buffer.width / 2 + x, buffer.height / 2 + y);
buffer.image(image, 0, 0);
buffer.pop();
buffer.endDraw();
return buffer;
}
/**
* Display circles
*
* @return PGraphics
*/
public PGraphics circles() {
buffer.beginDraw();
buffer.clear();
buffer.stroke(fg);
buffer.noFill();
buffer.strokeWeight(stroke_weight);
buffer.push();
buffer.ellipse(buffer.width / 2, buffer.height / 2, buffer.width, buffer.width);
buffer.pop();
buffer.endDraw();
return buffer;
}
public ArrayList<PVector> points;
/**
* Display a scratch
*
* @return PGraphics
*/
public PGraphics scratch(int pts) {
if (app.frameCount == 1) {
points = new ArrayList<PVector>();
for (int i = 0; i < pts; i++) {
float x = app.random(buffer.width);
float y = app.random(buffer.height);
points.add(new PVector(x, y));
}
}
buffer.beginDraw();
buffer.clear();
buffer.noFill();
buffer.stroke(fg);
buffer.strokeWeight(stroke_weight);
buffer.beginShape();
for (int i = 0; i < points.size(); i++) {
buffer.curveVertex(points.get(i).x, points.get(i).y);
}
buffer.endShape();
buffer.endDraw();
return buffer;
}
/**
* rasterizer
*
* @return PGraphics
*/
public PGraphics rasterize(PImage img, float tilesX, float tilesY) {
PGraphics buffer2 = app.createGraphics(poster_w, poster_h);
if (app.frameCount == 1) {
img.resize(buffer.width, 0);
}
buffer2.beginDraw();
buffer2.clear();
buffer2.imageMode(PConstants.CENTER);
buffer2.image(img, buffer.width / 2, buffer.height / 2);
buffer2.endDraw();
float tileW = buffer.width / tilesX;
float tileH = buffer.height / tilesY;
buffer.beginDraw();
buffer.noStroke();
buffer.clear();
buffer.fill(fg);
PImage bufferImg = buffer2.get();
for (int x = 0; x < tilesX; x++) {
for (int y = 0; y < tilesY; y++) {
int px = Math.round(x * tileW);
int py = Math.round(y * tileH);
int c = bufferImg.get(px, py);
float b = app.map(app.brightness(c), 0, 255, 0, 1);
buffer.fill(fg);
buffer.push();
buffer.translate(x * tileW, y * tileH);
buffer.rect(0, 0, tileW * b, tileH * b);
buffer.pop();
}
}
buffer.endDraw();
return buffer;
}
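The tile sizing inside rasterize() hinges on PApplet.map(), which is a plain linear rescale of brightness 0..255 onto 0..1. A minimal stand-alone sketch of that mapping (the map() here is an equivalent re-implementation, not Processing's own code):

```java
// Stand-alone sketch of the brightness-to-tile mapping used in rasterize():
// a sampled pixel's brightness (0..255) is linearly rescaled to 0..1 and
// used to scale the tile rectangle, so a white pixel yields a full-size
// tile and a black pixel collapses the tile to zero.
public class TileMap {
    // Equivalent of PApplet.map(v, inMin, inMax, outMin, outMax).
    static float map(float v, float inMin, float inMax, float outMin, float outMax) {
        return outMin + (outMax - outMin) * (v - inMin) / (inMax - inMin);
    }

    static float tileWidth(float brightness, float tileW) {
        return tileW * map(brightness, 0, 255, 0, 1);
    }

    public static void main(String[] args) {
        System.out.println(tileWidth(255, 40)); // full 40 px tile
        System.out.println(tileWidth(0, 40));   // tile collapses to 0
    }
}
```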
/**
* meta
*
* @return PGraphics
*/
public PGraphics meta(PFont font, float fontsize, float lineHeight, float offsetY, String text) {
buffer.beginDraw();
buffer.clear();
buffer.noStroke();
buffer.textMode(PConstants.SHAPE);
buffer.fill(0);
buffer.textFont(font);
buffer.textAlign(PConstants.CENTER, PConstants.TOP);
buffer.textSize(fontsize);
buffer.textLeading(fontsize * lineHeight);
buffer.push();
buffer.translate(buffer.width / 2, buffer.height + offsetY);
buffer.text(text, 0, 0);
buffer.pop();
buffer.endDraw();
return buffer;
}
public PGraphics meta(PFont font, float fontsize, float lineHeight, float offsetY, String text1, String text2) {
buffer.beginDraw();
buffer.clear();
buffer.noStroke();
buffer.textMode(PConstants.SHAPE);
buffer.fill(0);
buffer.textFont(font);
buffer.textAlign(PConstants.LEFT, PConstants.TOP);
buffer.textSize(fontsize);
buffer.textLeading(fontsize * lineHeight);
buffer.push();
buffer.translate(padding, buffer.height + offsetY);
buffer.text(text1, 0, 0);
buffer.pop();
buffer.push();
buffer.translate(buffer.width / 2, buffer.height + offsetY);
buffer.text(text2, 0, 0);
buffer.pop();
buffer.endDraw();
return buffer;
}
public PGraphics meta(PFont font, float fontsize, float lineHeight, float offsetY, String text1, String text2,
String text3) {
buffer.beginDraw();
buffer.clear();
buffer.noStroke();
buffer.textMode(PConstants.SHAPE);
buffer.fill(0);
buffer.textFont(font);
buffer.textAlign(PConstants.LEFT, PConstants.TOP);
buffer.textSize(fontsize);
buffer.textLeading(fontsize * lineHeight);
buffer.push();
buffer.textAlign(PConstants.LEFT, PConstants.TOP);
buffer.translate(padding, buffer.height + offsetY);
buffer.text(text1, 0, 0);
buffer.pop();
buffer.push();
buffer.textAlign(PConstants.CENTER, PConstants.TOP);
buffer.translate(buffer.width / 2, buffer.height + offsetY);
buffer.text(text2, 0, 0);
buffer.pop();
buffer.push();
buffer.textAlign(PConstants.RIGHT, PConstants.TOP);
buffer.translate(buffer.width - padding, buffer.height + offsetY);
buffer.text(text3, 0, 0);
buffer.pop();
buffer.endDraw();
return buffer;
}
/**
* timestamp
*
* @return String
*/
public String timestamp() {
int y = PApplet.year(); // 2003,2004, 2005, etc.
int m = PApplet.month(); // Values from 1 - 12
int d = PApplet.day(); // Values from 1 - 31
int h = PApplet.hour();
int mi = PApplet.minute();
int sec = PApplet.second();
int mill = app.millis();
String val = "_" + String.valueOf(y) + String.valueOf(m) + String.valueOf(d) + "_" + String.valueOf(h) + "_"
+ String.valueOf(mi) + "_" + String.valueOf(sec) + "_" + String.valueOf(mill);
return val;
}
}
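One caveat with timestamp(): it concatenates unpadded integers, so the date portion can be ambiguous (year 2024, month 1, day 11 and year 2024, month 11, day 1 both begin "2024111"). A java.time variant with fixed-width fields avoids this; the pattern string below is an illustrative choice, not part of the library above:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Zero-padded alternative to Propos.timestamp(): DateTimeFormatter pads
// every field to a fixed width, so the result is unambiguous and sorts
// lexicographically in chronological order.
public class Stamp {
    static String timestamp(LocalDateTime now) {
        return now.format(DateTimeFormatter.ofPattern("_yyyyMMdd_HH_mm_ss"));
    }

    public static void main(String[] args) {
        System.out.println(timestamp(LocalDateTime.of(2024, 1, 11, 9, 5, 3)));
        // _20240111_09_05_03
    }
}
```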
|
STACK_EDU
|
Location: United Kingdom (London, City of) Type: Permanent
Counterparty Credit Risk Quant – VP/ED level
The Counterparty Credit Risk team within the Quantitative Research group (QR CCR) is responsible for developing and supporting models to measure counterparty risk and funding costs for the investment bank. This requires large-scale cross-asset scenario generation engines as well as highly optimized portfolio valuation models. The counterparty exposure calculations are used for credit exposure management, CVA & FVA hedging, as well as credit risk capital calculations. The responsibilities of the team span the full range of activities, from new model specification and managing model approval to model implementation and support of our various stakeholders, including trading, risk, technology, and capital reporting functions.
Design and implement new testing and diagnostic tools for identifying and understanding model weaknesses and their impacts. Tests include, among others, a variety of statistical tests, benchmarking against alternative models, and testing of calibration and implementation. Impact assessments can be risk-based or via comparison to alternative approaches, and often bespoke solutions will be required, tailored to the underlying model and its limitations.
Lead interpretation of test results and remediation work, including communications with stakeholders and coordination or direct implementation of modelling enhancements.
Working closely with asset-aligned Quantitative Research groups in order to understand and address product-level pricing limitations.
Liaising with technology teams in order to build out risk management systems and run diagnostics tools.
Ensuring clear documentation and testing of models and working closely with the Model Review Group in order to facilitate model approvals.
Supporting the trading team and risk organisation in pricing and risk managing credit risk and understanding of model limitations.
Liaising with Valuation Control and risk groups to understand limitations and risks in existing models and help in setting appropriate reserves and limits.
Required Skills and Experience:
A very structured mathematical approach to problem solving, experience with quantitative modeling, time series / econometrical analysis, hypothesis testing and general statistics, risk neutral pricing, business overview, and the ability to work in a dynamic environment.
Excellent communication skills are required in the interaction with trading, technology, and control functions. Ideally, you will also have a healthy interest in good software design principles.
A PhD in a numerate subject from a top academic institution is a plus, but not an absolute requirement.
Very strong mathematical and financial modeling skills. Good knowledge of risk neutral pricing approaches for a variety of asset classes (e.g. interest rates, inflation, equity, FX). Good knowledge or understanding of statistical testing, times series analysis, econometrics.
Strong interest in programming and design. Ideally some experience with coding in Python. In addition, experience with C++ would be a plus.
Strong communication and documentation skills. Ability to present technical information clearly.
Pro-active attitude. Should have a natural interest to learn about our business, models, and infrastructure, and also desire and drive to identify, quantify, and fix model issues and limitations.
|
OPCFW_CODE
|
Departing a little from my usual posts on Windows Client and related news, I want to share my continued excitement with the investments from Microsoft related to Windows Server (specifically Windows Server 2012).
Whether you're setting up a single server for your small business or looking to transform your datacenter environment, Windows Server 2012 can deliver. Windows Server 2012 has four key tenets to help your organization cloud-optimize its IT while saving costs on expensive solutions that are difficult to integrate.
- Built from the Cloud Up – Windows Server 2012 is designed for IT pros to optimize for the cloud while satisfying business needs faster and more efficiently by providing a highly available, easy-to-manage, multi-server platform that offers the following benefits: Flexible Storage, Continuous Availability, and Management Efficiency
- Transform the Datacenter – If you want the flexibility of a private cloud, implementing solutions in a virtualized environment is not enough. Windows Server 2012 lets you go beyond virtualization to deploy and more securely connect to private clouds in a flexible IT environment that adapts dynamically to changing business needs. New and enhanced features provide high performance and scalability to enable a truly multitenant infrastructure where networking, compute, and storage resources are isolated among tenants on the same host.
- Enable Modern Apps – Windows Server 2012 is a proven application and web platform that includes thousands of applications already built and deployed. It offers the flexibility to build infrastructure across premises on an open, scalable, and elastic web and application platform.
- Empower People-centric IT – Windows Server 2012 empowers IT to provide users with flexible access to data and applications from virtually anywhere, on popular devices—all with a rich user experience. It also simplifies management and improves data security, control, and compliance. Windows Server 2012 offers the following benefits to IT pros and end users: Access from virtually anywhere on any device, Full Windows experience anywhere, with Enhanced Data Security and Compliance.
Saving Costs with Windows Server 2012
There are two key areas that I want to focus on when considering cost savings with Windows Server 2012.
The first area is related to virtualization with Microsoft Hyper-V. Hyper-V has come a long way since its introduction as Windows Server Virtualization. Many large-scale organizations are beginning to see the performance capabilities of Hyper-V and have evaluated possible plans to migrate from the more expensive VMware solution to Microsoft Hyper-V. With Windows Server 2012, this is made all the more possible by the technological breakthroughs of the platform and the ease of management across the datacenter, including the cloud. For your reference, I want to include a whitepaper for your research on why you should consider Windows Server 2012 Hyper-V for your organization: Whitepaper: Why Hyper-V? Competitive Advantages of Windows Server 2012 Hyper-V over VMware vSphere 5.1.
Total Economic Impact of Windows Server 2012
The second area is just raw savings. Windows Server 2012 brings so much value to an organization. Many of my customers already own Windows Server and the upgrade rights to Windows Server 2012 and are looking for ways to prioritize the upgrades.
Put your own research to test by reviewing the Total Economic Impact of Windows Server 2012 whitepaper to see how Windows Server 2012 can benefit your organization and save costs.
|
OPCFW_CODE
|
Introduction and Participants
This document provides the details of the processes applicable to OpenNTF contributions and project development. It describes how to become involved, the roles and responsibilities of each of the various types of participants (Contributors, Committers, Technical Committee members) – and the details of submitting and clearing Projects.
It is recommended that participants in OpenNTF become familiar with the IP Policy. Many aspects of this Contribution Process are based on the IP Policy and, if there are any discrepancies between the two documents, the IP Policy takes precedence.
Becoming A Contributor
Here are the steps to becoming a Contributor to a project (whether on the OpenNTF site or the github.com/openntf site):
1. Set up an OpenNTF account - If you don't have one already, create an OpenNTF user account for yourself:
2. Execute the Individual Contributor License Agreement ("ICLA"):
Print the agreement to create a hardcopy
Sign the hardcopy
Fax the signed hardcopy to +1-845-491-7347 or email a scan of the hardcopy to IP-manager at openntf dot org
3. Or, if your employer has signed a Corporate Contributor License Agreement ("CCLA"), and that Agreement lists your name in the list of covered Contributors, then you do not have to execute an ICLA. If the CCLA does not list you, you can either ask your company to add you to the list, or alternatively complete an ICLA.
If there are any questions as to whether your company has signed a CCLA, and whether or not you are listed on it, you can contact IP-Manager at openntf dot org to find out.
3 Becoming a Committer
To become a Committer, you must first be a Contributor. You then need to send a request to IP-Manager at openntf dot org, who will then set up an electronic vote of the existing Committers to accept or reject your application. Alternatively, the Steering Committee may appoint Committers.
Roles and Responsibilities for Contributors and Committers
Contributors form the backbone of OpenNTF. They are the ones who develop and manage the projects. Any Contributor may:
apply to join an ongoing Project by sending a request to the Project Lead;
initiate a Project – see below for a description;
become a Committer (as described in Section 3 above).
Committers are the OpenNTF Release Managers. As described in the IP Policy, Committers help with the Clearance process. The Project Clearance process is handled by Committers and the IP Manager.
Creation and Management of Projects:
While any Contributor may create new Projects, it is recommended that Project ideas first be posted to the OpenNTF forum to canvass the opinions of other users. It is best to provide a relatively complete plan.
Before you get started, please consider the following:
If there is another similar project already on OpenNTF, consider teaming up with the folks running that project so that you can get more done in less time.
Only create projects and post code according to the terms of the IP Policy.
If your project will include code that you (or other Contributors) did not develop, then you should consult the IP Manager before posting it. In this way you can get some assistance in managing the licensing issues associated with this 3rd-party code.
By creating a project, you agree to the IP Policy and to provide the full source code for the application. This is after all an open source website!
Every Project has its own starting page which contains the Project overview, release information, news, discussion, feature requests and bug submissions for your project. You can start a discussion about your project or post any news at any time. If you have questions, don't worry - we're there to help you! Click on “Contact us” near the bottom of the Get Involved page or feel free to post your questions on the Main Forum.
The Project Leads are required to monitor the developer Forums associated with all Projects for which they have commit privileges, and to monitor any emails linking to content related to their project.
Once you have posted your project, you can send the IP Manager a request to clear the release. Having your code cleared gives users more confidence that your code is less likely to have IP issues, and more likely to work properly. The IP Manager and a Committer will then carry out an analysis of the Release. The items analyzed during the IP Review include:
Verification that Contributors are covered by ICLAs or CCLAs
3rd-party code accounted for in Notice files
Licenses are compatible
Code appears to run properly.
When the Clearance process is successfully completed, the release will be flagged as “Cleared”.
Projects that are sponsored by OpenNTF, such as OpenNTF Essentials:
the project license will be Apache 2.0
3rd-party components must be cleared.
This means that the full content of any OpenNTF-sponsored project, or project within OpenNTF Essentials, including 3rd-party components, must pass the OpenNTF clearance process and be licensed under, or be compatible with, Apache 2.0. There is a list of pre-cleared projects that contributors are free to use without further issue. If a component is not on the list, then a contributor can email ip-manager at openntf to ask for it to be checked out.
There is an OpenNTF space on GitHub to support the development of projects involving teams of Contributors. The idea is that development would be done on the Github site, with periodic releases of the project made to the OpenNTF site.
To contribute to a GitHub project the same rules apply. All contributors (those who are members of the project “Team”, or even those who wish to contribute code by submitting pull requests to the project Team) must be OpenNTF Contributors as defined in Section 2 above.
To be allocated a Team space within the OpenNTF space of GitHub, you need to be registered on GitHub as well as on OpenNTF – and you need to send a request to IP-Manager at openntf dot org.
This section provides a description of how to include the appropriate License and Notice information.
It is the Project Lead’s responsibility to ensure that the appropriate license information is provided with each Component, Sample and Project. In the top level (root) directory for each OpenNTF project there must be the following files:
License File. This must be one of the approved OpenNTF licenses (Apache, GPL3, AGPL3, or LGPL3)
Other License Files. At times Projects include software from other open source projects licensed under licenses which require that the license information be passed on in your project (such as the BSD and MIT licenses). The license text for these components must be included under the names “LICENSE_Xxxx.txt” where Xxxx is the name of the component.
Notice File. The first few lines of each NOTICE file should include the following text, suitably modified to reflect the product name and year(s) of distribution of the current and past versions of the product:
Copyright [yyyy] [Copyright owner(s)]
This product includes software developed for
The NOTICE file must also include a list of the 3rd-party components, the URL where each may be obtained, the name of the associated license, and any notices required by those components. Notice files may also contain credits for the Project Leads and Contributors.
All License and Notice files should be simple ASCII text files.
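As a rough illustration, a NOTICE file following the rules above might look like the sketch below. All names, years, and the component entry are hypothetical, and the wording of the "developed for" line is an assumed completion, not taken from the OpenNTF templates.

```text
MyWidget for XPages
Copyright 2013 Jane Developer

This product includes software developed for the OpenNTF project.

3rd-party components:
- ExampleLib 1.2, obtained from http://example.org/examplelib
  License: MIT License (see LICENSE_ExampleLib.txt)

Project Lead: Jane Developer
Contributors: John Sample
```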
Where feasible, all source files in an Apache-licensed product should include the following header text. This is advisable, for example, for Java source files, but not for design elements in NSFs or NTFs.
Copyright [yyyy] [name of copyright owner(s)]
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Source files in GPL/AGPL/LGPL projects should include a similar header – referring to the license being used.
Note however, if you are creating source files based on third party code, do not use the above source file header. In this case, you may insert your own copyright statements (along with the original copyright statements of the content) – and remember to list the third party code in your Notice file.
Do not, under any circumstances, remove copyright statements from any submission to OpenNTF.
OpenNTF Participation without writing code
There are also ways to get engaged with OpenNTF without contributing code.
Any registered OpenNTF user can rate Projects and add comments as “Testimonials”.
Registered OpenNTF users can also create new ideas and vote and comment on other ideas.
General comments or questions can be posted in the OpenNTF forum.
OpenNTF is always also looking for people to help extend and maintain the OpenNTF technical infrastructure, e.g. to update the UI of the web site. If you would like to help please contact the chair of the OpenNTF Technical Committee at TC-chairman at openntf dot org.
objectTypes = {
// Obstacles are basically objects with a sprite, an offset, a depth and a group.
// An obstacle's group determines what happens when the ball collides with it
// Possible groups are:
// - obstacles: ball bounces off
// - deadlies: ball dies, level is restarted
// - destinations: level is completed, next level is started
Ball: {
class: 'Ball',
depth: 0,
bitmap: 'Objects.Ball',
offsetX: 26,
offsetY: 26,
offsetLvlX: 26,
offsetLvlY: 26
},
Grass: {
class: 'Obstacle',
bitmap: 'Objects.Grass',
mask: 'Masks.Grass',
group: 'obstacles',
depth: 3,
offsetX: 32,
offsetY: 7,
offsetLvlX: 32,
offsetLvlY: 7
},
GrassUpHill: {
class: 'Obstacle',
bitmap: 'Objects.GrassUpHill',
mask: 'Masks.GrassUpHill',
group: 'obstacles',
depth: 3,
offsetX: 32,
offsetY: 78,
offsetLvlX: 32,
offsetLvlY: 78
},
GrassDownHill: {
class: 'Obstacle',
bitmap: 'Objects.GrassDownHill',
mask: 'Masks.GrassDownHill',
group: 'obstacles',
depth: 3,
offsetX: 32,
offsetY: 78,
offsetLvlX: 32,
offsetLvlY: 78
},
Lian: {
class: 'Obstacle',
bitmap: 'Objects.Lian',
mask: 'Objects.Lian',
group: 'obstacles',
depth: 1,
offsetX: 40,
offsetY: 108,
offsetLvlX: 40,
offsetLvlY: 108
},
Bush: {
class: 'Obstacle',
bitmap: 'Objects.Bush',
mask: 'Objects.Bush',
group: 'obstacles',
depth: 2,
offsetX: 63,
offsetY: 57,
offsetLvlX: 63,
offsetLvlY: 57
},
Spikes: {
class: 'Obstacle',
bitmap: 'Objects.Spikes',
mask: 'Objects.Spikes',
group: 'deadlies',
depth: 4,
offsetX: 20,
offsetY: 20,
offsetLvlX: 20,
offsetLvlY: 20
},
Destination: {
class: 'Obstacle',
bitmap: 'Objects.Destination',
mask: 'Objects.Destination',
group: 'destinations',
depth: 1,
offsetX: 34,
offsetY: 42,
offsetLvlX: 34,
offsetLvlY: 42
},
// Controllers are objects with special collision handling
// Each controller has a canControl-function which returns true if the controller can handle a specific object
// If an object that passes the canControl-test collides with the controller, the doControl-function is called with the object as argument
Trampoline: {
class: 'Controller',
bitmap: 'Objects.Trampoline',
mask: 'Objects.Trampoline',
offsetX: 47,
offsetY: 12,
offsetLvlX: 47,
offsetLvlY: 12,
group: 'controllers',
depth: 1,
canControl: function (object) {
return object instanceof Ball;
},
doControl: function (object) {
var peakTime, maxSpeed;
// Bounce the ball
if (object.speed.y > 0 && Math.abs(object.x - this.x) < 32) {
object.speed.y *= -1.1;
// calculate max speed
maxSpeed = -Math.sqrt(2 * main.gravity * (object.y - 30));
//console.log(maxSpeed);
object.speed.y = Math.max(maxSpeed, object.speed.y);
//console.log(object.speed.y)
}
}
},
TurnHV: {
class: 'Controller',
bitmap: 'Objects.Controller',
mask: 'Controllers.Mask',
opacity: 0,
offsetX: 8,
offsetY: 8,
offsetLvlX: 8,
offsetLvlY: 8,
group: 'controllers',
canControl: function (object) {
return object instanceof Hedgehog;
},
doControl: function (object) {
// Only turn if necessary
if ((this.x - object.x) * object.speed.x < 0) {
return;
}
var oldAnimationSpeed;
object.speed.x = -object.speed.x;
object.speed.y = -object.speed.y;
object.y += engine.convertSpeed(object.speed.y);
// Remember the object's own animation speed (not the controller's) so the callback can restore it
oldAnimationSpeed = object.animationSpeed;
object.animationSpeed = 0;
if (object.source === 'Objects.Hedgehog') {
engine.currentRoom.loops.onRunning.detachFunction(object, object.doMovement);
object.animate({widthScale: -object.widthScale}, {duration: 400, callback: function () {
this.animationSpeed = oldAnimationSpeed;
engine.currentRoom.loops.onRunning.attachFunction(this, this.doMovement);
}});
}
}
},
Teleport: {
class: 'Controller',
bitmap: 'Objects.TeleIn',
mask: 'Masks.Tele',
offsetX: 32,
offsetY: 32,
offsetLvlX: 32,
offsetLvlY: 32,
group: 'controllers',
depth: 0,
canControl: function (object) {
return object instanceof Ball;
},
doControl: function (object) {
var i, dist, tele;
if (object.dead) {
return;
}
// Check if the portal has a destination
if (this.destination) {
// Find the distance from the ball to the teleport
dist = this.getDistanceTo(object);
// Fade out the ball based on the distance
object.stopAnimations();
object.opacity = Math.max(0, (dist - 15) / 30);
// If the distance is below 15 pixels, teleport the ball to the destination
if (dist < 15) {
// Find the destination based on the "this.destination"-var
for (i = 0; i < main.levelController.controllers.length; i ++) {
tele = main.levelController.controllers[i];
if (tele.name === this.destination) {
// Move the ball and fade it in
object.moveTo(tele.x, tele.y);
break;
}
}
}
object.animate({opacity: 1}, {duration: 400});
}
}
},
Power: {
class: 'Controller',
bitmap: 'Powerups.Power',
mask: 'Masks.Power',
offsetX: 22,
offsetY: 22,
offsetLvlX: 22,
offsetLvlY: 22,
group: 'controllers',
depth: 0,
canControl: function (object) {
return object instanceof Ball
},
doControl: function (object) {
if (object.power < object.powerMax && this.power > 0) {
this.power -= engine.convertSpeed(60);
object.power = Math.min(object.powerMax, object.power + engine.convertSpeed(60));
object.updatePower();
if (this.power < 0) {
this.animate({heightScale: 4, opacity: 0}, {duration: 200, callback: function () {
engine.purge(this);
}});
}
else {
this.direction = (1 - this.power / this.powerMax) * Math.PI / 2;
}
}
}
},
Hedgehog: {
class: 'Hedgehog',
bitmap: 'Objects.Hedgehog',
mask: 'Objects.Hedgehog',
group: 'deadlies',
depth: 0,
offsetX: 38,
offsetY: 27,
offsetLvlX: 38,
offsetLvlY: 27,
initSpeedX: 100,
initSpeedY: 0
},
HedgehogClimbing: {
class: 'Hedgehog',
bitmap: 'Objects.HedgehogHanging',
mask: 'Objects.HedgehogHanging',
group: 'deadlies',
depth: 0,
offsetX: 21,
offsetY: 34,
offsetLvlX: 21,
offsetLvlY: 34,
initSpeedX: 0,
initSpeedY: 100
},
HedgehogSwinging: {
class: 'Hedgehog',
bitmap: 'Objects.HedgehogHanging',
mask: 'Objects.HedgehogHanging',
group: 'deadlies',
depth: 0,
offsetX: 20,
offsetY: -105,
offsetLvlX: 20,
offsetLvlY: -105,
initSpeedX: 0,
initSpeedY: 0
},
};
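The Trampoline controller above caps the bounce with the free-fall relation v = sqrt(2·g·h): the ball may never leave faster than the speed it would gain falling from 30 pixels below the top of the room (y grows downward, so upward speeds are negative). Here is a standalone sketch of that cap; the function name and the sample gravity value are ours, only the formula and the 1.1 amplification come from the code above.

```javascript
// Standalone sketch of the speed cap in Trampoline.doControl above.
function cappedBounceSpeed(speedY, ballY, gravity) {
  // Invert and amplify the incoming downward speed, as the controller does
  var bounced = speedY * -1.1;
  // Fastest allowed upward speed: the free-fall speed from 30px below the top,
  // v = -sqrt(2 * g * h) with h = ballY - 30
  var maxSpeed = -Math.sqrt(2 * gravity * (ballY - 30));
  // Both values are negative, so Math.max keeps the smaller magnitude
  return Math.max(maxSpeed, bounced);
}
```

For example, with gravity 100 and a ball at y = 80 falling at speed 100, the amplified bounce (about -110) exceeds the cap of -100, so the ball leaves at -100.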
A very EASY and FREE way for ANYONE to write their first computer program in TEN MINUTES. Note: This instructable is for people who think that programming is some sort of magical thing that you need expensive programs or high-tech skills to do. Hopefully this instructable will remove the veil from their eyes to show them that it is easy and accessible to anyone with a computer.
Coding means to write code, or to write instructions for a computer. Programming, similarly, means to write code or instructions. Today, you will program with blocks on the computer (if you’re using an online tutorial) or with pen and paper (if you’re using an unplugged activity). Debugging means to check code for mistakes and try to fix errors. ACTIVITY (30-45 MINUTES) Challenge your.
Computer Programming - Basics - We assume you are well aware of the English language, which is a well-known human interface language. English has a predefined grammar, which needs to be followed.
Lesson 1: Write your first computer program. Overview. In this lesson, learners of all ages get an introductory experience with coding and computer science in a safe, supportive environment. This lesson has been designed for young learners, ages 4-10, but can be adapted for older learners using the differentiation suggestions provided. Purpose: this lesson introduces the core CS concepts of coding.
An office suite can be used to write documents or spreadsheets. Video games are computer programs. A computer program is stored as a file on the computer's hard drive. When the user runs the program, the file is read by the computer, and the processor reads the data in the file as a list of instructions. Then the computer does what the program tells it to do. A computer program is written by a programmer.
A computer program is a collection of instructions that can be executed by a computer to perform a specific task. Most computer devices require programs to function properly. A computer program is usually written by a computer programmer in a programming language. From the program in its human-readable form of source code, a compiler or assembler can derive machine code - a form consisting of instructions the computer can carry out directly.
Computer code is essentially a list of instructions that can be run by a certain program. Most code consists of plain-text documents so they can be used for many different programs.
How many ways can we write a Java program? There are many ways to write a Java program. The modifications that can be done in a Java program are given below: 1) By changing the sequence of the modifiers, the method prototype is not changed in Java. Let's see the simple code of the main method.
It's a simple puzzle game where you push tokens around and clear the board by matching pairs. The game would play the same with or without a storyline, but the author chose to include a story about magicians and their apprentices, and a mythical land graphically displayed in the opening cinematics. This simple game won Computer Gaming World's puzzle game of the year award.
Computer Programming. Why Programming? You may already have used software, perhaps for word processing or spreadsheets, to solve problems. Perhaps now you are curious to learn how programmers write software. A program is a set of step-by-step instructions that directs the computer to do the tasks you want it to do and produce the results you want. There are at least three good reasons for learning programming.
Searching for a writing program on Windows 10? I know there are free writing programs; is there a simple writing program for Windows 10 that doesn't cost any money and is smaller than OpenOffice? I'm looking for writing programs that are not always the expensive Microsoft ones, but free alternatives with a good range of functions.
Compiling a Java program. A compiler is an application that translates programs from the Java language to a language more suitable for executing on the computer. It takes a text file with the .java extension as input (your program) and produces a file with a .class extension (the computer-language version). To compile HelloWorld.java type the boldfaced text below at the terminal.
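To make the compile cycle above concrete, here is the canonical HelloWorld example; the paragraph above does not show its source, so this listing (with the greeting split into its own method purely for illustration) is our sketch.

```java
// Save as HelloWorld.java, compile with `javac HelloWorld.java`
// (which produces HelloWorld.class), then run with `java HelloWorld`.
public class HelloWorld {
    // The greeting lives in its own method so it is easy to reuse
    static String greeting() {
        return "Hello, World";
    }

    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```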
Programming is writing computer code to create a program, in order to solve a problem. Programs are created to implement algorithms. Algorithms can be represented as pseudocode or a flowchart, and a program is the implementation of an algorithm in a programming language.
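As a small illustration of going from an algorithm to a program, here is a classic "find the largest number" algorithm; the pseudocode in the comment and the function name are our own example, not taken from the text above.

```javascript
// Pseudocode: set max to the first item; for each remaining item,
// if the item is greater than max, set max to the item; return max.
function largest(numbers) {
  var max = numbers[0];
  for (var i = 1; i < numbers.length; i++) {
    if (numbers[i] > max) {
      max = numbers[i];
    }
  }
  return max;
}
```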
This is possibly the smallest program your computer could run, but it is a valid program nonetheless, and we can test this in two ways, the second of which is much safer and better suited to our kind of experiments: Using whatever means your current operating system will allow, write this boot