Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
Fill in the Blanks
A free HTML5 based question type allowing creatives to create fill in the blanks, also known as cloze tests, with H5P in publishing systems like Canvas, Brightspace, Blackboard, Moodle and WordPress.
Would you like to create content like this on your own?Get started
Register on H5P.com to start creating H5P Interactive content. Your content can be accessed via direct link, embeded, or inserted into any learning management system that supports LTI integration.
Learners fill in the missing words in a text. The learner is shown a solution after filling in all the missing words, or after each word depending on settings.
Authors enter text and mark words to be replaced with an asterix. In addition to native and second language learning, Fill in the blanks can be used to test the learner's ability to reproduce facts or produce mathematical inferences.
Learn how to create Fill in the blanks in this tutorial.
The H5P content on this page is licensed under Creative Commons Attribution 4.0 International unless another Creative Commons license is specified under rights of use. The author of the content is H5P Group
New to H5P? Read the installation guide to get H5P on your own site.
Thu, 11/11/2021 - 22:21
There have been some comments about apostrophe vs. right single quote errors, but this still does not appear to be resolved (even testing on the most up to date version of fill in the blanks).
Is there a fix for this? Or is there some reason why fill in the blank cannot treat apostrophes and right single quote characters as equivalent?
Specifically, submitting a single right quote (i.e., ’ or ’, in html terms) in place of an apostrophe (i.e., ') in an answer will return an "incorrect" result.
I'm still trying to figure out how the learners reporting this issue are entering the answer with a single quote, because it usually will take more work. The leading explanations are:
I posted about this issue on the content github site in an open issue about this, and was redirected here.
Here's an example screenshot, if it helps make things clearer.
Finally, here's the live link to this H5P content in question.
PS: In case it comes up, this content is focused on spelling and correct typing/text entry, so turning on "allow spelling errors" is not an option that will allow us to use this content and provide accurate feedback.
Fri, 02/04/2022 - 18:32
Can the image add below the text?
As subject. Thanks
|
OPCFW_CODE
|
Trying to run jar file but getting cannot find class
I am trying to run a jar file from the Win7 command line, but am getting the dreaded could not find or load main class PRCreateExecution.
I can successfully build the jar file from a Win7 batch file on the command line.
My current manifest file is named PRCreateExecution.mf and is located in here: C:\WDEclipseIDEWorkspace\MC3\src\PurchaseRequests\
The manifest file contains:
Manifest-Version: 1.0
Created-By: 1.8.0_40 (Oracle Corporation)
Main-Class: PurchaseRequests.PRCreateExecution.class
(extra LF is here)
I run the Win7 batch file to build the jar from
C:\WDEclipseIDEWorkspace\MC3\src\PurchaseRequests:
jar -cvmf PRCreateExecution.jar C:\WDEclipseIDEWorkspace\MC3\bin\PurchaseRequests\PRCreateExecution.mf C:\WDEclipseIDEWorkspace\MC3\bin\PurchaseRequests\PRCreateExecution.class C:\WDJarFiles
The jar file gets created successfully.
Now I'm using this batch statement to try and run the jar file:
java -cp C:\WDEclipseIDEWorkspace\MC3\bin\PurchaseRequests;. PurchaseRequests.PRCreateExecution
from in here:
C:\WDEclipseIDEWorkspace\MC3\src\PurchaseRequests
but am getting the could not find main class PurchaseRequests.PRCreateExecution.
PRCreateExecution source snippet:
package PurchaseRequests;
public class PRCreateExecution {
public static void main(String[] args)
Thanks for any help...
Usually when I run .jar files from batch files, I use the code java -jar JarFileName.jar.
I'm using a package so I'm pretty sure mine needs to be package.jarFilename.jar
If you go to File > Export, you can select the class with the main method. Then, export it as a .jar file. That way, all referenced code will be put into the .jar file. Run it with java -jar ClassWithMainMethodName.jar.
Remove the .class suffix from the manifest.
It should look like:
Manifest-Version: 1.0
Created-By: 1.8.0_40 (Oracle Corporation)
Main-Class: PurchaseRequests.PRCreateExecution
Afterwards run java -jar (name of your jar-file).jar
I removed the .class from the .mf file.
java -jar PurchaseRequests.PRCreateExecution.jar
Error:Unable to access jarfile PurchaseRequests.PRCreateExecution.jar
I had a typo but your response was the solution. How do I give you the credit?
I faced a similar kind of issue building distributable jar files using Netbeans. My suggestion would be to try and run the jar directly from the command line and from within the directory path of its location.
It is seen that you have something like this in your manifest file:
Manifest-Version: 1.0 Created-By: 1.8.0_40 (Oracle Corporation) Main-Class: PurchaseRequests.PRCreateExecution.class (extra LF is here)
I suggest you change the lines as follows:
Manifest-Version: 1.0 Created-By: 1.8.0_40 (Oracle Corporation) (insert CF)
Main-Class: PurchaseRequests.PRCreateExecution.class (extra LF is here)
You may use this as your reference:
https://docs.oracle.com/javase/tutorial/deployment/jar/appman.html
This post might be of use, too:
http://stackoverflow.com/questions/12767886/use-of-manifest-file-in-java
Firstly, it sounds like you're not actually trying to run your jar file at all. You're not mentioning it anywhere on your command line.
Secondly, it looks like your classpath is wrong - it should probably be
java -cp C:\WDEclipseIDEWorkspace\MC3\bin;. PurchaseRequests.PRCreateExecution
That's assuming that the bin directory contains a PurchaseRequests directory which contains PRCreateExecution.class
Thirdly, you should follow Java naming conventions for packages - they should be lower case.
When I run your suggested line: java -cp C:\WDEclipseIDEWorkspace\MC3\bin;. PurchaseRequests.PRCreateExecution I get an Error:A JNI error has occurred.
@user337447: Well that sounds like a different problem, and we don't have enough context to help you further.
@mp911de gave me the solution but I don't see where to click to give him the credit.
@user337447: You can click on the tick.
|
STACK_EXCHANGE
|
Hi @freemo , a small question for when you have time: how did you find discourse so far, in terms of maintenace?
I managed a phpbb community, I'd just like something where I do not have to mess with the code, and I do not have to put the whole community in standby when there is an update as some plugin will likely break.
Is it giving you many headaches, so far?
@arteteco Thats a bit loaded, depends on your setup. **if** you follow **exactly** the way they tell you to install it then it likely works. That means baremetal, always up, no load balancing or redundancy, no enterprise level features.
I on the other hand hacked it to run in a docker container and be load balanced. This was a **huge** effort and compounded by the fact that the support community is very hostile to any other install processes, so you will not get help.
Also their support communitya nd community as a whole is very for-profit oriented. So it is in their best interest not to help you beyond the standard install route. When i went there asking for basic advice I basically got a response along the lines of "This is an unsupported install, but if you pay me 100$ an hour I can fix your problem, otherwise take your problem elsewhere"
With that said once I got it up and running the maintenance is a breeze, but that is in large part due to my own hacks as writing services that run through containers instead of bare metal is by its very nature much easier to maintain.
@arteteco Its also resource hungry. I'm likely going to have to double the memory as it is running pretty slow with a memory bottleneck right now. Luckily i have plenty to spare.
@freemo thanks for your reply! I thought that docker install was the preferred way, do you mean that you had to hack the docker files?
How much memory are we using atm on qoto?
@arteteco There is an installer, it does use docker, and that is the "prefered" way.. but its not "standard" docker. There is no way you can bring the service up through say docker-compose.yml or a call to docker (or several calls).
There is an installer which you also have to rerun anythime you make any changes. IT ultimately kicks off docker.
Thing is because of the way they do it you literally loose all advantage to using docker in the first place (That being you can just spin up a container and it works).
Worse yet since you cant run a docker container inside a docker container the fact that they use bastardized-docker is exactly the reason why running a sane standardized docker container is extremely difficult.
My whole setup was little more than doing away with the baremetal aspect and making it into standard-docker and that meant completely rewriting the containerization.
As far as the docker perspective goes of discourse it is an absolute shit-show.
@freemo I see, those are... interesting design choices. How much memory are we using right now? I though couple of GBs would have done for it
@arteteco Not sure off hand ill check when im by my computer
@freemo Sure, no problem, thanks a bunch for the help!
QOTO: Question Others to Teach Ourselves. A STEM-oriented instance.
No hate, No censorship. Be kind, be respectful
We federate with all servers: we don't block any servers.
|
OPCFW_CODE
|
Runtime Class Not Found
Not the answer you're looking for? Help, my office wants infinite branch merges as policy; what other options do we have? Many online jar search engine available in the market such as http://help4j.com/. So you're saying that's not enough - I can't trust Eclipse to know that this also needs to be added to the classpath? –Michael Jay Apr 19 '09 at 1:34 http://dailyerp.net/in-java/purpose-of-the-runtime-class.html
How the Java Runtime Finds JAR-class-path Classes A JAR file usually contains a manifest, which is a file that lists the contents of the JAR file. I know they probably had their reasons to make it like that, but it would be so nice to have at least a small hint, that NoClassDefFoundError was a result of Well every java stuffs in my browser is giving me this. I am 16, and dont understand this stuff i want to fix this cause i play runescape, and i cant play with this error popping up everytime i try to play other
Noclassdeffounderror In Java
Depending on how you import the classes and the directory structure of your jar file adding the toplevel jar file may simply not be enough... Extensible code to support different HR rules How can I keep the computers on my spaceship from dying after a hull breach? BTW, there are couple of more tools to search JAR files e.g.
- java.lang.classnotfoundexception org.hibernate.hql.ast.hqltoken java.lang.classnotfoundexception org.springframework.web.context.contextloaderlistener java.lang.classnotfoundexception org.eclipse.core.runtime.adaptor.eclipsestarter java.lang.classnotfoundexception org.apache.catalina.startup.catalina java.lang.classnotfoundexception javax.mail.messagingexception java.lang.classnotfoundexception oracle.jdbc.driver.oracledriver This ClassNotfoundException comes when you are trying to connect Oracle database from Java program using JDBC but you don't
- Feature fA includes plugin pA.
- Use of a particular class loader determines a security policy associated with the class loader.
- share|improve this answer answered Jan 13 '14 at 15:35 Jason Robertson 111 add a comment| up vote 1 down vote It could happen if your classpath is not correct Let us
- We will get back to the resolution strategies at the end of the article.java.lang.ClassNotFoundException: Sample Java program Now find below a very simple Java program which simulates the 2 most common
- From the name java.lang.ClassNotFoundException looks quite simple but underlying cause of it is always different andwhich classifies it as an environmental issue.
- share|improve this answer edited Feb 10 '14 at 22:24 Nathaniel Ford 9,258134065 answered Feb 10 '14 at 22:05 Md Omar Faroque Anik 1,054918 add a comment| up vote 1 down vote
- share|improve this answer answered Jul 10 '13 at 2:42 Jon Skeet 915k50166387542 Thanks for a very easy to read and understand answer Jon.
more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed Thanks Kapil for your valuable input.you are right J2EE is different ball game with sheer use of Classloaders implemented by different web or enterprise server and it could result in more December 5, 2011 at 11:12 PM Anonymous said... Eclipse Classpath Can't harvest/forage bushes Don't notify contacts when starting teamviewer How are there so many species on the space station 'A long way from anywhere V'?
I swear my eyes were bleeding at the time. –ojblass Apr 19 '09 at 2:18 Now that it's working thanks to everyone's help, I'm kind of wondering why it Classnotfoundexception Java more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed missing JAR file(s) from your classpathIf the missing Java class is not from your application code, then identify if it belongs to a third party API you are using as per see here What is the origin of the story that Santa Claus lives at the North Pole?
Want to post it here? –Alessandroempire Apr 23 '13 at 16:32 This is what I get: [email protected]:~/Java/Trabajo/src$ javac -cp .;../lib/*.jar Main.java javac: no source files Usage: javac
Making a class available at compile-time doesn't embed the class into your output or anything like that. Where should a galactic capital be? You define a classpath on the command line by saying java -cp and then your classpath. Could Not Find Or Load Main Class anybody plese help me January 29, 2012 at 11:38 AM Anonymous said...
Skip to main content Download Getting Started Members Projects Community Marketplace Events Planet Eclipse Newsletter Videos Participate Report a Bug Forums Mailing Lists Wiki IRC How to Contribute Working Groups Automotive The user class path is specified as a string, with a colon (:) to separate the class path entries on Oracle Solaris, and a semicolon (;) to separate the entries on definitions for some hints. –ojblass Apr 19 '09 at 1:42 @ojblass - Under "Run Configurations" I see the guice.jar in the Classpath section as well as in the Source http://dailyerp.net/in-java/runtime-class-in-java-tutorial.html Depending on how you start your application, you need to revise the argument to -cp, your Class-Path entry in MANIFEST.MF or your disk layout.
Please click the link in the confirmation email to activate your subscription. Browse other questions tagged java class classpath classnotfoundexception or ask your own question. January 25, 2016 at 9:43 PM Post a Comment Newer Post Older Post Home Subscribe to: Post Comments ( Atom ) Interview Questions core java interview question (160) data structure and Read More From DZone Stronger Anti Cross-Site Scripting (XSS) Filter for Java Web Apps 10 Best Practices for Code Commenting & Formatting Creating Custom Login Modules In JBoss AS 7 (and
Filter a collection by NOT FIND_IN_SET Big O Notation "is element of" or "is equal" Can't harvest/forage bushes Does putting down the visors help defogging the windshield? Does it have something to do with the class path? Did you try to put library to classpath of app server, as I mentioned here: stackoverflow.com/a/16149435/1430055 ? –Maxim Kolesnikov Apr 23 '13 at 17:10 And how shall you invoke As for the Classpath, I thought Eclipse automatically takes care of that when you add a jar to your project under the Libraries tab of the Java Build Path -> that's
If found true will do+1 again (In comments) –supernova Oct 20 '15 at 13:22 add a comment| up vote 3 down vote I was using spring framework with maven and solved This invokes an internal class loader, which can apply a security policy to extension and user classes. Use of these options does not modify the set of class files used to run the javac or javadoc commands themselves. The Java platform SDK includes a system policy file that grants trusted status to extension classes and places basic restrictions on user classes.
You can then make the jar executable for a nice out of the box solution.
|
OPCFW_CODE
|
Just downloaded this proggie a couple days ago and I like it so far. This is the first vector editor I have used. I mainly got it so I could make tribal tattoo shapes for making tattoos on the skin textures of the 3D models I create for various animation programs, mainly Poser 7.
Alot of the features dont seem to work though. The main ones I am interested in are in the effects menu. The swirl and wave features do nothing. I have tried using them on rectangles, various star patterns and on line drawings with no success. Hoping someone can tell me whether there is some trick to it or whether this are just options that will be hopefully added later.
- Hi and welcome to the inkscape-wiki! It would be helpful, if you could give more details about your computer, OS and which version of inkscape you are running. Most probably only one dependance is missing. like python, for example. If you could start inkscape from the commandline, you would be able to see, what is missing. greetings Stefan --SvH 16:12, 28 February 2009 (UTC)
I have no idea what you mean by opening from the command line lol. I am using Inkscape 0.46 on my pc running Windows XP. All of the features that I have found not to work are in the effects menu. Effects/Raster/swirl and wave are the 2 main ones I am really interested in using. As I said I mainly using this program to create tribal type tattoo designs because Photoshop 7's vector capability is sadly lacking. And I cannot afford Illustrator or CorelDraw right now lol.
If you could tell me what this command line feature is I will be glad to try and copy and paste the info from it here so I could get some help. Thanks.
Hm. I am on linux, so i really can't say exactly how you open a command line in Windows XP. My last windows was win98. you have to press on the start button and search for a program called console or dos-box or something. then it opens a black screen, where you can type. there you just type in "inkscape.exe > error.txt" and then enter-key. inkscape starts in a window, you can edit your drawing, then try to apply the effect, which is not working. then end inkscape, and look into the file error.txt. --SvH 18:27, 28 February 2009 (UTC)
Ahhh ok. Well I am not getting an error message, it just doesn't do anything. Someone on the forums suggested I try converting objects to paths and that didn't work either. When I hit the apply button a little white window( sometimes a black window) pops up and is gone faster than you can blink your eye lol. But the image on the screen remains the same. Oh well at least I can make some cool shapes with this.
- I hope you believe me, when i say "It should work." :) Where did you get this binary from? Could you try to download another one? Newer or older? Was it an official one, or just a develop-version? --SvH 01:23, 1 March 2009 (UTC)
|
OPCFW_CODE
|
export works for vitb, but not for vitl
Hi, and thank you for making this available!
exporting using the depth_anything_vitl14.pth model gives me this error:
size mismatch for depth_head.scratch.output_conv2.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 64, 3, 3]).
depth_anything_vitb14.pth works perfectly!
Also:
Can we export a pointcloud / depth mesh from this repo?
Do you support the metric depth models?
Can I swap to greyscale depth maps?
Thanks again!
What will you put the command here that produces the error
I used following command and generated onnx model without problem:
C:\depth-anything-tensorrt>git clone https://github.com/LiheYoung/Depth-Anything
Copy dpt.py in this repo to C:\depth-anything-tensorrt\Depth-Anything\depth_anything
Copy export.py in this repo to C:\depth-anything-tensorrt\Depth-Anything
C:\depth-anything-tensorrt>cd Depth-Anything
C:\depth-anything-tensorrt\Depth-Anything>python export.py --encoder vitl --load_from depth_anything_vitl14.pth --image_shape 3 518 518
Result message:
Model exported to depth_anything_vitl14.onnx
Don't forget that when you export depth_anything_vitl14.pth large model, you need to set the encoder argment to vitl: --encoder vitl
About your questions:
Can we export a pointcloud / depth mesh from this repo? No
Do you support the metric depth models? No
Can I swap to greyscale depth maps? Yes
Don't forget that when you export depth_anything_vitl14.pth large model, you need to set the encoder argment to vitl: --encoder vitl
This was the issue. Thank you!
About your questions: Can we export a pointcloud / depth mesh from this repo? No Do you support the metric depth models? No Can I swap to greyscale depth maps? Yes
Sorry one more question: How can I either get the depth value from the color image, or convert to greyscale and get depth value there (from a selected pixel)?
Thanks!
Why not convert the depth result image to grayscale image first and then save/use it? Here is python example:
img_gray = cv2.cvtColor(depth_mat, cv2.COLOR_BGR2GRAY)
cv2.imwrite("gray-depth.jpg", img_gray)
It will return depth values normalized between 0 and 255.
或者更改depth_anything. cpp中的推理后处理代码,如下所示:
// Convert the entire depth_data vector to a CV_32FC1 Mat
cv::Mat depth_mat(input_h, input_w, CV_32FC1, depth_data);
cv::normalize(depth_mat, depth_mat, 0, 255, cv::NORM_MINMAX, CV_8U);
// Rescale the colormap
int limX, limY;
if (img_w > img_h)
{
limX = input_w;
limY = input_w * img_h / img_w;
}
else
{
limX = input_w * img_w / img_h;
limY = input_w;
}
cv::resize(depth_mat, depth_mat, cv::Size(img_w, img_h));
return depth_mat;
I modified depth_anything.cpp with this code and encountered an error during inference:
OpenCV: terminate handler is called! The last OpenCV error is:
OpenCV(4.10.0) Error: Assertion failed (src[i].dims <= 2 && src[i].rows == src[0].rows && src[i].type() == src[0].type()) in cv::hconcat, file C:\GHA-OCV-1_work\ci-gha-workflow\ci-gha-workflow\opencv\modules\core\src\matrix_operations.cpp, line 67
|
GITHUB_ARCHIVE
|
RECOMMENDED: If you have Windows errors then we strongly recommend that you download and run this (Windows) Repair Tool.
Luckily, you can change what apps demand your attention in the Windows 10 Action Center. If you’re one of the brave who decide to install the LinkedIn app, you’re probably more plugged in than most to your career. But those updating blog.
Introducing Gulp, Grunt, Bower, and npm support for Visual. – Scott Hanselman on Programming, The Web, Open Source,NET, The Cloud and More
Plan the base topology of Microsoft Dynamics AX Applies to: Microsoft Dynamics AX 2012 R3 Before you install Microsoft Dynamics AX, you should determine the arrangement of computers, or the topology, that you will use in your.
Microsoft Visual Studio Express is a set of integrated development environments (IDEs). These libraries can, however, be installed from an older version of the Windows SDK and Windows. In October 2013, Microsoft released four new versions of its Visual Studio Express products. Jump up ^ "Registration Issues".
How to Install and Setup Visual Studio Express 2013: 9 Steps – How to Install and Setup Visual Studio Express 2013. Visual Studio Express 2013 supports Visual Basic, C#, and C++. This makes it suitable for a beginner as the.
Visual Studio 2013 Express Installation Error. and ask questions about the install and setup of Visual. to install Visual Studio 2013 Express.
This wasn’t an issue with Metro apps, which were only allowed to place one tile on the Start screen at install time. it’s worth noting that some apps use illogical names; Visual Studio Express for Windows Phone, for example, identifies.
Like so many of us have (probably) done, I had installed Visual Studio 2013 Express prior to the Community edition. When you install both,
There is also the case of accessing a PC that has undergone a critical error or has no OS running. Out of the box, it requires the samples to be compiled with Microsoft Visual Studio* 2013, so if you just want to play around Intel® AMT.
Microsoft Visual Studio – Wikipedia – Screenshot of Visual Studio 2013, editing a program’s C++ source code. Developer(s) Microsoft: Stable release: 2017 (March 7, 2017; 6 months ago ()) Written in
Error Unable to locate package source while installing Visual Studio 2012 Update 3. Posted by Anuraj on Thursday, August 22, 2013.Net Visual Studio.
Nov 12, 2014. If you are running into an error installing Visual Studio and you are. both Visual Studio 2013 and Visual Studio 2015 installed side by side on.
Visual Studio 2013 Express – Error during installation. Hi, I have tried to install Visual Studio 2013 Express for Web, and I guess something went wrong.
Vb.net Application Error Handler Aug 17, 2009 · VB.Net – web browser control. Automate clicking on and manipulating popup windows after downloading.exe file Mar 12, 2014. That is why it is always a good idea to implement a robust error handling mechanism in your web application. To that end ASP.NET MVC offers. The abbreviation CS
Aug 12, 2013. If Microsoft Visual Studio C++ 2010 SP1 is already installed, SDK 7.1 may fail to install. Either Professional or Express works. and I got the following error message: Error: The Microsoft Windows Software. Hi all, I'm using MATLAB R2013b and apparently need to install this SDK in order to install a.
If you’ve been on-board with Windows 10 since July How to Upgrade to Windows 10 Now & Why You Should Wait How to Upgrade to Windows 10 Now & Why You Should Wait Have you been waiting patiently since July 29 for your Windows.
by Microsoft. Installation Notes. ASP.NET and Web Tools for Visual Studio 2013.2 are bundled in the main installer and can be downloaded as part of Visual Studio 2013.
The TV soundbite, the press conference, the never-ending round of interviews on 24-hour news channels; they have come for the political speech like a great swarm of Hyundais, and they have priced it out of existence. And social media, that.
Jun 26, 2015. Visual Studio 2013 was released in 2013. Visual Studio 2015 on your Windows 8.1 machine that you won't have as many issues as I had.
I'm having trouble installing Visual Studio 2013 for. (this error coming up on Visual Studio 2013). Hang when try to install Visual Studio 2013 Express on.
|
OPCFW_CODE
|
is it posible to merge feature to release with git flow
I'm using gitflow and Sourcetree.
By default, Sourcetree make me merge Feature to Develop, Develop to Release, and Release to Prod
Thing is sometimes, one merge Feature to Develop, and it fails, so it needs more work.
Then, If someone, want to make a release, git flow will merge those errors to Release.
I would prefer to merge the only Feature that has been validated to release.
Is it posible to do that with git flow?
Is the "git flow" you're referring to a product or a piece of software? I generally see that being used to refer to a workflow/process, which is something you can one-off change or modify to do whatever you want.
https://github.com/nvie/gitflow
The answer is yes.
GitFlow is a set of open source scripts which you can modify according to your needs.
but in this case it has nothing to do with gitflow, its a pure coding. You can commit bad content regardless if you are using git flow or not and you should not do it unless you have checked your code before committing it so git flow is not the case here.
How can you tell if the commit is good or bad?
Once you have answer to this question you can simply modify the feature script which responsible for merging feature into develop and block the merge.
I would prefer to merge the only Feature that has been validated to release.
As noted in the previous paragraph once you know how to identify good commit simply modify the gitflow script according to your needs.
Sources
Check this around line 313
https://github.com/nvie/gitflow/blob/develop/git-flow-feature
# lines 313 >
# merge into BASE
git_do checkout "$DEVELOP_BRANCH"
if [ "$(git rev-list -n2 "$DEVELOP_BRANCH..$BRANCH" | wc -l)" -eq 1 ]; then
git_do merge --ff "$BRANCH"
else
if noflag squash; then
git_do merge --no-ff "$BRANCH"
else
git_do merge --squash "$BRANCH"
git_do commit
git_do merge "$BRANCH"
fi
fi
So, I need to edit gitflow script... is there any other way? As I use Sourcetree, I'm afraid it get complicated, thinking about update that could reset all changes, etc...
In my opinion it has nothing to do with gitflow since developers should not commit bad code to any dev branch, but since you asked how to do it using git flow the answer is that you will need to update the scripts.
well, it is not about bad code, code could easily work in feature, but have a integration problem in dev. those situations happens a lot
yes, but understanding your answer, the answer might be no, because, I should modify several scripts Feature > Rel, Feat > Prod, so git flow seems a bit useless, or unadapted to my needs
You simply need to block the merge into develop, once you stop it there it will not be merged into prod since it will not be in develop
I assume you will if it will not be done with gitflow
If a merge to develop leads to problems, you should not commit it. My recommendation would be to merge the develop branch into the feature branch before you merge the feature into develop and resolve any conflicts there (be it merge conflicts or logical conflicts).
Also - if you do work with the gitflow model - you shouldn't merge anything into release (except maybe bugfixes). Instead, release should be a new branch from the current state of develop and if you think that there are any features that are not ready for shipping yet you can turn them off in the release branch.
|
STACK_EXCHANGE
|
2021-10-26 23:09:59 Find the results of "
do remon" for you
Doraemon (Japanese: ドラえもん) is a Japanese manga series written and illustrated by Fujiko F. Fujio.The manga was first serialized in December 1969, with its 1,345 individual chapters compiled into 45 tankōbon volumes, published by Shogakukan from 1970 to 1996.
Trouble seems to follow Nobita around. Fortunately for him, he’s got Doraemon, a trusty cat-type robot from the 22nd century. Watch Doraemon - Hindi Kids serial on Disney+ Hotstar now.
doremon products;1: https://amzn.to/3jDqGp82: https://amzn.to/3lSEqPK3: https://amzn.to/3CBvkfT4: https://amzn.to/3CBIiKUdoraemondoremon new ep in hindidorae...
Nếu mà Doremon có thật ngoài đời thì sao ?!?!THE 5050Tập 2: https://www.youtube.com/watch?v=1n-IZJG5hZU SUBSCRIBE: http://bit.ly/The5050_secret FACEBOOK ...
doremon this doremon is made by the help of python language and python turtle graphics. Doraemon is a fictional character in the Japanese manga and anime series of the same name created by Fujiko Fujio, the pen name of writing team Hiroshi Fujimoto and Motoo Abiko.
Doraemon in hindi latest episode 2019 Doraemon in Hindi New 2019 Doraemon hindi Doraemon Cartoon 2019 #Episode936 / Doraemon In Hindi New Episodes 2016 - Toy Town New Compilation 2017 New Compilation 2017
Watch doremon cartoon in urdu - sargodhian.01 on Dailymotion. Islamic cartoons for kids-Islamic Dua Before Go To sleep-Children Urdu Poem-School Chalo urdu song-Good Morning Song-Funny video Baby Cartoons - kids Playground Song - Songs for Children with Lyrics-best Hindi Urdu kids poems-best kids Hindi Urdu cartoons
CÁC BẠN XEM VIDEO NÀY NẾU THẤY HAY THÌ ĐỪNG QUÊN ĐỂ LẠI CHO MÌNH 1 LIKE NHA 😁👕👍Great!👖
|
OPCFW_CODE
|
Dynamic class in python
It is probably the wrong title, but here is my problem.
I have a system comprised of a microcontroller (MCU), a serial interface (SPI), a DAC (Digital / Analog converter), an electrode (E). Each element is defined as a class in my python modelization.
As a first step, I want to monitor the output on the electrode as I input something in the microcontroller.
Let's consider the following:
Input: 2 mA on the electrode during 1 ms.
MCU send the new DAC value via the SPI: 30 us
DAC updates its register and output: 400 us
MCU send a switch on command to the electrode: 1 us
The electrode is now outputting.
1 ms later, send a switch off command to the electrode: 1us
The electrode doesn't output anymore.
My 2 biggest issues are 1. How to take into account this time component and 2. How to monitor the SPI line to determine if something has to be done.
class Electrode:
def __init__(self, id):
self.id = id
self.switch = False
self.value = 0
def output(self):
if self.switch:
return self.value
else:
return 0
class SPI:
def __init__(self):
self.msg = None
class MCU:
def __init__(self):
self.name = "MicroController"
def send_SPI_msg(self, SPI, msg):
SPI.msg = msg
class DAC:
def __init__(id):
self.id = id
self.cs = 1
self.register = None
self.output = None
def read_SPI_msg(self, SPI):
message = SPI.msg
# update register and output
My system actually has 16 DACs and electrodes and a field-programmable gate array which are all listening to the same SPI. What I described above is a fairly simplified version.
Question is: How to have the components check the value in SPI.msg regularly and act accordingly?
In reality, each component is doing its life. Thus actions are performed in parallel. Since I'm trying to simulate the timeline and the action performed, I do not mind doing everything serially with a timeline variable (attribute) for each element. I just have issues to figure out how to have my classes interact together.
i.e. I can't do the following in python or I will get stuck:
class DAC:
def __init__(id):
# init
def read_SPI_msg(self, SPI):
while True:
message = SPI.msg
# update register and output if needed
Maybe an event triggering could be used... But I don't know how.
Maybe with multithreading, defining one thread / element?
EDIT: Current state:
class SPI:
def __init__(self):
self.attached_dacs = []
self.attached_fpga = []
self.attached_mcu = []
def attach_device(self, device):
if type(device) == DAC:
self.attached_dacs.append(device)
elif type(device) == FPGA:
self.attached_fpga.append(device)
elif type(device) == MCU:
self.attached_mcu.append(device)
def send_message(self, msg):
for device in self.attached_dacs + self.attached_fpga:
device.on_spi_message(self, msg)
class SpiAttachableDevice:
def on_spi_message(self, SPI, message):
if self.cs:
self.execute_SPI_message(message)
else:
return None
class DAC(SpiAttachableDevice):
def __init__(self, id):
self.id = id
self.cs = False # Not listening
def execute_SPI_message(message):
# Do stuff
class FPGA(SpiAttachableDevice):
def __init__(self):
self.electrodes = list()
self.cs = False # Not listening
def execute_SPI_message(message):
# Do stuff
class MCU:
def __init__(self):
self.electrodes = list()
What math are you using? Do you have continuous time or discrete? I guess the elements are functions that map input to output?
@syntonym Messages are made of 16 bits. The MCU does not comprise a floating point unit. The elements are the electrical component, i.e. the DACs, Electrodes, ... that I defined as class. The time is discrete, with a clock ticking at 8 Mhz. I do not need to reproduce the timing aspect, I can for instance make the following: ticks = range(8000000) which corresponds to 1 second. Hope it helps :)
I'm assuming you want to keep it single-threaded and you don't use asyncio. In this case, you might want to employ observer or pub/sub pattern when implementing the SPI:
class SPI:
def __init__(self):
self.attached_devices = []
def attach_device(self, device):
self.attached_devices.append(device)
def send_message(self, msg):
for device in self.attached_devices:
device.on_spi_message(self, msg)
class SpiAttachableDevice:
def on_spi_message(self, spi_instance, message):
raise NotImplementedError('subclass me!')
So you can use it like this:
spi = SPI()
device_1 = Device()
device_2 = Device()
spi.attach_device(device_1)
spi.attach_device(device_2)
spi.send_message('hello')
I haven't done anything to be able to send SPI messages from Device objects, but you can update the abstraction accordingly.
Really interesting, let me try to implement this :)
So indeed I manage to make it work. Am I correct saying that by replacing the raise statement, actions (for instance print (self.id)) will be performed for each devices? What if an action on one device impact another? i.e. If a DAC value is changing, I need 400 us before I can turn on the electrode.
@Mathieu Yes, NotImplementedError is just a placeholder. My impression was that modeling SPI is your primary concern, and otherwise you already have a strategy for managing relationships in time.
Let's say it was one of the primary concerns ^^ The message always goes from the MCU to the devices, so I should not need a method to send SPI messages from Device Objects. Can you check the EDIT with the current implementation to see if I understood your answer correctly?
@Mathieu looking at the code, yes, this is pretty much what I meant.
Alright! Well thanks for the help, I'm going to work from that point to see what comes out of it before asking for additional help :)
You could move the while loop simply outside:
class SPI:
def __init__(self, msg):
self.msg = msg
class Component:
def __init__(self, spi):
self.spi = spi
def tick(self, t):
msg = self.spi.msg
if msg = "...":
...
spi = SPI()
components = [Component(spi), ...]
for t in range(TOTAL_TIME):
for component in components:
component.tick(t)
As stated in your comment you want more a timeline view on what is happening. You can have an explicit timeline with which your components interact. External input (state changes) can be set beforehand in the same manner. To order the timemline I'll just run sort each time but it would probably be more performant to use something like a priority queue.
This mainly differs from Vovanrock2002 answer by not recursing in each timestep and having an explicit timeline.
class Component:
def __init__(self, timeline):
self._timeline = timeline
self._out = [] #all connected components
def poke(self, changed_object, time):
return []
class Clock(Component):
def __init__(self, timeline):
Component.__init__(self, timeline)
self._out.append(self)
self.msg = "tick"
self._timeline.append((200, self, msg))
def poke(self, time, changed_object, msg):
self._timeline.append((time + 200, self, self.msg))
timeline = []
spi = SPI(timeline)
components = [spi, Clock(timeline), ComponentA(timeline), ...]
timeline.append((500, spi, "new DAC value"))
while timeline:
timeline.sort(key=lambda event: event[0], reverse=True)
event = timeline.pop()
time, changed_component, msg:
for connected_component in changed_component._out:
connected_component.poke(time, changed_component, msg)
This way you have an explicit timeline (which you could also "record", just add each popped event to some list) and you can have arbitrarily connected components (e.g. if you want to have multiple SPIs).
Far too basic and sadly can't work. As stated I have actually 16 DACs, 16 electrodes, and a few other components. My first approach was as yours, checking every tick what has to be done. It's a mess, especially when you start saying: DAC 1 tasks are: This now, this in 300 this in 500. DAC 2 tasks are this in 5, this in 200 and this in 400... And so on...
I'll have a look, the first question that comes to my mind is what does the _ in fornt of timeline, out, etc... If I get it right, Component are initialized when the clock is initialized.
That's just a python convention to indicate "don't mess with it, it's internal to this class" like private in java, see e.g. this SO. The clock was just an example of an component that does something every 200th time cycle. For your case you would probably set some msg on the SPI and then at the same time let it being poked to transmit the msg to all connected devices.
|
STACK_EXCHANGE
|
Explaination of prim's algorithm
I have to implement Prim's algorithm using a min-heap based priority queue. If my graph contained the vertices A, B, C, and D with the below undirected adjacency list... [it is sorted as (vertex name, weight to adjacent vertex)]
A -> B,4 -> D,3
B -> A,4 -> C,1 -> D,7
C -> B,1
D -> B,7 -> A,3
Rough Graph:
A-4-B-1-C
| /
3 7
| /
D
What would the priority queue look like? I have no idea what I should put into it. Should I put everything? Should I put just A B C and D. I have no clue and I would really like an answer.
Prim's: grow the tree by adding the edge of min weight with exactly one end in the tree.
The PQ contains the edges with one end in the tree.
Start with vertex 0 added to tree and add all vertices connected to 0 into the PQ.
DeleteMin() will give you the min weight edge (v, w), you add it to the MST and add all vertices connected to w into the PQ.
is this enough to get you started?
---
so, in your example, the in the first iteration, the MST will contain vertex A, and the PQ will contain the 2 edges going out from A:
A-4-B
A-3-D
Here's prim's algorithm:
Choose a node.
Mark it as visited.
Place all edges from this node into a priority queue (sorted to give smallest weights first).
While queue not empty:
pop edge from queue
if both ends are visited, continue
add this edge to your minimum spanning tree
add all edges coming out of the node that hasn't been visited to the queue
mark that node as visited
So to answer your question, you put the edges in from one node.
If you put all of the edges into the priority queue, you've got Kruskal's algorithm, which is also used for minimum spanning trees.
It depends on how you represent your graph as to what the running time is. Adjacency lists make the complexity O(E log E) for Kruskal's and Prim's is O(E log V) unless you use a fibonacci heap, in which case you can achieve O(E + V log V).
You can assign weights to your vertices. Then use priority queue based on these weights. This is a reference from the wiki: http://en.wikipedia.org/wiki/Prim's_algorithm
MST-PRIM (G, w, r) {
for each u ∈ G.V
u.key = ∞
u.parent = NIL
r.key = 0
Q = G.V
while (Q ≠ ø)
u = Extract-Min(Q)
for each v ∈ G.Adj[u]
if (v ∈ Q) and w(u,v) < v.key
v.parent = u
v.key = w(u,v)
}
Q will be your priority queue. You can use struct to hold the information of the vertices.
|
STACK_EXCHANGE
|
The debate between frontend and backend developers has been going on for years. According to my opinion, backend development is actually harder than frontend development. Because you have to have the knowledge and stable coding skill for scalability, security, and performance. There are pros and cons to each side, but ultimately it depends on what you’re looking for in a career.
What are Frontend and Backend Development?
Backend development, on the other hand, is responsible for powering the functionality of the site or app. This means working with databases, managing user data, handling security, and everything else that happens behind the scenes. Backend developers usually work with languages like PHP, Ruby on Rails, and Python.
Why Backend Development is Harder?
You have to worry about scalability and performance
When you’re working on the front end, you can get away with things being a little slow and clunky because the user doesn’t have to wait for the page to load before they can use it. But when you’re working on the back end, every millisecond counts.
Users are impatient and they won’t tolerate a slow website or app. As a result, you have to be very careful about things like scalability and performance when you’re working on the back end. This can be a challenge, especially if you’re not experienced with optimizing code for performance.
You have to be good at both coding and system design
When you’re working on the front end, you can get away with being just a good coder. But when you’re working on the back end, you have to be good at both coding and system design. This is because the back end is responsible for everything from storing data to processing requests to sending responses back to the client.
As a result, you need to be able to design efficient systems as well as write code that is clean and easy to maintain. This can be a challenge, especially if you’re not experienced with designing systems.
Your application needs to be secure
When you’re working on the front end, your main concern is usually making sure that the user interface is easy to use and looks good. But when you’re working on the back end, your main concern is security. This is because the back end is responsible for storing sensitive data like passwords and credit card numbers.
As a result, you need to be very careful about things like SQL injection attacks and cross-site scripting attacks. This can be a challenge, especially if you’re not experienced with web security.
What is Hard about Frontend Development?
There are quite a few challenges that frontend developers face daily. From having to learn new technologies such as Sass or ReactJS, to dealing with compatibility issues between different libraries and frameworks.
One of the first things that any frontend developer will need to learn is how to use a CSS pre-processor. A CSS pre-processor is a language that extends the capabilities of CSS, making it easier and more efficient to write CSS code. The most popular CSS pre-processors are Less and Sass.
While learning a CSS pre-processor may not seem like a big deal at first, it can actually be quite challenging. First of all, there is a bit of a learning curve associated with them. It can take some time to get used to the syntax and features of a CSS pre-processor. Additionally, because they are not native CSS, they can sometimes be difficult to debug if something goes wrong.
Another challenge that front-end developers face is working with CSS frameworks. A CSS framework is a collection of predefined CSS styles that can be used to style a website. The most popular CSS frameworks are Bootstrap and Foundation.
CSS frameworks can be extremely helpful when it comes to quickly style a website. However, they can also be quite limiting. This is because you are restricted to using only the styles that are defined in the framework. If you want to deviate from the framework in any way, you’ll often find yourself having to write custom CSS code to override the framework styles. This can be quite time-consuming and frustrating.
Additionally, if you want to use features from multiple different libraries or frameworks, you run the risk of running into compatibility issues between them.
|
OPCFW_CODE
|
The following blog post has been drafted by Bing AI Chat, based on LERF: Language Embedded Radiance Fields. I am including it on my blog as a memory-jogger to what looks like a really exciting development, and as an example of an AI drafted blog post.
Have you ever wondered what it would be like to point at any part of a 3D scene and ask questions about it using natural language? For example, you could ask“Where is the red car?” or“What is the name of this building?” or even“What is the most expensive item in this room?”.
Well, thanks to a new research paper by Justin Kerr, Chung Min Kim, Ken Goldberg, Angjoo Kanazawa and Matthew Tancik from UC Berkeley and Google Research, this is now possible with LERF: Language Embedded Radiance Fields.
LERF is a novel method that combines two powerful techniques: NeRF (Neural Radiance Fields) and CLIP (Contrastive Language-Image Pre-training). NeRF is a way to represent 3D scenes as continuous functions that map 3D coordinates to colours and densities. CLIP is a way to learn joint embeddings of images and text that can perform zero-shot image classification based on natural language prompts.
By combining these techniques, LERF creates a system that allows users to explore and interact with 3D scenes using natural language queries, making it an intuitive way to navigate and understand complex virtual environments. LERF learns a dense, multi-scale language field inside NeRF by volume rendering CLIP embeddings along training rays, supervising these embeddings across training views to provide multi-view consistency and smooth the underlying language field.
After optimisation, LERF can extract 3D relevancy maps for a broad range of language prompts interactively in real-time. For example, you can ask LERF to highlight “the brightest spot” or “the most metallic object” or “the closest thing to me” in any given scene. You can also use more abstract or semantic queries such as “something I can sit on” or “something related to music” or “something blue”. LERF supports long-tail open-vocabulary queries hierarchically across the volume without relying on region proposals or masks.
LERF has potential use cases in robotics, understanding vision-language models and interacting with 3D scenes. For example, you could use LERF to control a robot arm by telling it where to go or what to pick up using natural language. You could also use LERF to analyse how vision-language models perceive different aspects of 3D scenes by querying them with various prompts. You could also use LERF to have fun and play games with 3D scenes by challenging yourself or others with creative questions.
If you want to learn more about LERF and see some amazing demos of it in action, check out their project website at https://lerf.io/ . You can also read their paper here: https://arxiv.org/abs/2109.03828 .
: Kerr J., Kim C.M., Goldberg K., Kanazawa A., Tancik M., (2022). LERF: Language Embedded Radiance Fields. arXiv preprint arXiv:2109.03828.
: Mildenhall B., Srinivasan P.P., Tancik M., Barron J.T., Ramamoorthi R., Ng R., (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In Proceedings of European Conference on Computer Vision (ECCV).
: Radford A., Kim J.W., Hallacy S., Ramesh A.A.K.N.S.A.R.A.D.H.Y.A.R.E.D.D.Y.S.I.V.E.S.H.G.O.P.I.N.A.T.H.D.A.G.E.R.M.A.N.N.C.H.E.N.G.Z.I.E.G.L.E.R.J.W.U.J.M.C.O.U.L.T.E.R.P.A.R.M.A.R.C.H.E.N.K.O.F.E.D.U.S.M.L.U.O.Z.I.L.B.E.R.M.A.N.C.H.O.W.D.H.Y.K.I.M.H.J.U.N.G.J.P.A.R.K.H.L.E.E.J.B.Y.U.N.K.W.O.N.C.L.I.P.: Connecting Text And Images*. OpenAI Blog.
: Kerr J., Kim C.M
|
OPCFW_CODE
|
Citation: Proceedings of the 2017 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 11, pages 1291–1295 (2017)
Abstract. Head pose estimation from camera images is a~computational problem that may influence many sociological, cognitive, interaction and marketing researches. It is especially crucial in the process of visual gaze estimation which accuracy depends not only on eye region analysis, but head inferring as well. Presented method exploits a 3d head model for a user head pose estimation as it outperforms, in the context of performance, popular appearance based approaches and assures efficient face head pose analysis. The novelty of the presented approach lies in a default head model refinement according to the selected facial features localisation. The new method not only achieves very high precision (about 4\degree), but iteratively improves the reference head model. The results of the head pose inferring experiments were verified with professional Vicon motion tracking system and head model refinement accuracy was verified with high precision Artec structural light scanner.
- A. Wojciechowski, and K. Fornalczyk, “Single web camera robust interactive eye-gaze tracking method“, Bulletin of the Polish Academy of Sciences, vol. 63 no.4, pp. 879, 2015.
- S. Langton, H. Honeyman, and E. Tessler, “The influence of head contour and nose angle on the perception of eye-gaze direction“, Perception and Psychophysics, vol. 66, no. 5, pp. 752-771, 2004.
- E. Murphy-Chutorian, and M. M. Trivedi, “Head pose estimation in computer vision: A survey“, IEEE transactions on pattern analysis andmachine intelligence vol. 31 no.4, pp. 607-626, 2009.
- J. M. Rehg, G. D. Abowd, A. Rozga, M. Romero, M. A. Clements, S. Sclaroff, I. Essa, O. Y. Ousley, Y. Li, K. Chanho, H. Rao, J. C. Kim, L. L. Presti, J. Zhang, D. Lantsman, J. Bidwell, and Z. Ye, “Decoding Children’s Social Behavior“, Computer Vision and Pattern Recognition (CVPR), pp. 3414-3421, 2013.
- P. Kucharski, P. Łuczak, I. Perenc, T. Jaworski, A. Romanowski, M. Obaid and P. W. Woźniak, “APEOW: A personal persuasive avatar for encouraging breaks in office work“, Proc. of the 2016 FedCSIS Conf., Eds. M. Ganzha, L. Maciaszek and M. Paprzycki, IEEE, ACSIS, Vol. 8,pages 1627-1630, 2016.
- D. Rozado, A. El. Shoghri, and R. Jurdak, “Gaze dependant prefetching of web content to increase speed and comfort of web browsing“, Int. J. of Human-Computer Studies vol. 78, pp. 31-42, 2015.
- C. Chen, P. Wozniak, A. Romanowski, M. Obaid, T. Jaworski„ J. Kucharski, K. Grudzień, S. Zhao, M. Fjeld, “Using Crowdsourcing for Scientific Analysis of Industrial Tomographic Images“, ACM Trans. on Intel. Syst. and Tech., Vol. 7 Issue 4, art no. 52, 25p., 2016.
- I. Jelliti, A. Romanowski, K. Grudzień, “Design of Crowdsourcing System for Analysis of Gravitational Flow using X-ray Visualization“, Proc. of the 2016 FedCSIS Conf., Eds. M. Ganzha, L. Maciaszek and M. Paprzycki, IEEE, ACSIS, Vol. 8, pages 1613-1619, 2016.
- Q. Zhao, and Ch. Koch, “Learning saliency-based visual attention: A review“. Signal Processing, vol. 93 no. 6, pp. 1401-1407, 2013.
- H. Wilson, F. Wilkinson, L. Lin, and M. Castillo, “Perception of head orientation“, Vision Research, vol. 40, no. 5, pp. 459-472, 2000.
- M. Kowalski, and W. Skarbek, “Online 3D face reconstruction with incremental Structure From Motion and a regressor cascade“, Symp. on Photonics Applications in Astronomy, Communications, Industry and High-Energy Physics Experiments. Int. Soc. for Opt. and Phot., 2014.
- A. Gee, and R. Cipolla, “Determining the gaze of faces in images“, Image and Vision Computing, vol. 12, no. 10, pp.639-647, 1994.
- T. Horprasert, Y. Yacoob, and L. Davis, “Computing 3-d head orientation from a monocular image sequence“, Proc. Int. Conf. Automatic Face and Gesture Recognition, pp. 242-247, 1996.
- V. Kazemi, and J. Sullivan, “One millisecond face alignment with an ensemble of regression trees“, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1867-1874, 2014.
- Dlib C++ Library., http://dlib.net/
- M. Fischler, and R. Bolles, “Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography“, Comm. of the ACM, vol. 24 no. 6, pp. 381-395, 1981.
- J. G. Wang, and E. Sung, (2007). “EM enhancement of 3D head pose estimated by point at infinity“, Image and Vision Computing, vol. 25 no. 12, 1864-1874.
- A. Asthana, S. Zafeiriou, S. Cheng, and M. Pantic, “Robust discriminative response map fitting with constrained local models“, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3444-3451, 2013.
- R. Hartley, and A. Zisserman, “Multiple view geometry in computer vision“, 2nd edition, Cambridge Univ. Press, 2004.
- Static adult human physical characteristics of the head., https://en.wikipedia.org/wiki/Human_head#/media/File:HeadAnthropometry.JPG
- A head-and-face anthropometric survey of U.S. respirator users., https://www.nap.edu/resource/11815/Anthrotech_report.pdf
- Artec Eva laser scanner., https://www.artec3d.com/3d-scanner/artec-eva
- T. Baltrusaitis, P. Robinson, L. P. Morency, “Openface: an open source facial behavior analysis toolkit“, App. of Comp. Vision, p. 1-10, 2016.
- T. Baltrusaitis, P. Robinson, L. P. Morency, “Constrained local neural fields for robust facial landmark detection in the wild“, Proc. of the IEEE Int. Conf. on Comp. Vision Work., p. 354-361, 2013.
- L. P. Morency, J. Whitehill, and J. Movellan, “Generalized adaptive view-based appearance model: Integrated framework for monocular head pose estimation“, Automatic Face and Gesture Recognition, 8th IEEE International Conference on. IEEE, p. 1-8, 2008.
- N. Wang, X. Gao, D. Tao, and X. Li. “Facial feature point detection: A comprehensive survey“, CoRR, 2014.
- T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graha, “Active shape models-their training and application“, Computer vision and image understanding, vol. 61 no. 1, pp. 38-59, 1995.
- G. J. Edwards, Ch. J. Taylor and T.F. Cootes, “Interpreting face images using active appearance models“, Automatic Face and Gesture Recognition, Proc. Third IEEE Int. Conf. on. IEEE, pp. 300-305, 1998.
- R. Staniucha, and A. Wojciechowski, “Mouth features extraction for emotion classification“, Computer Science and Information Systems (FedCSIS), 2016 Federated Conference on. IEEE, pp. 1685-1692, 2016.
- K. A. Funes, “3D Gaze Estimation from Remote RGB-D Sensors“, PhD Thesis, Ecole Polytechnique Federale de Lausanne, 2015.
- M. Kowalczyk, and P. Napieralski, “An Effective client-side object detection method on the Android platform“, Journal of Applied Computer Science, vol. 23, pp. 29-38, 2015.
- X. Xiong, and F. Torre, “Supervised Descent Method and its Applications to Face Alignment“, Comp. Vision and Pattern Rec., 2013.
- X. Cao, Y. Wei, F. Wen and J. Sun, “Face Alignment by Explicit Shape Regression”, International Journal of Computer Vision, vol. 107, pp. 177-190, 2014.
|
OPCFW_CODE
|
Is this sample size big enough to analyze with Propensity Score Matching?
Suppose I have a dataset where 9 patients occured with the post-operative complication.
(e.g. information such as height, smoking, weight, age, disease status) and rest of the 150 patients without the post-operative complication.
In this case, I only can have 9 as the number of control group and experimental group respectively at most.
Just in general, the statistical analysis result with this dataset can be valid or not?
there is no universal threshold which will give you a definite answer to that. The only thing we can say for sure is that the larger the sample the better the statistical power.
@utobi Thanks for a comment. That's really true. I just like to know how big it should, and the decent sample size just in general to write a medical paper to persuade with statistical analysis.
I would say that sample size is not enough to even analyze a randomized trial, let alone an observational study with propensity score analysis. Would you trust the results of a study with only 9 participants in one of the treatment groups?
Let's look at both aspects of your question separately: PS matching and sample size calculations.
Propensity score matching
When you do propensity score matching, you are aiming to balance your groups on a set of observed covariates that inform treatment. By matching people who have (largely) similar propensity scores, we can try to achieve this balance between groups. However, you are not limited to matching on a 1:1 ratio. You could for example match the groups 1:10, meaning you could get 9 individuals in the exposure group and 90 in the unexposed group. Mind you that at a certain point matching more individuals becomes statistically redundant though, as you are still limited by the 9 people in the exposed group.
One alternative to PS matching could be PS weighting, where you weight individuals up or down based on their propensity score, but do not remove any individuals from the dataset. This can still allow you to balance the groups on covariates. A great paper introducing this concept and helping you choose the PS weight is Desai & Franklin 2019 BMJ.
Sample size calculations
To determine whether our random sample from the (theoretical supra)population is large enough in regards to random sample variation, there exist formal sample size formulas. A great introduction to sample size calculations is Noordzij et al. 2010 Nephrol Dial Transplant.
With 9 individuals in one group, from experience I would say that your sample size is likely too small to detect any meaningful differences, but with the sample size calculations, you can still detect how many individuals you would need to be able to detect a meaningful difference, so I strongly suggest you try these out.
Mind you that for different objectives in medical research, different sample size calculations exist: they differ for different outcomes, different modelling strategies, and different objects (e.g., detecting a difference vs. creating a prediction model).
Sources
In case the links didn't work, here are the citations of the works I mentioned. Both are open-access.
Desai RJ, Franklin JM. Alternative approaches for confounding adjustment in observational studies using weighting based on the propensity score: a primer for practitioners. BMJ. 2019 Oct 23;367:l5657. doi: 10.1136/bmj.l5657.
Noordzij M, Tripepi G, Dekker FW, Zoccali C, Tanck MW, Jager KJ. Sample size calculations: basic principles and common pitfalls. Nephrol Dial Transplant. 2010 May;25(5):1388-93. doi: 10.1093/ndt/gfp732. Epub 2010 Jan 12. Erratum in: Nephrol Dial Transplant. 2010 Oct;25(10):3461-2. PMID: 20067907.
Thanks for the very understandable answer. I have one thing to worry about when implementing 1:10 ratio for propensity score matching. Should the rest of 9 patients come within caliper width 0.2 that I choose? Do I need to change it to something like 1.0 in order to have 10 pairs for each?
@nan you should try and base the caliper width based on what width is acceptable to balance groups. If a caliper width of 0.2 does not allow you to match 10 individuals each to one person in the exposed group, this might be another indication that your sample size is (too) small. Seemingly then, there is much variation in the PS in your sample, leading to few PSs that are within that caliper width. See also this paper.
Thanks for the extra information. I've read the paper above and found out and caliper width should be set as 0.2 as long as some covariates such as Age and BMI are continuous. So I'll choose the less pairs over the more pairs with wider caliper width.
|
STACK_EXCHANGE
|
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °
You are not logged in.
Post a reply
Topic review (newest first)
As an aside, you can speed up the partial fractions bit by noting that
in other words, whenever the products in the denominator differ by one, you can do this. In general;
with x ≠ -a,-b and a ≠ b.
i think you're right, thank you very much bobbym
I think those two answers are algebraically equal.
Thank you very much to both bobs, tehe, I only wonder if anybody knows how:
Could have been obtained. If only out of interest
Strictly, it is the expression within the log that isn't defined (eg. 1-z) And log has been extended into complex numbers to allow for the log of a negative. This is just as well in view of what I do below.
This may seem strange but (i) you are right to think it's all down to the laws of logs and (ii) logs still obey those laws even for values that are undefined in real numbers.
even though you might think those negative logs shouldn't 'exist'.
And all this means that your first answer is correct.
They are all antiderivatives of that integrand. This can be proven by differentiation. I would do it the way you did.
Hmmm...that's what i was trying to do, well, I really need to get off to bed, but for what it's worth, our old friend Wolfram gives, as its answer to
Which I can get from my answer, so I guess it's just a question of laws of logs, I don't suppose anyone has any ideas?
Okay, thanks a lot bobbym, i'll give it a try in the morning, perhaps i just made a mistake when i put:
Back into the equation, which is why I couldn't get the answer which the book has:
An antiderivative is a class of functions, there can be more than one. In definite integration it all gets absorbed into the constant of integration. This is how I understand it.
Sorry bobbym, yes, i agree, i get the same for both.
I hadn't, but having done so, my answer:
And wolfram's gives:
Which, surely, is equivalent to:
for both. Not worrying about the 1 / 2.
Well, this one wasn't that easy, but my working so far (which I think is on the right track, but maybe here's where my problem is after all) goes like this:
From which I get:
But the fact that I can't get the correct answer from here and that wolfram alpha tells me that:
Rather makes me suspect that I'm wrong about this I'm sure it must have something to do with the fact that the natural logarithm isn't defined for z ≤ 0, but i just can't seem to work out why this is the correct answer and my textbook disperses it's calculus over the course of the book just a bit so it's not that easy to find this information, at least not without starting from page one and working through to the very end
|
OPCFW_CODE
|
I'm finally working on a new personal website.
It will be jasonrubenstein.com.
I haven't worked on a personal project of any kind in a few years. I haven't had the interest, really, but in the last few weeks something has been nagging at me. I needed to build something because NOT building anything was Driving. Me. Nuts.
So, a new personal website, my little "ME!" on the interwebs.
I'll have my music up there, and some words made into sentences corralled into paragraphs, and some photos, and some other things.
Yesterday morning, after the rolling up of sleeves and the brewing and drinking of coffee, I dove into hand-coding html and css to create a basic prototype of the design I have in mind. The design is minimalist, with at most three fonts (two sans-serif and one serif). Much like in music, where what is between the notes is as important (and sometimes more important) than the collected notes, what's not in space is as important (and sometimes more important) than what is collected in other parts of the space. I'm keeping this in mind as I go.
Back to yesterday: once I had a working prototype of a webpage, I shattered it into several pieces.
Those pieces became the building-blocks for several web pages.
Once I had the pieces of the shattered webpage, I jumped into python and built a little webpage-builder function that consumes shards of shattered prototype-webpage and produces several new, different, webpages. This little, and simple, html rendering engine builds the pages for my new website.
Once I turned the webpage-shards into proper webpages, I used a couple of open-source packages to set up a webserver. Using Greenhouse and Feather, I set up a little server in the comfort of my own home. (The future! It is here!)
I made a deliberate and certain decision to eschew the use of a templating engine (Cheetah, Mako, etc). I'm having more fun writing my own rendering functions. I'm not using a framework because, well, what fun would that be? That and I'm not framing a subdivision of houses, I'm building a little mid-century-modern joint with big windows and Helvetica.
I'm shooting for simple.
I decided against more commonly used http server solutions as I want to work with a newer open-source package and help work out the kinks in whatever way I can.
Yesterday, from 7am through 7pm, was really, exceptionally, fun.
I've been experimenting with fonts with the intention of beautiful web typography. I love typography and clean, minimalist design, and I'm going to see if I can get what's in my imagination out onto the screen.
I'm learning different things than I learn at work, and remembering things I have forgotten I knew. (Or at least I've forgotten that I remembered how to do some of this stuff a few years ago but in the meantime of non-use had forgotten to remember it, or simply forgot it, and now have remembered where I put some of this knowledge).
This project is going to take a while. I have a few pages of content to work through followed by wrestling some css into submission. Not to mention some spit&polish of the http server, the image server, and deciding from where the hell to serve the mp3 files of music.
But since I have an addiction to shipping product, this thing will be live relatively soon.
The most important thing, the thing that is most important, the point that makes the point is: I'm finally, Finally, finally working on a project for the love of working on, and shipping, a project. I want to learn, hands-on, how servers along the lines of Greenhouse and Eventlet really work, and how something like Feather or Spawning really work. I want to hack at css to make pretty san-serif to happen on my computer screen, even though the problem has been solved 132,619 times already.
So, I'm doing this because not doing it was becoming impossible. Well, and the vanity of my name on a live website that's all about me.
Vanity might have a little to do with this.
Update: At lunch, a friend asked me if I thought I was over-engineering a solution for a very simple project (a static website). Yes, I am! The website is the macguffin , the thing that gives me the reason to go on this journey of coding. I could just set up nginx and serve html and be done with it. But that's not the point to me; the point to me, right now, is rolling up my sleeves and playing with some tech. Next round, I'll work on something that solves a real problem. This round, the problem I want to solve is personal, and not technical.
|
OPCFW_CODE
|
Does a PC's shield guardian make death saving throws?
So, you have a shield guardian, and its amulet, too! It's got 142 hp, and regenerates 10 hp a round; it's invincible, right?! No. All too soon something will knock it down to 0. What then?
Does the shield guardian make death saving throws, or when it hits 0, is it just lights out, little x's on the eyes?
What do the rules say about it? What has worked for you? Is there anything in prior versions or in lore that provides any guidance?
Related: Why do we assume that PHB rules apply to monsters?
It is up to the DM
The rules do not provide specific guidance, so in the end the DM will need to decide.
But the PHB says PCs make death saving throws
The PHB says:
If damage reduces you to 0 hit points and fails to kill you, you fall unconscious.
And then it says:
Whenever you start your turn with 0 hit points, you must make a special saving throw, called a death saving throw, to determine whether you creep closer to death or hang onto life.
But, when the PHB says "you", it means a PC. So, if a shield guardian were a PC, it would make death saving throws. But a shield guardian isn't a PC. Or at least, since nowhere in the rules it says to treat shield guardians as PCs, then a shield guardian isn't a PC unless the DM decides it's a PC, and in that case you're pretty firmly in houserule territory, so the DM decides.
What is a shield guardian? Is it a monster? An NPC?
Is a shield guardian a monster? Well, if you encounter a hostile one, sure. And it definitely has a stat block. Or maybe it's an NPC.
So, do monsters and NPCs get death saving throws?
The PHB goes on to say:
Most DMs have a monster die the instant it drops to 0 hit points, rather than having it fall unconscious and make death saving throws.
although it also adds that there are exceptions, and that:
Mighty villains and special nonplayer characters are common exceptions; the DM might have them fall unconscious and follow the same rules as player characters.
So, if you choose to look at a shield guardian is a monster or an NPC, then it is explicitly up to the DM.
Regeneration
As GoodNickname noted in comments, perhaps the wording of the shield guardian's regeneration ability provides a clue.
The regeneration ability says:
Regeneration. The shield guardian regains 10 hit points at the start of its turn if it has at least 1 hit point.
That the ability only works when at 1 or more HP might hint that the designers intended for it to be alive at 0 HP, instead of just instantly dying, otherwise, why specify it?
I think this is a little thin, since we don't know designer intent, and making guesses based on slight wording differences doesn't seem super solid, but it's worth mentioning.
What about guidance in other 5e source materials?
Shield guardians are mentioned numerous times in the 5e materials (Curse of Strahd, Icewind Dale: Rime of the Frostmaiden, Princes of the Apocalypse, Out of the Abyss, Tome of Annihilation, Waterdeep: Dungeon of the Mad Mage), but nowhere does it describe what happens when a shield guardian hits 0.
Okay. What about things that are sort of like shield guardians, like familiars, sidekicks, steel defenders, homunculi, golems?
There are many creatures that bear some (perhaps slight) resemblance in form and/or function to a shield guardian.
When a familiar drops to 0 hit points, "it disappears, leaving behind no physical form." (PHB/Basic Rules)
When a sidekick drops to 0, it "makes death saving throws, just like a player character". (Tasha's Cauldron of Everything)
The rules make no mention one way or the other about death saving throws for the artificer's Steel Defender (Tasha's Cauldron of Everything), the homunculus (Xanathar's Guide to Everything), or the many, many kinds of golems (Basic Rules, multiple other sources).
If the shield guardian does make death saving throws
The shield guardian description says:
If the guardian is within 60 feet of the amulet's wearer, half of any damage the wearer takes (rounded up) is transferred to the guardian.
This isn't voluntary.
The PHB says:
If you take any damage while you have 0 hit points, you suffer a death saving throw failure.
So, if the shield guardian is at 0, and the amulet's wearer takes damage, then the shield guardian takes half, and so suffers a death saving throw.
The DM has to decide
In the end, the DM will have to decide. Either way has consequences. Allowing death saving throws adds some logistic burden, not allowing them might mean that the guardian can't really participate in high-level combat, since once it's below 1/4 to 1/2 hit points, and hitting 0 will render it useless, then, the prudent thing to do might be to withdraw it from combat.
And then, of course, what to do with a dead shield guardian? But then, that's another question.
You might note that if your shield guardian is making death saves, it is also taking a failed death save every time you take damage, if you are within 60 feet.
@kirt Good point -- added.
The fact that the Shield Guardian's regeneration ability specify it only works when at 1 or more HP might hint that the writer intended for it to be alive at 0HP, instead of just instantly dying (otherwise, why specify it?)
Warforged are not constructs. They’re humanoids.
@GoodNickname I added a section on regeneration.
@ThomasMarkov Good point. I removed the section on warforged.
The regen ability working at 1 hp almost certainly shows they expected it to be dead at 0 hp, it's ability literally gets called out as stopping. Compare to a troll which is alive at 0 hp. It's a limited use magic item, getting it killed is just the end of that use.
@SeriousBri You might consider developing that into an answer. The 0 hp/1 hp thing for regenerators isn't consistent, btw. Vampires say 1 hp.
Vampire's are a good call because their regen deliberately switches off at 1 as well so they stay in mist form (their 'dead' state). I will have a look at an answer tomorrow if I get time
|
STACK_EXCHANGE
|
Image Source: Wikimedia Commons
Time travel is, of course, the stuff of science fiction. H. G. Wells wrote about it in 1895, and it’s been fertile territory for film and television makers ever since. But the ability to store and retrieve digital records has at least made it possible to travel back in time with data...
For users of statistics, it turns out this can be a pretty handy thing to do: estimates and measures of many indicators get revised as methods improve, and as geographies and economies shift over time. A statistical data Time Machine can help answer questions like how much estimates been revised - and even whether different decisions might have been taken with the benefit of hindsight.
Now, 2015 is the year of the Data Revolution. So, let’s make a contribution by making a Time Machine using World Bank open data. We're pleased to announce that the World Development Indicators Database Archives are now available in the DataBank Application, read more below on how we got here!
Time Machine Version 1: a big bookshelf
Until fairly recently, the only Time Machine option was to use printed data tables in paper publications, like the World Development Indicators (WDI). Here’s my own primitive version: my bookshelf - this takes me back to 1997, and, if I look a bit harder, I can even get back to 1978 (when World Development Indicators was first published) or 1966 (when the World Bank published the very first World Bank Atlas.)
Now this may not look like a proper Time Machine (I mean, where are the flashing lights?), but it does have one big advantage: it’s very easy to use. But it has a few pretty serious weaknesses: for instance, it’s hard to extract the time-series data: the 1999 edition contains tables which gives the latest estimates for 1997; the 2000 version has estimates for 1998; and so-on. But what if the estimates for 1997 were revised in the meantime? And, of course, you need a big bookshelf.
Time Machine Version 2: downloadable database snapshots
So a few years back we started to make historical versions of the WDI database available online, at the main WDI page:
This Time Machine is much better than the bookshelf version (still no flashy lights, though): each database has all published time-series, based on the latest available estimates at the time of publication. And the database contains some metadata - or notes - about the database and the series. But some versions of the database are in proprietary formats, like Microsoft Access or Excel. And suppose you want to see the history of revisions to certain series, like population or GDP? Since each database is a snapshot at a point in time, you’re going to have to do some clever dataset manipulation to combine everything together, and that’s kind of tricky and error-prone. Unless you happen to be into that sort of thing, of course.
Time Machine Version 3: a query tool with selectable data revisions
Inspired by others, and especially the work of the team behind the FRED and ALFRED databases at the St Louis Federal Reserve, we’ve been working on improving this by constructing our own proper data archive (by the way, if you haven’t seen ALFRED before, do so now - as some folk seem to have figured out, I’m a pretty big fan!) You can now access the World Development Indicators Database Archives via the DataBank applicaiton.
Here’s how one query looks: I’ve selected the United States GDP time series (in dollars) between 1980 and 2000, from five archived databases: those published April 2002, 2005, 2008, 2011, and 2014. The first four databases all had very similar values - small revisions between each database update. But the 2014 database update revised the series considerably. Why? Because the U.S. introduced the latest national accounting standards in 2013.
Incidentally, here’s how a similar query (just using two vintages of US GDP, current and 2011) looks in ALFRED:
To build our new archives system, we started by finding as many old versions of the WDI database as we could: we’ve managed to get back to the late 1990s, which is around the time that the WDI was first published on magnetic disks for use in personal computers. Early electronic versions of the WDI used a system called “STARS,” designed for Microsoft’s DOS based PCs; later versions are available in much more standard database formats, such as SQL. Once we managed to get a copy of each database, converting them to a common database format was pretty easy -- though in a couple of cases we had to find an appropriately old version of DOS to run the STARS system! One hard part, though, was tracking the revisions to the coding system we use. Countries have come (e.g. South Sudan) and gone (e.g. Yugoslavia); data series have occasionally been renamed or coding schemes been adjusted; and so-on.
Anyway, we’ve done our best to combine all the archived versions of the WDI into one “archive” database (the WDI Database Archives, or WDI-DA for short), and we’ve made this available today as a “beta” test release to users through the DataBank application. Typically, in the DataBank, you select a country and series combination to view the time series (Malawi’s GDP, for example). With the archive version, WDI-DA, you’ll also select the database version - the latest version will give you the most recent (and therefore, we hope, the most accurate) estimates - currently, that version would be December 2014. But you’ll also now be able to add estimates for any of the previous versions to your query. No more searching through pages and pages of previous books (unless you’re one of those who still like the feel of real paper). And no need, any more, to download all those copies of the database.
What would you like to see in an open data time machine and API?
We know that this new “Time Machine” is not going to be for everybody; it’s aimed at a relatively small group of specialized users. We know, too, that there is a lot more to do: we’d like to add the “version” dimension to the API, for instance, and include the notes and metadata that accompany each archive. We know that there might still be some inconsistencies in the naming and coding conventions over time, though we’ve been fixing what we can: we’ll talk in more detail about this in a later blog. We know that the old databases don’t have the same level of metadata - and frankly it’s been difficult to put the metadata into the database correctly (right now, we’ve just included the metadata from the latest version). And we know that the database might raise many more questions than we will always have answers for - especially those of the “why did you revise these numbers” variety....
So please bear with us: that series in the chart above is just one of the roughly 220,000 we update each quarter. And so tracking the reason behind every revision that has occurred over the 50 or so database updates in last 25 years is, well, you know, tricky. Still, don’t let that stop you asking questions: it would be really helpful to know where we should focus our effort.
As I mentioned at the start of this blog, one use case for archives like this is to find out what data was available for decision-making at the time, compared to what we now know: the “hindsight” factor. We’re also using the archive databases ourselves for quality control and consistency checking purposes. And we might also be able to use them for better understanding the accuracy of some of the estimates. Will you use these archives - and, if so, how? Let us know in the comments below, via our Helpdesk or on Twitter to @worldbankdata.
|
OPCFW_CODE
|
At Hogwarts, why didn't many students own snowy owls?
Just like the question indicates — why didn't more students own snowy owls? Eeylops Owl Emporium clearly states that they sell snowy owls among the other types of owls they sell, although I don't have the exact quotes. And it states clearly in GoF that there weren't many snowy owls at Hogwarts:
Instinctively, Harry looked up, but there was no sign of white among the mass of brown and gray. -Goblet of Fire, Chapter 13 "Mad-Eye Moody", pg. 194
This sentence strongly indicates there are very little snowy owls at Hogwarts. And we know that students — for example, Lavender Brown — think Hedwig, a snowy owl, is beautiful. So, why didn't more students own snowy owls?
To be fair: that sentence doesn't mean few students owned snowy owls; it only means Harry couldn't see any at that moment. There's loads of nooks & crannies & roosting spaces in the owlery. I think the best reading of that sentence is that from that vantage point, he couldn't see any white owls.
In-Universe Guess: most likely cost. Snowy Owls are some of the rarer types and therefore likely to be more expensive. Out of Universe Guess: JKR just wanted to make a point of Hedwigs absence, and also to make Harry more unique.
Snowy owls are probably expensive.
It seems likely that snowy owls are more expensive than most other types of owl. Hagrid had gotten money out of the Potters’ vault to buy Harry’s school supplies, and he bought Harry’s pet for him as a birthday present.
“Just yer wand left – oh yeah, an’ I still haven’t got yeh a birthday present.’
Harry felt himself go red.
‘You don’t have to –’
‘I know I don’t have to. Tell yeh what, I’ll get yer animal. Not a toad, toads went outta fashion years ago, yeh’d be laughed at – an’ I don’ like cats, they make me sneeze. I’ll get yer an owl. All the kids want owls, they’re dead useful, carry yer post an’ everythin’.” - Harry Potter and the Philosopher's Stone, Chapter 5 (Diagon Alley)
The Potters were rich, so their vault had a lot of money, and Hagrid would have wanted Harry to have a particularly special birthday present since he knew Harry had not been treated well and this was likely to be the first present he ever received.
“Twenty minutes later, they left Eeylops Owl Emporium, which had been dark and full of rustling and flickering, jewel-bright eyes. Harry now carried a large cage which held a beautiful snowy owl, fast asleep with her head under her wing. He couldn’t stop stammering his thanks, sounding just like Professor Quirrell.
‘Don’ mention it,’ said Hagrid gruffly. ‘Don’ expect you’ve had a lotta presents from them Dursleys.” - Harry Potter and the Philosopher's Stone, Chapter 5 Diagon Alley)
Hagrid would have had the money (from the Potters’ vault) and the desire to get Harry a particularly special birthday present. It seems likely that the reason snowy owls are rare at Hogwarts is because they are expensive, and most parents do not choose such expensive pets for their children attending Hogwarts.
beat me to it! Was citing the exact same quote. My only niggle is that I don't believe Hagrid was spending Harry's own money in buying the owl, otherwise it's not really a present :)
@NKCampbell Oops sorry, wasn’t trying to beat you! I fully agree that usually you can’t buy someone a present with their own money. The reason I suggest Hagrid may have done that is because I’m not sure how much money of his own Hagrid would have.
You may also want to note that snowy owls are not native to Britain, or at least aren't anymore. They might be harder for even a wizard to obtain or own outside of their native range.
no worries and no harm done!
@NKCampbell Thanks! :)
@SpaceWolf1701 That's mentioned in Goblet of Fire too: “Hedwig’ll attract too much attention,” said Hermione at once. “She stands out. A snowy owl that keeps returning to wherever he’s hiding . . . I mean, they’re not native birds, are they?”
Hagrid immediately pushes Harry off to the very best stores to buy all the finest things that are specifically needed for Hogwarts. Hagrid doesn't appear to be wealthy himself (and harry is nouveau riche) but there's no need to scrimp and I strongly agree that he was likely spending Harry's money rather than his own.
Ha grid is rich enough to buy barrels of Butterbeer
It's easy to buy the best when you're spending someone else's money.
|
STACK_EXCHANGE
|
Last night I went through and got rid of every remnant of the "utterlyboring.com/blog/archives/xxxxxx.php" addresses from this site's archives, replacing them with their current, more useful (though sometimes longer) URLs. I had a pile (nearly 1300) statements like this in my .htaccess file for this site:
Redirect Permanent /blog/archives/001275.php http://utterlyboring.com/archives/2003/12/22/thing_may_break_for_a_bit.php
I've long since quit using the "/blog" subdirectory here, because I realized that I'm never really going to put anything else here other than the blog, so there's no sense for subdirectories. (The original reason for the subdirectory was because this site was hosted under jakeortman.com, and I had my resume, portfolio, etc... on here, so subdirectories were needed.)
So I had those redirects in place since I did a massive rebuild nearly a year ago. Now I'm getting rid of those redirects now that there's no remnants of them anywhere on my site, and if there are old bookmarks to them at this point, oh flippin' well. The entries are old, and Google's long since picked up the proper URLs, and now that I've gotten my 404 page to work with dynamic publishing (thanks to this article), I just hope people are smart enough to use a search form.
I also got rid of my RSS 1.0 feed, and made an .htaccess redirect to the RSS 2.0 feed (whose location has changed as well to www.utterlyboring.com/index.rss). If this buggers up your feed readers, let me know, but I have setup .htaccess redirects for all the various addresses (FeedDemon got the redirect, and immediately started reading from the new address, changing its settings -- man, that's slick). I just didn't see the need for two RSS feeds and an Atom feed.
That gives me one less index template to rebuild upon posting, as well. So that makes it so that my RSS 2.0, Atom, Main Index, Main Archive, and Left Column Archive and Stats (up on the right column of this page) are the only index templates getting rebuilt on posting, and the individual entry's page is getting built for that entry as well. Otherwise, everything else is getting built dynamically and cached with MT 3.0's dynamic templates, making posting and rebuilds fly right along (much more so than before).
Long time readers of this site (and I'm talking REALLY long time readers) know that this site had a stage for a few months where there was no posting. While the archives here date back to Nov. 2002, I've been posting on this site for longer than that. The problem was that I had a problem with my server at my previous host combined with an early 2.x build of MovableType that I didn't know how to use. Being an idiot, I didn't back up, so I lost several months worth of posts, and I got too busy in my new job to do anything about it. I thought the entries were gone.
Then Barney and I were discussing the Web archive, and I thought to myself "Wait a minute, I betcha they have those old posts." and sure enough, they did. Granted, I posted for a couple weeks then, but it was several posts a day. So expect those to be copied/pasted as entries into this system as time allows.
So expect more random crap to show up on this site on a future date. It'll be dated, and some of the URLs won't work, but I've put quite a bit into this site over the years, and there's no sense on not having it all in one place.
|
OPCFW_CODE
|
Free Data Science Courses Ideas
Category : Uncategorized
Most men and women think that once they’re certified from a flight school, they can fly any kind of airplane. Nowadays you have to diligently follow all of the program material supplied in the training course. This course will provide the student value for money because one can also do the job for the length of the training course.
The training course is at an introductory level with assorted practical assignments. This course will provide you with in learning the significance of information science. These courses will supply you with the mandatory learning for honing your abilities and have projects to finish.
If you would like to turn into a data scientist without a higher education term papers sale degree, then there are some options that could choose. A degree in any data science courses will provide you with the skills you must process and analyze enormous data. Only basic math is needed to begin learning the advanced techniques involved with compiling and understanding statistics.
For there are only a few folks with the proper data scientist job qualifications. Rushing to have a job in data science (going through one of the above-mentioned methods) means you’re going to be competing against thousands and thousands of others in the identical specific position. For someone who’s passionate about data science and wishes to earn a mark in the area, choosing online courses in data science and the related field is the perfect way to get started.
Many people don’t understand what things to expect and might have a good deal of questions concerning the what’s and the hows. The practical expertise supplied by the men and women who’ve been in this field for several years. Relevant work experience might be considered.
A number of the industries using data science are explored here. You might opt for a data science certification based on your need to learn the entire data science skillset or you may want to brush up on your existing data science abilities or you may want to obtain a knack of the trending big data and data science technologies. 43% of information scientists utilize R.
Vskills Certification Course in Data Science with Python will give a way of transcending the theory of information science with the support of Python and several other integrated toolsets. Logistics Logistics is just one more field that has used data science to increase its efficiency. So if you’re a DASCA certified Data Science Professional, then be assured that you’re on the route of succeeding.
Comprising multiple projects and spanning an estimated 4 months, its aim is to find each student familiarized with the most frequent instruments and methods utilized in Data Science. If you’re a busy professional, the on-line class is there to find in-depth understanding about data science. Various other process analysis strategies which use event data will be shown.
As you may have noticed by now, Udacity is supplying a lot of information science courses. https://grademiners.com There’s no need to fill the application form in the event you are spending online. You should get an appetite for data.
What You Should Do About Free Data Science Courses Starting in the Next 10 Minutes
In the time of IT, there are several undiscovered alternatives to raise revenues and strategies to acquire more economical. If you’re keen on bagging a dream job in a trusted company, the datascientist is a best alternative. The better part of the program material is absolutely free, but you can pay for its premium access also.
|
OPCFW_CODE
|
Some of you will know, use and might even love the CiviRules extension. We certainly do! Quite a few of the organizations we support with their CiviCRM stuff use and love it, and judging by the question on StackExchange and issues and pull requests on GitHub quite a few more do too!
This is wonderful! But it also means that quite a few organizations are faced with the challenge of keeping CiviRules up to date with the latest CiviCRM versions. And want to make sure the functionality remains the same whenever new upgrades or little fixes to CiviCRM happen. So we think it would be nice if we were able to:
- ensure CiviRules is compatible with the latest and greatest core CiviCRM
- add a bunch of automated unit tests to CiviRules that would be run together with the core automated tests so we immediately know about bugs or software conflicts when something changes in core.
To make this possible we need funding. Initially some funding to make...Read more
If you contribute to CiviCRM, we want to know about it. Now, you might ask "don't you already know given that contributions improve the code, coordinate events, extend the system, etc.?" Well, yes, that is true, but coordinating all of that information in such a way that we, as a small Core Team, can recognize it effectively is no small task. And since contributions across all aspects of the project will play an increasingly important role in improving the code and growing the ecosystem, we need your help to better understand who's done what, when, and how it fits into the overall roadmap, working groups and various initiatives within the CiviCRM project.
If you contribute to CiviCRM, we encourage you to take a moment each week, month, quarter... however frequently (or infrequently) you want... to record you contributions to the project...Read more
You are invited to the first CiviCamp in the UK, brought to you in Manchester, the “uncrowned capital of the north”! Firstly, for those not familiar with the term, what is a CiviCamp? It’s a bit like a CiviCon, the annual conference for CiviCRM, where people gather for workshops and networking, to get a better idea of what CiviCRM is capable of and how to implement it more effectively in their own workplace.
This CiviCamp is mainly focused at users and those who are exploring CiviCRM to implement, but we also welcome implementers and developers to come and share their knowledge with others, and to pick up ideas from the community.
We already have some confirmed workshops, including: Introduction to CiviCRM, Using CiviEvents to manage training, Data Protection for the Third Sector, CiviHR, Open Data, CiviCRM and SMS, and the CiviBooking Extensions.
There will also be time to bring your own issues and questions in our ‘Birds of a Feather’ sessions to explore...Read more
CiviCRM will have a booth at one of the biggest free and open source conferences: FOSDEM.
The FOSDEM conference is held every year in Brussels (Belgium) and attracts more than 8000 participants from all over the world.
See http://fosdem.org for more information.
This year, the conference will be on Saturday 4 February and Sunday 5 February 2017.
Having a booth at a conference with more than 8000 open source enthusiasts, more than 600 lectures and lightning talks by organizations like MySQL, Mozilla, Python... is a great opportunity to promote CiviCRM!
Help at the Booth
Want to help promoting CiviCRM? Join us at the booth! Please email me at email@example.com for the practical details.
The wiki is kind of like that drawer in your kitchen where you put things that seem useful but don't really have "a place". And it works okay, especially when its your kitchen, because you have a decent idea of what you've chucked in there over the years.
Hi my name is Sean and I'm an aspiring CiviCRM developer. After many years as a CiviCRM user and administrator, I've carved out some time in my life to effectively "go to school" on CiviCRM development. Last month, I got started by diving into reading the wiki, hoping it would serve as my text book. But instead I found someone else's kitchen drawer filled with – useful things, for sure – but also that familiar medley of...Read more
JMA Consulting is pleased to welcome Jon Goldberg as our new Director of Operations effective today.
After a brief stint as a political organizer, Jon spent 13 years working in various capacities at a non-profit legal organization, primarily in IT. In 2010 he co-founded Palante Technology Cooperative and started their CiviCRM department, where he worked for 7 years. Outside of work, Jon can be found engaging in queer community organizing, (dis-)assembling electronics, and training parrots.
"I'm really excited to have Jon join us given his keen appreciation of how to help progressive organizations achieve their missions using CiviCRM. He's got a deep and wide knowledge of CiviCRM. I appreciate how he gives back to the community like through StackExchange, where he is the top ranked CiviCRM contributor," said Joe Murray, President of JMA Consulting and co-author of...Read more
A few weeks ago, we rolled out an outline of how we’ll manage contributions to CiviCRM going forward. Full details about the framework are now online here. For this post, we’re pleased to announce that we’ve taken the effort forward by enabling self-reporting on contributions via a simple contribution log.
While managing community contributions is central to the Core Team’s role, it truly is a complex task to onboard, evaluate, reward and recognize contributors that come to the project for different reasons and from different sources. It’s more than a full time job. Because of this, we run the risk of diluting the efforts of our senior developers, and hence their capacity to work on CiviCRM (the software). At the end of the day, nobody wants that! So, in order to keep the Core Team...Read more
The CiviCRM Core Team is pleased to announce that it will begin hosting monthly webinars for project contributors and supporters (members, partners, sponsors) beginning December 8th, 2016, and continue on the second Thursday of each month throughout 2017. These webinars will be a mix of overall project updates (provided quarterly) and technical improvements and demonstrations (provided 8 months out of the year). A full schedule and details will be provided in advance at http://civicrm.org/webinars
As a project, CiviCRM continues to evolve, relying on community support and contributions more than ever. Core Team webinars are intended to provide another opportunity to connect contributors and supporters with the progress and direction of both the software and the project as a whole. While these webinars are presentations by the Core Team, Q...Read more
Long time contributor Eileen McNaughton recently won the New Zealand Open Source Award for Open Source Contributor, so we thought we’d reach out to a few members of the community to get input on her efforts with CiviCRM. Erik Hommel and Dave Greenberg are kicking off this blog post with their own personal thanks to Eileen. If you have a comment, story, or just want to say thanks, post it in the comments!
Thanks from Erik Hommel
I was really really happy last week to read that Eileen McNaughton won the NZ Open Source Contributor Award 2016. I can not compare to other open source projects as I only know the CiviCRM community really well, but man does she contribute! Always approachable on our communication channels, ready to help anyone in the community, fixing code, enhancing code and mothering most of the unit tests. There are times where I thought she had two lives at the same time until I read...Read more
Nearly 78% of sites using CiviCRM are on either version 4.6 or 4.7 (check out CiviCRM stats online). Why is that significant? Because those are the only two community supported releases currently. If you’re not on one of these versions, most importantly, don’t be alarmed. There might be a reason you’re not… perhaps you’re using a partner that continues to support an previous version, or have customizations that prohibit an upgrade. If that’s the case, feel free to skip the rest of this post. But, if there’s no good reason not to upgrade, then read on.
What do we mean when we say that 4.6 and 4.7 are the only versions being supported? Well, just that. CiviCRM 4.7 is the latest stable version of the software and is the primary focus of the Core Team. Version 4.6 is the current LTS, meaning that security updates (but not new features) will be back ported to it for a not-yet-determined amount of time. If you’re on any other version, then...Read more
|
OPCFW_CODE
|
As we frequently note, a staggering number of real-world software products start their lives as Access databases running from a shared folder somewhere. There are professional developers who end up maintaining these monstrosities.
Gregory has had the misfortune of being one of those developers. A client has a terribly performing Access database, and it happens to be the driver of their business: it generates insurance quotes for an insurance company.
Let's take a look at some of the code.
'A Pause function Public Function HoldIt(Longish As Integer) Dim Startof Dim temp temp = 0 Startof = Second(Now) Do temp = Second(Now) - Startof Loop Until temp > Longish End Function
Hey, I think I found the performance problem. The only good thing I can say about this busy loop is that they actually check the system time, and didn't just throw a pile of iterations at it and hope it was good enough.
Then again, why do they want a pause function anyway? I'm not sure I want to know.
Public Sub MegaQuit() Dim FredBlogs Dim intx As Integer Dim intCount As Integer intCount = Forms.Count - 1 For intx = intCount To 0 Step -1 If Forms(intx).Name <> "HiddenStarter" Then DoCmd.Close acForm, Forms(intx).Name End If Next If pboolCloseAccess <> True Then FredBlogs = MsgBox("Application will close. Continue?", vbOKCancel, "EXIT") If FredBlogs = vbCancel Then DoCmd.OpenForm "Start_up" Else pboolCloseAccess = True DoCmd.Quit acQuitSaveAll End If End If End Sub
This method closes all the open windows, asks a confirmation, and then either returns to the startup screen or quits. It's honestly nothing spectacular, aside from the mega-name of the function, and the use of
FredBlogs. TIL that "Fred Bloggs" is the UK equivalent of "John Q. Public" in the US- a placeholder name for the average person on the street.
No, that doesn't help me understand why that's the name of this variable, but at least I learned something.
But let's close out with a function that outputs some error messages. I expect to see t-shirts based off these error messages on Shirts that Go Hard before the end of the week.
Public Static Sub FrErr(NameOfApp) Dim Count Count = Count + 1 If Count < 5 Then On Error GoTo FrErrErr MsgBox "I'm broken. I Don't know what happened (I wasn't running at the time)," & vbCrLf & _ "but I called: " & _ NameOfApp & " and bang! The duff code came back with " & vbCrLf & _ Err.Number & ":" & Err.Description & ". Sorry." Else MsgBox "I'm broken. I Don't know what happened (this isn't the first time)," & vbCrLf & _ "but I called: " & _ NameOfApp & " and bang! The duff code came back with " & vbCrLf & _ Err.Number & ":" & Err.Description & ". I'm very sorry. Have you considered restarting the PC?" End If Exit Sub FrErrErr: MsgBox "I'm Broken. I Don't know what happened and when I tried to find out I got an error. Sorry.", , "Sorry" End Sub
Duff code is not to be confused with Duff's Device
I'm broken. I don't know what happened (this isn't the first time).
This post originally appeared on The Daily WTF.
|
OPCFW_CODE
|
Support Limitations for Stateflow Software Features
Simulink® Design Verifier™ does not support the following Stateflow® software features. Avoid using these unsupported features in models that you analyze.
ml Namespace Operator, ml Function, ml Expressions
The software does not support calls to MATLAB® functions or access to MATLAB workspace variables, which the Stateflow software allows. See Access MATLAB Functions and Workspace Data in C Charts (Stateflow).
C or C++ Operators
The software does not support the
sizeof operator, which the
Stateflow software allows.
C Math Functions
The software supports calls to the following C math functions:
pow(only for integer exponents)
The software does not support calls to other C math functions, which the Stateflow software allows. If automatic stubbing is enabled, which it is by default, the software eliminates these unsupported functions during the analysis.
For information about C math functions in Stateflow, see Call C Library Functions in C Charts (Stateflow).
For details about automatic stubbing, see Handle Incompatibilities with Automatic Stubbing.
Atomic Subcharts That Call Exported Graphical Functions Outside a Subchart
The software does not support atomic subcharts that call exported graphical functions, which the Stateflow software allows.
For information about exported functions, see Export Stateflow Functions for Reuse (Stateflow).
Atomic Subchart Input and Output Mapping
If an input or output in an atomic subchart maps to chart-level data of a different scope, the software does not support the chart that contains that atomic subchart.
For an atomic subchart input, this incompatibility applies when the input maps to chart-level data of output, local, or parameter scope. For an atomic subchart output, this incompatibility applies when the output maps to chart-level data of local scope.
Recursion and Cyclic Behavior
The software does not support recursive functions, which occur when a function calls itself directly or indirectly through another function call. Stateflow software allows you to implement recursion using graphical functions.
In addition, the software does not support recursion that the Stateflow software allows you to implement using a combination of event broadcasts and function calls.
For information about avoiding recursion in Stateflow charts, see Avoid Unwanted Recursion in a Chart (Stateflow).
Stateflow software also allows you to create cyclic behavior, where a sequence of steps is repeated indefinitely. If your model has a chart with cyclic behavior, the software cannot analyze it.
For information about cyclic behavior in Stateflow charts, see Detect Cyclic Behavior (Stateflow).
However, you can modify a chart with cyclic behavior so that it is compatible, as in the following example.
The following chart creates cyclic behavior. State A calls state
A1, which broadcasts a
Clear event to state B,
which calls state B2, which broadcasts a
back to state A, causing the cyclic behavior.
If you change the
send function calls to
use directed event broadcasts so that the Set and Clear events are
broadcast directly to the states B1 and A1, respectively, the cyclic
behavior disappears and the software can analyze the model.
For information about the benefits of directed event broadcasts, see Broadcast Local Events to Synchronize Parallel States (Stateflow).
Custom C/C++ Code
If your model consists of custom C/C++ code, Simulink Design Verifier supports analysis based on these settings:
If you enable import custom code option and the custom code analysis option is set to
Off, the model is compatible for analysis, but calls to the custom code are stubbed during analysis.
If the import custom code option is set to
Off, the custom code is not supported and the model is incompatible for analysis.
Textual Functions with Literal String Arguments
The software does not support literal string arguments to textual functions in a Stateflow chart.
Stateflow Charts Containing Ports
The software does not support export function and subsystem build for Stateflow charts that contain entry or exit ports.
|
OPCFW_CODE
|
Support test discovery within fat JARs
I would like to run some Scala tests using Spark Submit. This is an environment where one has to create a fat JAR that contains the project and all of its dependencies, including test dependencies. Let's assume we already did this, and called this jar fat.jar.
We would like to run ScalaTest as follows, to run all tests in our fat JAR that belong to our project package.
java -classpath fat.jar org.scalatest.tools.Runner -o -w com.example
Alternative
Currently, there is no such option available, but there is a workaround. We can run ScalaTests in the jar as follows.
java -classpath fat.jar org.scalatest.tools.Runner -o -w com.example -R fat.jar
However, this approach has limitations.
It requires a fat.jar to exist on disk, and the file path to be known. If we use an external system to launch the jar, like Spark Submit, we may not know where this file lives, and we may not be able to provide the correct path to the Runner class.
If the file lives in an odd location, e.g. dbfs:/fat.jar like Spark Submit, this path is treated like a URL. This leads to the following exception: java.net.UnknownServiceException: no content-type at java.net.URLConnection.getContentHandler(URLConnection.java:1241).
There are workarounds.
Invoke Spark APIs to figure out where the JAR is located after upload.
Provide a malformed URL to ScalaTest, which then fails and falls back to the filesystem. If the malformed URL is actually a correct file path, it uses the filesystem instead.
Needless to say, this is not a great solution. Is it possible / feasible to implement auto-discovery within the same jar?
@Oduig I think this is an interesting use case, while I do not have an environment to try this, the related code is in this function:
https://github.com/scalatest/scalatest/blob/main/jvm/core/src/main/scala/org/scalatest/tools/Runner.scala#L1510
As what you were describing it is using URLClassLoader, which should work for valid URL using standard protocol like HTTP, I am not sure about dbfs though, it probably won't work following what you described.
The -R is passed in as runpathList for the function, I wonder if you use -R "" it will pass in a empty list (I'll try it from my side soon), which will use the class loader for class of org.scalatest.Suite:
https://github.com/scalatest/scalatest/blob/main/jvm/core/src/main/scala/org/scalatest/tools/Runner.scala#L1514
In your case I think it will be the class loader of the fat.jar?
Thanks for your reaction, I took it for a try. Just passing -R "" does not work, as there is a check in ArgsParser to prevent it:
if (dashArg != expectedDashArg)
throw new IllegalArgumentException("First arg must be " + expectedDashArg + ", but was: " + dashArg)
if (compoundArg.trim.isEmpty)
throw new IllegalArgumentException("The argument string must actually include some non-whitespace characters.")
This can be avoided by omitting -R entirely, this also results in the correct class loader being used. However, the same argument list is then passed to SuiteDiscoveryHelper.discoverSuiteNames, which then treats the empty list as an indication that there are no suites, rather than scanning the full classpath of the loader.
@Oduig Whao, you are really fast to test it!
Yes I think the code has been written in a way to avoid full classpath scan because of performance concern, I believe it was the time way before Spark was introduced and we didn't foresee the 'fat' jar use case like this.
Even if we attempt full classpath scan, looking at the code in discoverSuiteNames:
https://github.com/scalatest/scalatest/blob/main/jvm/core/src/main/scala/org/scalatest/tools/SuiteDiscoveryHelper.scala#L81
which uses also JarFile (from URL or file system) to read and scan the classes from the jar file, I think it will probably hit the same content-type problem for path like dbfs:/fat.jar. The Java reflection API allows use to load classes but unfortunately fall short of 'query' support, e.g. to query all classes under 'com.example' package. There's a java lib for that:
https://github.com/ronmamo/reflections
but unfortunately it has a pretty big dependencies and it probably not going to work for classes from dbfs:/fat.jar .
I can't think of a nice solution better than your current workaround yet, I think we need an additional way for ScalaTest to read and construct the JarFile instance other than the current URL and file system way.
@Oduig I think there's a way to get the location of jar file a class is loaded from, for example:
https://stackoverflow.com/questions/1983839/determine-which-jar-file-a-class-is-from
In the fat jar case I think we can get the location of jar file for class org.scalatest.Suite, hopefully that will return the location of the jar file on the file system, if that works then we can create a JarFile from it to perform suite discovery.
Do you think it may work?
@cheeseng To be completely honest, I don't have deep knowledge on the classpath and loaders, so this is some speculation: the classpath is already in memory, so it seems that it should be quicker to scan than a file on disk. That said it seems there's no easy API to use to do it...
I think there are two things that would help. First, we could catch the java.net.UnknownServiceException when the URL loader is used, and then use the file loader instead. This would make "dbfs:/" paths work out of the box.
Second, I tried your suggestion and it works! The file path it returns is quite weird but could be useful:
file:/local_disk0/tmp/addedFile1321117052481084364my_example_assembly_0_1_0-05bba.jar
If this file could be read, I think it would work. I have to do some further testing to verify that the path is accessible.
@Oduig Fyi I have an experiment branch here (branched off from 3.2.x-new):
https://github.com/cheeseng/scalatest/tree/experiment-fat-jar
I am not sure how I can test it, but may be you can. Fyi for testing purposse you may build scalatest-app jar file that includes everything by doing:
sbt "project scalatestApp" clean publish
The output jar path shall be stated in the build output.
It is getting late over here, I shall call it a day already. It will be interesting to see what you can get if you can test on my experiment branch and we may work out a solution for it. :)
@Oduig Sorry for replying late, glad to learn that your problems are solved now. I am not sure if it is good to include it as official support, let's see what @bvenners thinks about this. :)
|
GITHUB_ARCHIVE
|
I’m not getting the usual marks on the go board signalling which was the correct move, win-pct loss of different moves, etc. Not even for key moves. Not even for games I had already analyzed before and for which I know I could see such marks.
I am a site supporter. My profile says so, so I assume it wasn’t a problem with payment or something of the sort.
I used both firefox and google chrome for this and both have the same problem. Tried refreshing the page several times, and enabling and disabling the review, with no luck. I tried both Leela Zero and KataGo and neither helped with that.
At first I thought maybe the review was just taking longer than usual, but given that I can’t see the marks even for my older reviews, I guess that’s not the reason why.
I assume it’s probably either a configuration error by me or a bug on the site, but I’m surprised to find no other threads talking about this in the support forum (maybe I didn’t know where to look) and I’m really not finding anywhere where such a thing could be configured.
Thanks in advance to anybody who takes the time to read this, even if you can’t help. And, of course, any other ideas are welcome. Even if you feel it’s far-fetched, ideas are appreciated.
Update: I can now see it in my old games. I was confused about not seeing it in older games because I have a couple of duplicated SGF uploaded and I was looking at the wrong version of the games.
So as far as I can tell the problem only exists in the last game I uploaded. So I guess review taking longer is a real possibility then?
I can see that though you are definitely as supported, AI review did not run for a game of yours.
I triggered it manually.
If it happens again, please report that game using the “Report game” button on the right menu, we’ll track down what might be broken.
Update - something odd: although I triggered it, there still are not full analysis results.
I’ll investigate further
Something’s not working properly, obviously… anoek is on to it.
Thanks a lot!
Just in case it helps, I’ll post a couple more details:
- I’m not seeing red dots shuffling over the win-pct chart as I usually do.
- The game was played on Pandanet and exported as an sgf file.
- When I analyzed another game just now (also from an sgf file, but a historic game instead of one of mine: https://online-go.com/game/22630807) I get three static red dots instead of the usual dot-shuffling.
- My account was created by logging in through Google.
- 24hs ago, I analyzed a game and it worked properly. Didn’t try analyzing any other games in between that one and this one.
- The game you can see in my account is the second attempt I made of analyzing that game. I tried deleting the SGF file from my collection and adding it again to see if it helped, but it didn’t.
Thanks! That’s great!
I just uploaded another game that I just played and the same seems to be happening. So it might be a recurring thing with my account, for some reason. https://online-go.com/game/22656765
|
OPCFW_CODE
|
Kanban: Continuously improving our process!
In my previous post I explained how our development process works at a high level. Now I will try to go into detail and explain how we develop day-to-day.
Out of our team of 4 developers, one developer (me) has the role of product owner. And to be honest being a product owner is almost a full time job these days.
We start every day with a daily stand-up meeting in front of our Kanban board. Every team member answers the well known questions. When everyone has answered the questions we look at our "small controlled experiments"-timeline to see if there are experiments that will expire today which we have to evaluate. We discuss if the experiment has been successful or not. This discussion could lead to three different outcomes:
- We’re happy with the results of the experiment and embed this in our process;
- The experiment didn’t deliver what we’re hoping for, so we cancel the experiment;
- Undecided, so we continue with the experiment for another fixed period.
After that we can bring up new experiments to improve our process. The person who brings up the experiment explains his objective and improvement. We do not criticize the idea, we just give the experiment a due date. On that day we will evaluate and discuss the experiment. This way we continuously improve our process. There are no rules for the experiments, we only try to keep the impact of an experiment within reasonable limits.
At the end of the stand-up we pull new stories from our backlog. The product owner gives a small explanation about what the story is all about.
Our Kanban board
We use our Kanban board to visualize the current state of our activities and to manage the development process. We don’t use any digital tools for managing our process at the moment. Our Kanban board is the only and absolute truth.
We have divided our process into six stages. Every stage can contain a minimal and maximal amount of tickets at the same time.
We defined the following 6 phases:
- Isolated testing
- Code review
- Acceptance testing
All these phases are represented by a column on our Kanban board. We write our stories on stickies. One story per sticky. We often refer to it as a ticket or issue. Every ticket gets a number corresponding with an issue in our GitHub Kanban repository. This repository only exists for managing the issues, allowing us to link our pull requests to our Kanban stories.
A developer places his personal magnet on a ticket when he starts working on a particular ticket. If a ticket is ready to be pulled into the next phase we place a green magnet at the bottom right corner to indicate that the ticket can be pulled.
Phase 1: Start
When we pull a ticket from our backlog it will be placed in the Start phase. This phase only implies that the ticket will be picked up very soon. Tickets get pulled to the next phase by the order they appear in the start phase. Tickets are placed in start during the daily stand up. The product owner decides which tickets will be placed in start.
Phase 2: Analyze
The second phase is analyze. In this phase the product owner adds acceptance criteria to the ticket. These criteria are supposed to be very high level and should provide the developer with a lot of freedom in the way he implements the ticket. The criteria are commented on the issue in GitHub. When the product owner is done analyzing, a developer reviews the criteria.
Phase 3: Build
In this phase the actual coding is done. When we implement a story we always try to do this with the minimal scope in mind. Just implement the story's intend. For example if a story says: "As a recruiter I want to delete a candidate", don’t implement a popup asking the user if he is sure. If the product owner (or the stakeholders) would like to see a popup, they should create a story for this. When a front-end design is needed the developer makes one himself. We always start building in pair. Coding needs to be done using our coding standards. And it must be unit tested. Before a story can be pulled into the next phase the developer makes a screencast that demonstrates the story.
Phase 4: Isolated testing
The product owner watches the screencast and tests the ticket on our staging environment, to see if it works and if all acceptance criteria are met.
I have to admit, because the criteria are very high level it can occur that it was not implemented the way I expected. We consider this a win! Because the feedback-loop was very small, not a lot of time and other resources were lost, and we learned a lot! The product owner learned that his criteria were wrong, and the developer learned that his interpretation was wrong. So the next ticket will have better criteria and will be better understood.
On the other hand, my experience as a product owner is: I am often positively surprised by the implementation. By giving this kind of freedom to a developer you get better ways of approaching a problem than you can think of yourself.
Phase 5: Code review
I think this is the most important phase in our process. Every company should have this phase if they write code and have more than one developer. Now a second developer reviews the code. Are our coding standards used? Are the tests complete and do they make sense? Finally when the second developer has approved the code our lead developer conducts a final review and merges the code in our trunk.
Phase 6: Acceptance testing
This phase was created in our process to do the integration tests. But as we don’t have a dedicated tester at the moment we skip this phase for now.
Signaling team feedback on tickets
The phases "analyse", "isolated testing" and "code review" must be reviewed by a second person. We visualize everything using our Kanban board. When a ticket in a particular phase is ready to be reviewed, a green magnet will be placed on the bottom left corner of a ticket. We add comments at the GitHub issue or at the pull request if there are code reviews comments. We place a small sticky at the bottom left corner of a ticket to indicate that there are comments. The name of the reviewer and the phase where the ticket was and when the comments are added are written on the sticky. Once the comments are solved, the sticky is moved to the right and the reviewer checks if the comments are correctly solved. If so he places the green magnet on the bottom right corner. If there are new comments he places a seconds sticky on the left and the process repeats itself until there are no comments. The small stickies always stay on the right corner of a ticket, this way we can see which tickets had a lot of problems and we should probably investigate why there are a lot of comments. Maybe the acceptance criteria where incompleted or the developer missed some insights. Now we have noticed the problem, we can try to learn from it, and avoid them in the future.
The Kanban board visualizes our process. This way we can detect problems in our process, learn from our mistakes and try to avoid them in the future. If you don’t visualize problems in your process you don’t know that they exists let alone that you can solve them!
Come and join us in our quest to build the best development team in the universe...and beyond!
|
OPCFW_CODE
|
wrong service time in code crunch
In the hidden test cases of code crunch, after looking at the test output, it seems that
customer 4 has service time of customer 6
customer 6 has service time of customer 4
customer 5 has service time of customer 8
customer 8 has service time of customer 5
customer 7 has service time of customer 9
customer 9 has service time of customer 7
and the error just continues...
However I cant seem to reproduce this error after trying a few different test.in.
The test.in in the pdf works fine.
My project 1 also works fine in code crunch (i only changed around 2-3 lines of code for lab 5).
Anyone has a clue what might be causing this?
my simulator runs roughly like this:
for i in inputTimes:
make a dummy customer with only arrivalTime
make a Arrive event, add it to PQ
Then pop all the event in PQ before the Arrive event
Pop the Arrive event from PQ
(if there is available server/queue then) replace customer with arrivalTime and ServiceTimeSupplier.get()
get the next event of Arrive and add it to PQ
clear whatever is left in PQ
Hi I encountered the same issue when doing this question. I think what the test cases expect is that the service time for customers are only retrieved when the customer is being served. Thus since customer 6 is served before customer 4, the service time of customer 6 should be retrieved first. What I did last time was retrieving the service time when waiting events are evaluated. This tends to produce errors in this case since customer 6's service time should be before customer 4's. Hope this helps!
Hi, if we can only retrieve the service time of customers during their service event, how would it be possible to generate the service events of customers that need to wait?
Say a customer X arrives and joins a queue, it would go from an arrival event to a wait event which is processed immediately. The wait event would then have to generate a service event but the time of this service event for customer X would depend on existing customers in the queue where the service event would start after all the customers in the queue are served. However, there would be cases where the customers in the queue have not been served and we cannot retrieve their service time yet. How would we then determine the start time of the service event for customer X without knowing how long the queue time is (because we don't know the service duration of customers in the queue)?
Hi I encountered the same issue when doing this question. I think what the test cases expect is that the service time for customers are only retrieved when the customer is being served. Thus since customer 6 is served before customer 4, the service time of customer 6 should be retrieved first. What I did last time was retrieving the service time when waiting events are evaluated. This tends to produce errors in this case since customer 6's service time should be before customer 4's. Hope this helps!
oh I see! thank you so much!
Hi @xBoommy , it is true that we can only know that service time when the service event is processed. Therefore I changed my previous structure for project 1 (which evaluates wait event to generate its serve events) to something like returning serve event from done events. In this way we can determine next serve event's time from the current timings.
@jingyiiiiz Thanks for the reply! I'll give it some thought.
hi, @jingyiiiiz I am facing the same problem. for my project 1 I too updated the server next available time when Arrive/Wait is evaluated to Serve. you mention returning Serve Event from Done Event. I am not sure how to go about this. Since there is already a PQ of all Arrival Events of all customers generated, if I return Serve Event from Done Event of another customer would that not result in duplicated events of one customer?
Hi @zhanyi3, so what I did was not returning the Serve Event of the same customer, but the serve event of the next customer in the waiting queue of that counter (ofc no serve event should be returned if no one is in the queue when the previous customer is done serving).
hi @jingyiiiiz, can I ask if Serve Event is returned from Done Event how to avoid cyclic dependency cos Serve is also returning Done Event returning? thank you!
hi @jingyiiiiz sry maybe I wasn't clear. I get that you meant the next customer but I am still confused because of the nature of PQ. for eg the case below
test case:
1 2 3
0.5
0.6
0.7
1.0
1.0
1.0
first event is polled from PQ:
---
0.5 1 arrives
0.6 2 arrives
0.7 3 arrives
---
0.5 1 serves by 1
0.6 2 arrives
0.7 3 arrives
---
0.6 2 arrives
0.7 3 arrives
1.5 1 done serving by 1
---
0.6 2 waits at 1
0.7 3 arrives
1.5 1 done serving by 1
---
before I return Serve of 2 from Done of 1, I can't reach the Done or rather I can't proceed without processing the wait due to the nature of PQ. Am I missing something?
Hi @liAiyujelly, yes I faced the same issue of getting cyclic dependencies because of the next event. So what I did was adding a kind of checker into simulator's simulate to check whether the event is a done event && there's a person waiting in queue, if so -> serve event is created and put into the queue. It works but I'm not so sure if this is a good design... If you have any other thoughts about the design maybe can share!
Hi @zhanyi3, ohh I see what you mean. What I did was that when processing the waiting event, I did not do anything (since we cannot extract service time during waiting event). Hope this helps!
ah I see... thanks! this cleared my doubts
@jingyiiiiz Hello!
Hi @zhanyi3, ohh I see what you mean. What I did was that when processing the waiting event, I did not do anything (since we cannot extract service time during waiting event). Hope this helps!
@jingyiiiiz hello! When you say you didn't do anything, does it mean that u don't call the nextEvent function of wait?
@BlabbyDuck Hi, yes that's what I did.
@jingyiiiiz thank you! I managed to solve the issue :D
|
GITHUB_ARCHIVE
|
Reliability regression with binary response data (probit analysis) with JMP
Jul 29, 2014 10:22 AM
Many readers may be familiar with the broad spectrum of reliability platforms and analysis methods for reliability-centric problems available in JMP. The methods an engineer will select – whether to solve a problem, improve a system or gain a deeper understanding of a failure mechanism – are dependent on many things. These dependencies could include whether the system or unit under study is repairable or non-repairable. Is the data censored, and if so, is it right-, interval-, or left- censored? What if there are no failures? How can historical data on the same or similar component be used to augment understanding?
I’d like to address a data issue specific to the response variable. The Reliability Regression with Binary Response technique can be a useful addition to the tools that reliability engineers or medical researchers use to answer critical business and health-related questions. For instance, when the response variable is simply counts of failures, rather than the much more commonly occurring response that is continuous in nature, alternate analytical procedures should be used. For example, say you are testing cell phone for damage due to dropping phone onto floor. You may test 25 phones each at various heights above the floor, e.g. 5 feet, 8 feet etc. Then you simply record the number of failures (damaged phones) per sample set. In a health related field, you may want to test the efficacy of a new drug at differing dosages, or compare different treatment types and record the patient survival counts.
The purpose of this blog post is to help you understand how you can perform regression analysis on reliability and survival data that has counts as the response. This is known as Reliability Regression with Binary Response Data, sometimes referred to as Probit Analysis. The data in Table 1 is a simple example from a class I attended at the University of Michigan a number of years ago. The study is focused on evaluating a new formulation of concrete to determine failure probabilities based on various load levels (stress factor). A failure is defined as a crack of some specified minimum length. Some questions we would like to answer include the following:
For a given load, say 4,500 lbs., what percent will fail?
What load will cause 10%, 25%, and 50% of the concrete sections to crack?
What is the 95% confidence interval that traps the true load where 50% of the concrete sections fail?
Table 1: Concrete Load Study
The data contains three columns. The Load column is the amount of pressure, in pounds, applied to the concrete sections. Trials are the number of sections tested, and Failures is the number of sections that failed as a result of crack development under the applied pressure. We will use JMP’s Fit Model Platform to perform the analysis. Depending on the distribution selection you choose to analyze your data with, I refer you to Table 2 below which will assist you in selecting the correct Link Function and appropriate transformation, if required, for your x variable.
Transformation on X
Table 2: Depending on your distribution, this table will guide you to the appropriate Link and Transformation selections in the Fit Model Dialog.
Open the data table and click the JMP Analyze menu, then select Fit Model. Once the dialog window opens, select the Load and Trials column and add to the Y dialog. Add Load as a model effect, and then highlight load in the Construct Model Effects dialog, click the red triangle next to Transform and select Log. Your model effect should now read Log(Load) as seen in the completed Fit Model dialog screen below. Select Generalized Linear Model for Personality, Binomial for Distribution since we are dealing with counts and Comp LogLog for the Link Function since we are using a Weibull fit for this example.
Figure 1: Completed Fit Model Dialog for fitting a Weibull in our example.
Next select Run. You will see the output in Figure 2:
Figure 2: Initial output with Regression Plot and associated output. Note the Log(Load) parameter estimate of 4.51 is the Weibull shape parameter.
So now let’s begin to answer the questions we posed at the beginning. To find out what percent of sections fail at a load of 4,500 lbs, go to the red triangle at the top next to the output heading Generalized Linear Model Fit. Select Profilers > Profiler. See Figure 3. Scroll down in the report window and drag the vertical red dashed line to select 4,500 for load, or highlight the load value on the x-axis and type in 4,500. You will see that at a load of 4,500 pounds, we can expect a 45% failure rate. The associated confidence interval may be of interest as well. With this current sample, results could range from as small as 29% up to as high as 65%.
Figure 3: Prediction Profiler with a load of 4,500 pounds.
Now, to find out what load will cause 10%, 25%, and 50% of the concrete sections to crack, we again go to the red triangle at the top of the report and select Inverse Prediction. You will see the following dialog in Figure 4. Type in 0.1, 0.25 and .50 to obtain results for 10, 25 and 50 percent, respectively.
Figure 4: Dialog for Inverse Prediction
Scroll down in the report where you will find the Inverse Prediction output. See Figure 5. The predicted load value, in pounds of pressure, for the B10 is 3055, B25 is 3817and B50 is 4639. A corresponding plot, which includes a visual representation of the confidence intervals, is also provided.
Figure 5: Inverse Prediction output.
Finally, we would like to find the 95% confidence interval that traps the true load where 50% of the concrete sections fail. Again, refer to the Inverse Prediction output in figure 5. We find that a lower bound of 3,873 up to an upper bound of 5,192 traps 95% of the true load where 50% of the sections fail.
JMP has numerous capabilities for reliability analysis, with many dedicated platforms such as Life Distribution, Reliability Growth and Reliability Block Diagram, to name just a few. However, as you can see here, you can perform other reliability and survival analysis methods that using other JMP analysis platforms.
|
OPCFW_CODE
|
On Wed, 2002-05-29 at 14:49, Marcia Abade wrote:
> Im very new user of PostgreSQL and I need to apply a patch whitten in
> C. Im looking for the information "how to apply patches in PostgreSQL"
> and it is not part of the manual.
> Could you help me? And by suggestion, could you include this information
> in the Administration manual?...
Applying a patch is not a task that is in any way specific to
PostgreSQL, so it doesn't, I think, belong in PostgreSQL's manual.
A patch is produced by using "diff -c" or "diff -u"; it contains file
pathnames like this:
@@ -27,6 +27,7 @@
$(PERLS) $(TCLS) $(SCRIPTS): %: %.in
sed -e 's,@MODULE_FILENAME@,$$libdir/$(NAME),g' \
-e 's:@SQLDIR@:$(datadir)/contrib:g' \
+ -e 's:# -\*- perl -\*-:#! /usr/bin/perl -w:' \
-e 's:@BINDIR@:$(bindir):g' \
-e 's:@LIBDIR@:$(datadir)/contrib:g' $< >$@
chmod a+x $@
@@ -224,22 +224,21 @@
# See $PGDATA/pg_ident.conf for more information on Ident maps.
...[and so on]...
The patch is applied with the command "patch" - check its man page.
If your current directory is postgresql-7.2.1, you will apply the above
patch with the command "patch -p1 </path/to/the/patch/file"; "-p1" tells
patch to drop one path element from the pathname of the files to be
patched; in this example, that would change the first path to
"contrib/rserv/Makefile". If there is no path element to drop, use
Oliver Elphick Oliver(dot)Elphick(at)lfix(dot)co(dot)uk
Isle of Wight http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C
"If any of you lack wisdom, let him ask of God, that
giveth to all men liberally without finding fault, and
it will be given to him." James 1:5
In response to
pgsql-docs by date
|Next:||From: Bruce Momjian||Date: 2002-06-02 22:19:37|
|Subject: Re: timestamp() functions still documented|
|Previous:||From: Marcia Abade||Date: 2002-05-29 13:49:26|
|Subject: HOW TO APPLY PATCHES|
|
OPCFW_CODE
|
the auto correcting always convert like ddd -> dd, very annoying, can we disable auto correcting by default?
should do it.
You can set some custom overrides for commands you use most often... but this feature will stay on by default (until I'm convinced it's annoying).
Check out this file for more details... you can set overrides in custom to your heart's content.
I think setopt correct would be a much more sensible default setting. setopt correctall will for example try to automatically correct any word in a command to some file or whatever else it'll find in your autocomplete path (user home-dirs, servers from .ssh/config, etc).
At least it's not autocorrect, or I probably would've started a damnyouzshcorrectall.com site by now ;p
It is annoying. On Ubuntu I installed 'tree' and then try to run it and it asks every time if I want it to correct to 'tee'. Now why would I want it to do that? :-) Thanks for an awesome shell but spell correct is for word processors, not terminals IMHO.
Thank you for the overrides info.
Thank you pstadler for the line to disable it. Dropped that in my .zshrc file and it turned that annoying thing off.
It's damn annoying.
It is annoying 💩
🐸 Very annoying. Please disable auto correct all by default. I have tens of user profiles and I always get burned by this and it is annoying to have to disable it or to remember to disable it after creating a fresh user profile.
I really only find this annoying when using bundler binstubs, has anyone found any of of getting the correction system to support binstubs?
Agreed that this is annoying. I think the more sensible default would be to have it off.
Agreed that this is annoying, keeps asking me if I want to correct git status to git stats...
count me into the annoying camp
@robbyrussell I think this might be worth taking a look at again.
After a recent oh-my-zsh update i keep getting auto correct suggestions despite having unsetopt correct_all in my .zshrc =(
@paulwittmann Likewise. I added the following to mine.
@unsymbol thanks so much, works like a charm!
Help! Help! Jane, stop this crazy thing! Jane!
@isimmons What's more annoying is that Zsh doesn't pick up on newly installed programs. Try tab-completing something you've just installed
@rummik That's a different issue. You can run rehash to let zsh find newly installed programs or PATH changes.
Is there a way to "star" issues on Github or somehow let the developers know that so many people are interested in an issue? Does the number of "watchers" show up to the developer?
@hrj Hm. Hadn't known about rehash. Seems like a wrapper for various package managers that does a rehash afterward would be handy.
My point was more that @isimmons' problem with completion was because Zsh didn't know about the program he had just installed, so it was correcting him
My concern about autocorrect behavior is that eventually, it will suggest something harmful, I'm not going to notice that my command has been rewritten, and I'm going to press enter. At that point it will have been my fault for having allowed autocorrect in the first place because that's what it does and that's the the ways it's always been.
@robbyrussell It seems that there is still a number of people that would prefer a change. Are you convinced that it's annoying yet?
@BBonifield Doubtful, this thread is 2 years old and it's still in.
@0x1A We need to stay strong. Change only comes to those who continue to fight for what is right.
Please disable or remove autocorrect by default.
Because it's so hard to type print "unsetopt correct\nunsetopt correct_all" >> ~/.zshrc
print "unsetopt correct\nunsetopt correct_all" >> ~/.zshrc
@rummik It's the only thing that annoys me in out of the box oh-my-zsh configuration. And of course it's hard to type, you need to google for it.
@sheerun man zshoptions
It's not just about the inconvenience of changing the defaults. The bigger problem is, the defaults are dangerous! OMZSH changes my command in a very unintuitive and unexpected way. I have, on occasions, deleted files due to this feature when I was new to OMZSH. Luckily, those files were version controlled, and I learnt the options to disable auto correct.
These days, I don't bother installing OMZSH into new user profiles. Of course, since it is open-source, I could fork it. But I am now wary of using such a tool, given that the creators are blind to potentially destructive features.
(sigh) I did like the project once upon a time.
@hrj I agree with you man, there's been too many times where the autocorrect is something completely unrelated and I press enter out of muscle memory. It's pretty annoying but I don't think it'll end up getting changed anytime soon.
Admittedly, I don't think I've seen this thread since I closed it. I typically only look at pull-requests.
Who wants to send one..?
@robbyrussell Pull request added, comments appreciated!
@robbyrussell Thanks for the quick merge!
I feel as a shadow has been lifted from my soul. Our long dark nightmare is finally over! Congratulations everybody!
Thank you @robbyrussell and @BBonifield
So this is why auto-correct stopped working!
@Drarok Ha, yeah looks like it.
You know I would leave it on if it was actually smart.
gitf etch --> git etch
Dammit, be smarter!!
I'm kind of not serious. enabling correction to mess with the arguments is probably not a safe thing to do.
I guess i'm switching it back off now.
|
OPCFW_CODE
|
Samba Videos App's
Now you can create APP's to consume our platform. To do this, you only need to use your imagination and follow our development standards.
1 - Always create a specific project where developers can create new app's and test them. See here how to create a new project.
2 - Authorize, in this project for testing, the users with "Developer" profile to create/maintain app's. For further information, see our article.
3 - Only users with Developer profile are able to create new app's, and only the Account Owner can "Enable" the created app's. After installing, the app will be automatically enabled for all users of the account.
Creating an App in Samba Videos
To create a new app in Samba Videos is quite simple. Follow the steps below:
1 - Access the menu App's and click the tab "Settings"
2 - In the app's management area, click button "+Create app"
3 - Fill out the form to create application correctly. Click "Create" to finish the process.
Now we're going to illustrate a standard Manifest (Json) to associate an app with Samba Videos. Json contains information about the app extension points.
Obs: The file with the code above is attached to this article, so you can download it.
After creating the app, a token (App ID) will be generated. It's a unique key for identification in Samba Videos.
The full URL of an extension point is determined by the application domain, concatenated with the prefix "sambaapps" and the "URL_path" extension point: <domain> / sambaapps / <URL_path> .
So, for example, for an app hosted at the domain "http://aplicativo-exemplo.com" and with extention point "/info", the full URL of the extension point will be "http://aplicativo-exemplo.com/sambaapps/info".
An app works as a Samba Videos extension. When accessing the extension point, the corresponding app page is displayed on a iframe in Samba Videos, allowing the user interaction with the app. Information on the user context, such as the current project ID, are provided in the extension points calls in the form of parameters (URL parameters). The available extension points are:
Extension of Samba Videos through tabs accessed from the top menu "App's". There's no restriction on the amount of tabs that a app can have.
Parameters sent in the extension point URL: "pid" (Project ID) and "user" (user email).
Extension of Samba Videos involving a specific media. This extension point adds an option to the option list of a media (drop-down menu on the right side of each media, in the list of contents) and in the menu on the top of the edition media screen.
Parameters sent in the extension point URL: "pid" (Project ID), "user" (user email) and "media_id" (Media ID).
Extension of Samba Videos involving multiple medias. This extension point adds an option to the menu "Actions" (drop-down menu at the top of the list of contents). To use it, simply select one or more medias of the list and click the extension point on menu "Actions".
Parameters sent in the extension point URL: "pid" (Project ID), "user" (user email) and "media_ids" (list of the selected media IDs, separated by ",").
Installation e Activation
After development, the app installation must be performed by the "Account Owner". The installation option is located in the option "App's" in the settings menu. After activation, the app will be automatically enabled to all users of the account.
The app activation for each user occurs by sending an activation request to the app, in the standard address: "<domínio>/sambaapps/activate", via GET, with the parameters "user" (encrypted user email) and "access_token" (user's access token).
"http://aplicativo-exemplo.com/sambaapps/activate?user=Encrypted User Email&access_token=USER TOKEN"
The app must return a response with status 200 of this request so the activation is concluded.
If the extension points are not visible for the users of the account after installation, it's possible that the activation failed for absence of an appropriated application response to the activation requests.
- Extension points will be only visible to an user after its first log in affter the app installation in the account (the activation request of the user for a newly installed app occurs in this user log in).
- When creating an app, the field "domain" is optional. To an app without a domain, it will be generated an access, but sending a request for activation does not occur and no extension points will be displayed.
- The developer will no longer be able to access the app after installation. o perform maintenance, the account owner must uninstall the application so that the developer can access it again.
|
OPCFW_CODE
|
First published on the the Sparx Systems Community site, July 2013.
This article makes frequent references to the Sparx Systems Enterprise Architect (EA) tool, but the observations are equally true of any UML-compliant modelling tool.
Seven ways to organise your EA models so that other people can understand them
If you have spent many hours creating a great EA Model, hopefully you want the rest of your organisation to use it as well. But how can you make it readable?
Or maybe have you just picked-up a model which you created a few years back, only to be baffled by your own work. What exactly was this model all about, and where has all that great stuff gone ?
Or perhaps you’ve inherited a model from someone else who isn’t available to tell you what’s in it. How are you supposed to sort out what’s complete and useful, from the ‘other stuff’?
Over the years, I’ve come across all of these several times, and have developed a few tricks to avoid them.
If you have more techniques for helping other people to understand your models, please email Ian at eaDocX dot com.
1 A Package is not a Bucket
The most important ‘thing’ in Enterprise Architect is definitely the Package. It’s also the simplest. Just a folder with stuff in it, right?
The Package or rather the family of Packages which you create, say more about your model than anything else. If you just use them as a bucket to put things, then you’re missing-out on a critical way to communicate the intent of your model.
Some rules for Packages:
- Sensible names. It may seem amusing to call a package ‘new stuff’, but nobody else will ever look there to find anything. If it’s ‘new stuff which was invented in the meeting..’ then call it that.
- Descriptions. A Package without a description isn’t just half-dressed, it’s practically naked. There is always something you can say about what’s in the package, where it came from, whether it’s finished or not.
I suggest that anyone who creates a package in a shared model and doesn’t add a description should buy the coffees for all of next week.
- Authors. Enterprise Architect will make the Author of the Package the person who created it. But go further, and make the Author the Owner of the information in it. So, even if someone is totally confused at what’s in the package, they can always email the author…
2 Notes, notes, notes
I’ve been teaching UML and other modelling techniques for more than 15 years, so apologies to all former students for repeating this. If you’re in that select group, can you remember the most important UML (or BPMN, or SysML..) modelling construct ?
The humble note.
They don’t cost anything, they never run out, and they can communicate more about whyyour diagrams look the way they do than anything else.
Add them to elements, to links, to anywhere you can think of. But make sure to keep them up-to-date: a diagram with misleading notes is worse than one with no notes at all.
3 Single-purpose Packages
If you’re going to follow the rules above, and describe what’s inside each package, then having one, or a small number, of different types of ‘thing’ in a folder is sensible: it’s easier to find things, and easier to write a quick description.
This also becomes important if you are going to document your model using a document generator – either RTF or eaDocX.
A Package with one type of thing in it can be documented as a simple table, with the Package name becoming a title for the table.
4 Different things get different stereotypes
The idea of the Stereotype is one of the key ideas of UML, which EA has extended to cover all the other model types it supports. So whether you’re creating SysML diagrams, BPMN business processes or Use Cases, you can use stereotypes.
So use them.
A stereotype is just a ‘special kind of’ thing. So if you have use cases which are sometimes complete (all scenarios filled-in) then make them <<fully dressed>>Use Cases, or if not <<partially dressed>> . So a reader finding one of these will know whether it will be completed or not: they know what to expect.
The same can be true of any other element. Using a stereotype can tell your readers what they are looking at.
Stereotyping also makes it easier for documentation tools like eaDocX to change how they format their outputs. For example, a <<fully dressed>> Use Case should print its scenarios, and highlight where they are missing – that’s an error. But <<partially dressed>>Use Cases don’t need to.
5 Status is everything, or Somewhere to Play
When you read a model, probably the most common problem is that you don’t know what the status of something is: a diagram, an element, or a whole package of the model.
Is this completed, signed-off and implemented, or just some ideas I had over coffee one day?
So using the EA ‘Status’ fields (with some sensible values) is really, really useful to readers.
But you can do more to help separate the ‘finished’ content from the ‘just thinking’ stuff.
Why not have an area of the model which is just a sandpit? Somewhere where modellers can try things out, and to which no standards apply. Readers are not encouraged to look in these packages. Everything is work-in-progress or incomplete.
Equally, the areas which are for ‘real’ content DO obey all the local rules: packages must have descriptions, only the approved stereotypes are used etc.
6 Public and Private diagrams
The great power of EA is that it allows us to create links between all kinds of elements, depending on what kind of problem we’re trying to solve.
There are several ways to create these links: the Relationship Matrix is a quick way, but diagrams are also very common. And this creates a problem for the reader.
Are they looking a ‘proper’ diagram, which they are supposed to understand, or is this a diagram which you just created to establish some relationships, and isn’t really for public use?
So get used to naming diagrams so that this is obvious, and to prevent accidental printing of these diagrams in documents.
Pick a naming convention for ‘do not print’ documents: we add ‘hidden’ in front of the document name. We’d like to use a diagram stereotype, but that doesn’t appear in the Project Browser. So ‘My untidy diagram’ becomes: ‘Hidden – my untidy diagram’. We also tick the box in the diagram properties to “Exclude image from RTF Documents”. Both the EA RTF generator and eaDocX will take this to mean ‘don’t print in any document’.
So now you’re free to create as many untidy diagrams as you like, and readers will know to ignore them.
7 Pick a meta-model, write it down, and stick to it
This final piece of advice is really a summary of all the others.
Each idea we’ve discussed above contributes to your meta-model.
If that sounds like a scary, super-technical idea, it isn’t.
All of your EA models already have a meta-model, whether you know it or not. The meta-model just says what kinds of ‘stuff’ is in your model.
- What kinds of elements have you used? e.g Requirements and Use Cases, but not internal requirements,
- How have you linked them together?
- What stereotypes have you used, and what does each one mean?
- How have you used things like Element Tests, the Glossary, or Project Tasks?
..so not really complicated. The meta-model is just your local modelling standards.
If you want to find out what your meta-model is, use the eaDocX Model Expert. It will draw a diagram of all the element types, stereotypes and links in your model. Be prepared for a surprise! Big models can be complicated!
This is a good reason to make your meta-model clear and simple. Pick a small number of elements, stereotypes and links, and use them consistently.
Communicating the meta-model is critical: one which only you understand is no use. It MUST be written down, preferably in the model itself, and taught to all of your team.
AND kept up-to-date, as your modelling style evolves, as it will certainly do.
|
OPCFW_CODE
|
What is IntelliJ IDEA?
Who uses IntelliJ IDEA?
IntelliJ IDEA Integrations
Here are some stack decisions, common use cases and reviews by companies and developers who chose IntelliJ IDEA in their tech stack.
I'm full stack with a focus on front-end, primarily React, and Angular. At my last company I was supporting both Java and open other source back-ends, IntelliJ IDEA met my needs perfectly. At my current company I need to support both open source and C# on the back-end. I have been provided a VS license and have been debating either using VS just for back-end C# work and continuing with IntelliJ for front-end, or switching to JetBrains Rider for fullstack? I've read that Rider is great for C# but I'm unsure if Rider will provide the same front-end capabilities that I currently enjoy with IntelliJ.
I'm currently working on a book about file structures. The text is written in LaTeX (with IntelliJ IDEA + TeXiFy) and the sample code is in Python (using PyCharm).
Since I use two IDEs, I have a distinct project for text and code.
I was thinking if I could join the projects in a single IDE, a that's my question:
- Should I use PyCharm and install the TeXiFy plugin,
- Should I stick to IDEA and install Python support to it, or
- Should I keep the two projects separated?
We are creating an IntelliJ IDEA plugin that uses JCEF web-view to show the UI by reusing the components from our earlier command line tool. Earlier we had created a command line tool where we had our frontend in React and backend in Spring Boot.
In order to create the plugin, we need a way to start both the backend (spring boot) and frontend (React) servers from the plugin itself. Basically, when the user clicks the plugin's icon in Intellij it should start both backend and frontend servers. Can anyone please suggest a way/resources to achieve this?
I have recently moved from C# and Xamarin to Python and IntelliJ IDEA. I finally have a grasp of python and want to start developing web applications with Django. Which IDE should I use?
Note: I have read that PyCharm is great but the community version only allows for basic web applications. Please help
My 2 questions: Does VS Code have Cucumber Plugins allowing me to write behave tests? And more importantly, does VS Code have the same refactoring tools that IntelliJ IDEA has? I love that I have easy access to a range of tools that allow me to refactor and simplify my code, making code writing really easy.
UPDATE: Thanks for the great response. I am going to start with VSCode based on the open source and free version that will allow me to grow into other languages, but not cost me a license ..yet.
IntelliJ IDEA's Features
- Smart Code Completion
- On-the-fly Code Analysis
- Advanced Refactorings
- Database Tools
- UML Designer
- Version Control Tools
- Build Tools
|
OPCFW_CODE
|
Copying and Replacing Values with Values Within the Same Variable Based on Condition
Sorry if the title is not specific enough, I'm imagining it in excel terms. I have a dataframe:
Product Group ... Score_Alpha Score_Beta
0 XXX0X1 Cinnamon ... 0.007598 0.007538
1 XXX0X2 Cinnamon ... 0.007598 0.007538
2 XXX0X3 Cinnamon ... 0.007598 0.007538
3 XXX0X4 Cinnamon Special ... 0.003343 0.002696
4 XXX0X5 Cinnamon Special ... 0.003343 0.002696
5 XXX0X6 Cinnamon Special ... 0.003343 0.002696
6 XXX0X7 Peach ... 0.003399 0.004444
7 XXX0X8 Peach ... 0.003399 0.004444
8 XXX0X9 Peach ... 0.003399 0.004444
9 XXX0X10 Peach Special ... 0.006677 0.006262
10 XXX0X11 Peach Special ... 0.006677 0.006262
11 XXX0X12 Peach Special ... 0.006677 0.006262
I need to replace the Score_Alpha and Score_Beta of lines where Group =='Cinnamon Special' with that of 'Cinnamon', the same between 'Peach Special' and 'Peach'. Basically, it should look like this:
Product Group ... Score_Alpha Score_Beta
0 XXX0X1 Cinnamon ... 0.007598 0.007538
1 XXX0X2 Cinnamon ... 0.007598 0.007538
2 XXX0X3 Cinnamon ... 0.007598 0.007538
3 XXX0X4 Cinnamon Special ... 0.007598 0.007538
4 XXX0X5 Cinnamon Special ... 0.007598 0.007538
5 XXX0X6 Cinnamon Special ... 0.007598 0.007538
6 XXX0X7 Peach ... 0.003399 0.004444
7 XXX0X8 Peach ... 0.003399 0.004444
8 XXX0X9 Peach ... 0.003399 0.004444
9 XXX0X10 Peach Special ... 0.003399 0.004444
10 XXX0X11 Peach Special ... 0.003399 0.004444
11 XXX0X12 Peach Special ... 0.003399 0.004444
My apologies if this sort of question has already been answered, my googling skills are questionable.
I have about 30+ unique values in Group with their own 'XXX Special' counterparts so I cannot manually group by specific values in the variable
Thank you for reading!
First, get the value of Score_Alpha and Score_Beta where Group =='Cinnamon'
scores = df.loc[df["Group"]=='Cinnamon',["Score_Alpha", "Score_Beta"]].iloc[0].tolist()
Second, put the scores to df where Group =='Cinnamon Special'
df.loc[df["Group"]=='Cinnamon Special', ["Score_Alpha", "Score_Beta"]] = scores
In the same way,
scores = df.loc[df["Group"]=='Peach',["Score_Alpha", "Score_Beta"]].iloc[0].tolist()
df.loc[df["Group"]=='Peach Special', ["Score_Alpha", "Score_Beta"]] = scores
Because there are 30+ unique values in Group with their own 'XXX Special', you can solve it with a function
# First, extract all group names without "Special"
names = [x for x in set(df["Group"].tolist()) if "Special" not in x]
# Second, define a function
def replace_values(df, name):
scores = df.loc[df["Group"] == name, ["Score_Alpha", "Score_Beta"]].iloc[0].tolist()
df.loc[df["Group"]== name + ' Special', ["Score_Alpha", "Score_Beta"]] = scores
# Third, iterate name in names
for name in group:
replace_values(df, name)
|
STACK_EXCHANGE
|
Here's an interesting edge case that I'd like to put out there in case anyone can help....
Lets say I have a secure feature service published by an arcgis server (for arguments sake let's say it's 10.4.1 if that matters).
I add that feature service as a stored item in AGOL with saved credentials. I can now add that item to web maps etc, so long as I'm logged into AGOL.
When you add a feature service to AGOL like this, it creates a proxy url, so to speak, that redirects to the actual underlying feature service url.
Now, let's say I'm using AppStudio to interact with that item/feature service. I log into AGOL using the 'Portal' object. I then try to do a 'fetch feature service info' using a 'ServiceInfoTask', using the proxy url as the ServiceInfoTask url. This fails because it doesn't have a valid token or authorization.
The proxy url it creates begins with "https://utility.arcgis.com/usrsvcs/servers/....etc". I tried adding that url to the identity manager using the portal credential object but that didn't work. Obviously I can't hit the source url of the actual portal feature service as the credential for that is stored in AGOL.
Any ideas how I might access the underlying feature service, via the AGOL stored item?
So basically you are referring to a hosted secured feature service. Did you end up sharing the item to public? If you did then your underlying secured feature will work as the credentials (for the actual arcgis secured service) are stored within the item. And you should be able to access it.
If you don't want this way. Then you have to go through the OAuth2 Authentication, because at this point it is not about ArcGIS Server secured service. I would recommend you to have a OAuth authentication login page for your app. Then you should be able to access the item
So, if I understand correctly, what you are saying is that the AGOL won't forward the appropriate request on to the arcgis server using the stored credentials, and that the app would also need to authenticate directly for the arcgis server (e.g. using OAuth). So the user would need to authenticate once for AGOL and once for the server - so no real point in adding the item to AGOL in the first place - is that correct?
Or, if the argis server and AGOL were both configured to use the same OAuth identity provider then we would be able to get away with a single sign on then....
(to clarify, sharing the data publicly is not the intent. The intent is for the the data to be consumed by logged in AGOL users).
Actually, there will be only single sign in using the Portal named user in your app and the secured service should work without any further sign in since the credentials are stored within the item. Just like it does using the ArcGIS online or Portal.
I don't have Portal and I am unclear why you are adding your secured feature service to AGOL, so I might be missing something, but maybe this will help. You can access an ArcGIS Server secured feature service directly from your AppStudio app and add a proxy to your web server to handle the security.
Here's directions on how to download and install proxy: resource-proxy/DotNet at master · Esri/resource-proxy · GitHub That's where you put your credentials, token url for the secured feature service, and allowed referers. (ArcGIS Server secured feature service credentials, not AGOL credentials)
I'm not sure what the best practice would be for implementing in AppStudio app, but right now I'm just using this format for the "featureServiceURL" in the appinfo.json file:
|
OPCFW_CODE
|
Attend the Linley Processor Conference November 1st
Listen to Flex Logix Co-Founder Cheng Wang’s talk November 1st: A High Performance Reconfigurable Neural Accelerator with Low DRAM Bandwidth. More information HERE.
New EFLX4K AI eFPGA Core Optimized for Fast, Deep Learning
>10K more gigamacs/second THAN any fpga/efpga!
FPGA chips are in use in many AI applications today including Cloud DataCenters. (see the next section for a tutorial on AI math)
Embedded FPGA (eFPGA) is now becoming used for AI applications as well. Our first public customer doing AI with EFLX eFPGA is Harvard University who will present a paper at Hot Chips August 20th on edge AI processing using EFLX: "A 16nm SoC with Efficient and Flexible DNN Acceleration for Intelligent IoT Devices."
We have other customers whose first question is "how many GigaMACs/second can you execute per square millimeter"?
The EFLX4K DSP core turns out to have as many or generally more DSP MAC's per square millimeter relative to LUTs than other eFPGA and FPGA offerings (For example, the Xilinx VU13P has 1 DSP for every 300 LUT4s; EFLX4K DSP has 1 DSP for every 75 LUT4s), but the MAC was designed for digital signal processing and is overkill for AI requirements. AI doesn't need a 22x22 multiplier and doesn't need pre-adders or some of the other logic in the DSP MAC.
In response to customer requests we have architected an new member of the EFLX4K family, the EFLX4K AI core, optimized for deep learning, which has >10x the GigaMACs/second per square millimeter of the EFLX4K DSP core! The EFLX4K AI core can be implemented on any process node in 6-8 months on customer demand and can be arrayed interchangeably with the EFLX4K Logic/DSP cores.
A single EFLX4K AI core has the same number of inputs/outputs of all cores in the EFLX4K family: 632 in and 632 out, each with an optional flip-flop.
The EFLX4K AI core has 8-bit MACs (8x8 multipliers with accumulators) which can also be configured as 16-bit MACs, 16x8 MACs or 8x16 MACs as required, reconfigurable. Each core has 441 8-bit MACs which can run ~1Ghz at worst case conditions (125C Tj, 0.72Vj, slow-slow corner) for ~441 GMacs/second for each EFLX core. This compares to 40 MACs at ~700MHz at worst case conditions for the EFLX4 DPS core which is 28GMacs/second. The EFLX AI core has >10x the MACs/second per square millimeter!
The EFLX4K AI core is the same width as the EFLX4K Logic/DSP cores and ~1.2x the height. A 7x7 EFLX4K AI array has >20 TeraMACs/second at worst case operating conditions. A 4x7 array of EFLX4K AI cores has more MACs than the largest Xilinx FPGA (which is probably multiple die in one package) but fits in 28 square millimeters. A EFLX4K AI cores can be arrayed up to at least 7x7. And they can be mixed interchangeably, by row, with EFLX4K Logic/DSP cores. A customer can design an EFLX array with the number of MACs and amount of control logic required for their neural network applications.
A target spec for the EFLX4K AI core can be downloaded HERE. This target spec is in discussion with customers and may change based on customer inputs and requirements.
Basics of Neural Network Math Operations
Below is a very simple neural network graph. The input layer is what the neural network will process. For example, if the input layer were a 1024x768 picture, there would be 1024x768 = 786,432 inputs each with an R, G and B component! The output layer is the result of the neural network: perhaps the neural network is set up to recognize a dog versus a cat versus a car versus a truck. The hidden layers are the steps required to go from the raw input to achieve a high confidence output: typically there are many more layers than this.
What are all the lines between the circles? A Neural Network is an approximation of the neurons in a human brain which receive inputs from dozens or hundreds of other neurons then generate their own output. In the example above, the first hidden layer has 7 "neurons": each neuron receives a "signal" or input from 5 inputs of the input layer.
Mathematically, the hidden layer neuron value is computed as follows: [input1*weight1n + input2*weight2n + input3*weight3n + input4*weight4n + input5*weight5n] -- see the red highlighted vectors to the right -- then this value is passed through an activation function which generates the final result for the first hidden layer neuron.
Converting all of the inputs to the first hidden layer can be represented then as a matrix multiply of the input vector times a matrix of weights. In the matrix multiply to the right, x is the input layer vector, A is the weights matrix and the result is the value of the hidden layer #1 which is then fed through the activation function.
In neural networks there are two phases: a training phase where the neural network is "trained" to produce the appropriate desired outputs from the inputs. Typically training is done using GPU and floating point math: training requires a very large database of inputs and very large processing power to achieve a neural network that can achieve the desired purpose. It is the training phase which generates the weights. For the neural network above, there is a matrix of weights for each layer.
Once the weights are generated, the neural network can be used to classify inputs: this is called inference. Inference is done using integer math with 16-bit value, 8-bit values or even less.
For each layer, there is a large matrix multiplication that takes place followed by an activation function operation. The mathematical operation that dominates is the matrix multiply, so that is why we often hear the question "how many GigaMACs/second can you do?". The matrix sizes and weights can be very large: millions of entries or 10's of millions. The hardware is not going to map the neural network of this size one-for-one, it would be too big. Instead, large matrix multiplies can be done by a series of smaller block matrix multiplies, which themselves can be done as a series of vector multiplies row times column, as shown below.
In the EFLX4K AI, the MACs are arranged in rows with one MAC having direct pipeline connection to the MACs on either side enabling a rapid multiplication/accumulation series which is equivalent to the row times column vector multiply above. The MACs can pipeline as well "jumping" from one EFLX AI core to the next for very long vector multiplies.
The biggest value of eFPGA in AI is the ability to reconfigure: different neural networks have different configurations, and algorithms are changing rapidly so the ability to evolve the hardware configuration is huge.
This is a simple summary of the matrix math in neural networks but serves to highlight the value of dense, fast, pipelined MACs in the EFLX AI.
|
OPCFW_CODE
|
namespace Endpoints {
export class AngularEndpointsService {
static $inject = ['$http'];
static $http: ng.IHttpService;
constructor($http: ng.IHttpService) {
AngularEndpointsService.$http = $http;
}
static call<TView>(endpoint: IEndpoint, data) {
var call = AngularEndpointsService.$http<TView>({
method: endpoint._verb,
url: endpoint.toString(),
data: data
});
return call.then(response => response.data);
}
public Test = {
Get: (args: Endpoints.Test.IGet): Endpoints.Test.IGetWithCall => {
var endpoint = new Endpoints.Test.Get(args);
return _.extendOwn(endpoint, {
call<TView>() {
return AngularEndpointsService.call<TView>(this, null);
}
});
},
Get1: (args: Endpoints.Test.IGet1): Endpoints.Test.IGet1WithCall => {
var endpoint = new Endpoints.Test.Get1(args);
return _.extendOwn(endpoint, {
call<TView>() {
return AngularEndpointsService.call<TView>(this, null);
}
});
},
GetSomething: (args: Endpoints.Test.IGetSomething): Endpoints.Test.IGetSomethingWithCall => {
var endpoint = new Endpoints.Test.GetSomething(args);
return _.extendOwn(endpoint, {
call<TView>() {
return AngularEndpointsService.call<TView>(this, null);
}
});
},
GetSomethingElse: (args: Endpoints.Test.IGetSomethingElse): Endpoints.Test.IGetSomethingElseWithCall => {
var endpoint = new Endpoints.Test.GetSomethingElse(args);
return _.extendOwn(endpoint, {
call<TView>() {
return AngularEndpointsService.call<TView>(this, null);
}
});
},
Post: (args: Endpoints.Test.IPost): Endpoints.Test.IPostWithCall => {
var endpoint = new Endpoints.Test.Post(args);
return _.extendOwn(endpoint, {
call<TView>(value: Interfaces.IDummyClass) {
return AngularEndpointsService.call<TView>(this, value != null ? value : null);
}
});
},
Put: (args: Endpoints.Test.IPut): Endpoints.Test.IPutWithCall => {
var endpoint = new Endpoints.Test.Put(args);
return _.extendOwn(endpoint, {
call<TView>(value: string) {
return AngularEndpointsService.call<TView>(this, value != null ? `"${value}"` : null);
}
});
},
Delete: (args: Endpoints.Test.IDelete): Endpoints.Test.IDeleteWithCall => {
var endpoint = new Endpoints.Test.Delete(args);
return _.extendOwn(endpoint, {
call<TView>() {
return AngularEndpointsService.call<TView>(this, null);
}
});
}
}
public Thingy = {
GetAll: (args?: Endpoints.Thingy.IGetAll): Endpoints.Thingy.IGetAllWithCall => {
var endpoint = new Endpoints.Thingy.GetAll(args);
return _.extendOwn(endpoint, {
call<TView>() {
return AngularEndpointsService.call<TView>(this, null);
}
});
},
Get: (args: Endpoints.Thingy.IGet): Endpoints.Thingy.IGetWithCall => {
var endpoint = new Endpoints.Thingy.Get(args);
return _.extendOwn(endpoint, {
call<TView>() {
return AngularEndpointsService.call<TView>(this, null);
}
});
},
Getty: (args: Endpoints.Thingy.IGetty): Endpoints.Thingy.IGettyWithCall => {
var endpoint = new Endpoints.Thingy.Getty(args);
return _.extendOwn(endpoint, {
call<TView>() {
return AngularEndpointsService.call<TView>(this, null);
}
});
},
Post: (args?: Endpoints.Thingy.IPost): Endpoints.Thingy.IPostWithCall => {
var endpoint = new Endpoints.Thingy.Post(args);
return _.extendOwn(endpoint, {
call<TView>(value: Interfaces.IMegaClass) {
return AngularEndpointsService.call<TView>(this, value != null ? value : null);
}
});
}
}
}
}
|
STACK_EDU
|
Why haven't the property of present-absentees not been returned?
In the essay, Israel-Palestine & the Aparthied analogy: critics, apologists & strategic lessons by Ran Greenstein, associate professor in the university of Witwatersrand, he writes:
First, the present absentees - about 25% of the Palestinian population in Israel itself who were removed from their original homes in 1948 but have become citizens - must be allowed access to their property and their confiscated land. This would have no demographic implications and would not involve changes in citizenship status.
What are the proportion of present-absentees now in 1948 compared to today? Are accurate figures actually available, or are there only estimated projections due to the political sensitivity of the nature of this information?
what rationale does Israel offer for not returning the property of present-absentees, given as Greenstein notes, that they are citizens of Israel?
has any property been returned, and if so - how much?
Is the first question really relevant as-is? I doubt that the numbers have changed much in 2 years. On the other hand, the essay is from 2010, not 2015 (and I doubt that the 25% refers to the exact numbers in that year; wikipedia has the same 1/4 claim, which is sourced to a number calculated using the original 30-40 thousand internally displaced people and the average Palestinian growth rate). Instead, you might want to ask for accurate numbers without restricting them to a certain year (it will make answering a lot easier).
"about 25% of the Palestinian population in Israel itself who were removed from their original homes" - is there any proof that they were "removed" as opposed to left on their own decision (due to their Arab brethren starting a war of aggression to destroy the newly created state of Israel)?
@tim: good points; the essay was taken from a book published in 2015 which is why that date; and for all I know, the situation might have changed substantially since then; even more so when you've pointed out the essay was actually published in 2010.
Moreover, the question doesn't explain what is being asked above what's covered on Wikipedia: https://en.wikipedia.org/wiki/Israeli_land_and_property_laws#The_.27Absentees_Property_Law.27
@user4012: isn't the original 'aggression' by the European Jews who formed a compact with the then British Empire to institute a 'National home for the Jewish People' in what was then the Ottoman province of Palestine? See the Balfour Declaration of 1919.
@user4012: feel free to answer the question using the resources you've identified...
@user4012: it said 'a national homeland' not a nation-state; I take it was a decision of the leaders of the Zionist movement to take this next step - why is that not understood as 'act of aggression'? Could you explain this - perhaps by analogy with what's happening in Catalonia?
@user4012: for example the preamble of the League of Nations mandate document for Mandatory Palestine has:"Whereas the Principal Allied Powers have also agreed that the Mandatory should be responsible for putting into effect the declaration made originally on Nov. 2nd, 1917 by His Britannic Majesty, and adopted by the said Powers, in favour of the establishment in Palestine of a natuonal home for the Jewish People, it being clearly understood that nothing should be done which might prejudice the civil and religious rights ...
... of existing non-Jewish communities in Palestine"
@user4012: It's interesting too that they used the term Palestine in this document, don't you think?
Because the "absentees" chose to abandon that property, often so as to assist the planned genocide of Jews?
The absentee laws were enacted in 1950, at a time when Palestinians in Israel lived under a military government. So it wasn't a given that they should enjoy the same rights as other citizens.
Furthermore, many Palestinian refugees tried to sneak over the armistice borders to return to their former homes. This lead to violence on both sides of the border and Israel retaliated to try and discourage this infiltration.
In that situation, it simply would have been impossible for Israel to determine which Palestinians were internally displaced persons and which were infiltrators. Therefore the laws were written as they were.
Today, it is not politically feasible to revisit and try to rescind provisions of these laws. Because if it was not right to confiscate "present absentees" properties, was it right to confiscate "absent absentees" properties?
|
STACK_EXCHANGE
|
Programmable Music Tools #
Before getting into this list, I also want to point out that there’s an endless number of programmable music sequencing tools which can be found built into VST plugins, larger audio software, and even video games. I’m particularly fond of the huge number of interesting sequencers for VCV Rack, such as Quad Algorithmic Rhythm Generator, Entropia, Fate, Marbles, and Orca’s Heart - just to name a few.
This list absolutely can not have everything. Still, I’ve tried to highlight some of the more novel ideas.
It’s also worth mentioning, there are very interesting hardware tools such as the monome norns, and Toroso T-1
Finally, if you’re into VJ work, you may want to check out The big list of generative art tools in the Design section of this website. If you’re looking for hardware for this role, you may want to check out the hypno by Sleepy Circuits or the Critter & Guitari Eyesy.
Sonic Pi is a code-based music creation and performance tool.
Extra Tools for ORCA:
Chuck is a programming language for real-time sound synthesis and music creation. It is open-source and freely available on MacOS X, Windows, and Linux. Chuck presents a unique time-based, a concurrent programming model that’s precise and expressive (we call this strongly-timed), dynamic control rates, and the ability to add and modify code on-the-fly. In addition, ChucK supports MIDI, OpenSoundControl, HID device, and multi-channel audio.
enables live coding in Ableton Live’s session view. Set up transformations that trigger whenever a source clip is changed, including arpeggiation, shuffling, and ratcheting/retriggering.
Max is much friendly and more useful than Pure Data, though it’s also not FOSS.
Both are visual programming environments which are rather low level (much lower level than VCV Rack, for example) and so are less useful in a live context; however, they’re extraordinarily powerful for making your own instruments which you can play live.
The reason I’ve grouped them together is that both Max and PD share the original author, Miller Puckette; however, it’s very clear that PD has more or less stagnated while Max has grown
This gif was ripped directly off of https://cycling74.com/products/max
ZeroBrane Studio is a lightweight Lua IDE with code completion, syntax highlighting, live coding, code analyzer, and debugging support for Lua 5.1
Moonlet: Lua live coding. It only works on Linux and Windows.
Live coding music with Algorithmic patterns
Tidal Cycles (or ‘Tidal’) for short is free/open source software written in Haskell. Tidal is using SuperCollider, another open-source software, for synthesis and I/O.
Tidal Cycles allows you to make patterns with code. It includes a language for describing flexible (e.g. polyphonic, polyrhythmic, generative) sequences of sounds, notes, parameters, and all kind of information.
Mosaic, an openFrameworks based Visual Patching Creative-Coding Platform
Collaborative Programmable Music
Overtone is an open source audio environment designed to explore new musical ideas from synthesis and sampling to instrument building, live-coding and collaborative jamming. We combine the powerful SuperCollider audio engine, with Clojure, a state of-the-art lisp, to create an intoxicating interactive sonic experience.
Synchronize your visuals and noise with ease. Overtone features seamless integration with both Quil, a Clojure front-end to Processing and ShaderTone, a Clojure version of ShaderToy an OpenGL GLSL shader programming environment.
An alternative and fun way to make interactive music in your browser.
FoxDot - a Python-based language and editor for making music
Siren, is a tracker interface that embodies abstractions where programming is realized as the medium for pattern sequencing in a modular fashion. It is based on a hierarchical structure that consists of scenes and channels. Separate channels have independent patterns; a complete song consists of a master list of repeated patterns.
Supported programming languages :
“Extempore is a programming language and runtime environment designed to support cyberphysical programming”
Nestup is an experimental markup language for musical rhythms. It’s specifically designed to break away from a fixed musical grid.
The name is a contraction of nested tuplets, which are hard to program on a piano roll but easy to notate with Nestup.
Audulus is a modular music processing app with unequaled ease of use.
|
OPCFW_CODE
|
Hacking the Ractiv Touch+
When we at Nerdiacs first saw the kick starter project for Touch+ I was more interested in the hardware for the project then the software. Wide FOV stereo cameras in a neat little usb bundle isn’t bad for doing some vision based research.
After receiving a few pieces of the touch+ hardware we realized that the project was stuck in development limbo, no drivers, no access to hardware and no response to the community and hence I decided to hack the touch+ video stream myself.
I started off with the only executable shared to the public by Ractiv, hidden under facebook comments to test the ractiv hardware. It proved that atleast my hardware isn’t faulty and something can be done with it.
I started off by figuring out what they were using to pull the video data from the device. My first guess was the UVC library which is used by a lot of devices under Mac and Linux, but a quick test by deleting the library and running the executable showed that it wasn’t using anything from the UVC lib. Just a remnent from experiments perhaps.
My second guess was Direct Show. Running the app and running GraphEditorPlus (which is an awesome tool for direct show first timers like myself) which has an option to connect to running direct show instances I figured out that it was using direct show indeed.
But if it was indeed using direct show to video streams, then why was by graph returning null data?
I figured the shortest way to figure out the video stream was to disassemble the Camera Viewer app. I used IDA by Hex Rays to disassemble the code and the cherry on top was the fact that the executable had debug symbols in the directory itself.
Going through the assembly code I noticed that the app was calling a function called do_unlock_software when opening the camera stream and calling functions like eSPAEAWB_SWUnlock.
Analyzing the disassembled file showed that the camera was actually unlocking the camera before creating the direct show streams. This made sense and was a reasonable explanation for the null stream i was getting.
To test my theory, I put a debug point at the end of the do_software_unlock function and ran the application. When it paused on the breakpoint, I ran my direct show graph I had created in GraphEditPlus and voila the stream started working.
Now the only task was to pull these functions into my own code before creating the direct show graph and we should have the video stream. I looked around at what exact functions where implemented in the unlock function and all calls were being done to 2 dll files. eSPAEAWBCtrl.dll and eSPDI.dll.
Now, all I needed to do was to implement functions from the DLLs into my code to unlock the stream before creating the direct show filters. Todo that I first exported the headers for the dlls using dumpbin which is accompanied in visual studio.
dumpbin /exports <name of dll>
Having the parameter made it easy to figure out parameter using just intuition, but the second DLL eSPDI.dll had undecorated functions and returned this:
So how do we figure out the parameters being passed, and even the number of parameters being passed and return from each function? To do that we need to go back to the disassembly and figure out hints from the assembly code.
This snippet on the left is calling the function EtronDI_Init and EtronDI_GetDeviceNumber.
When calling the function to pass the parameters the assembly code would need to push the variables onto the stack before going into the function. As we can see above before calling the function EtronDI_GetDeviceNumber the code is pushing just one variable the pHandle variable onto the stack.
Thanks to the debug symbols we already know pHandle is a void* pointer, so all the function definition for EtronDI_GetDeviceNumber would be:
But when calling the function EtronDI_Init the assembly code is calling push offset pHandle. The push offset function actually pushes the pointer to the object. Hence the definition would be:
With this logic, we can figure out the parameters alright but what about the return type? In assembly when we return from a function, the return object is always on top of the stack. Hence the line after calling EtronDi_Init:
add esp, 4
This offsets the stack pointer by 4 bytes meaning that the return type has a size of 4 bytes. This means the return is either a 32 bit pointer or an integer type, intuition tells us it should be an integer return type which fits quite nicely when implemented.
Using this flow of logic I was able to deduce the function calls to all the functions being used by do_software_unlock.
A note when calling the dll functions, the eSPAEAWBCtrl dll uses __stdcall calling convention whereas the eSPDI dll uses the __cdecl calling convention.
Even after calling all the functions in the correct order the stream continued to return null which I later found out was because the function eSPAEAWB_SWUnlock requires the value “263" to be passed to it instead of the device ID which would make intuitive sense.
After unlocking the video stream, I connected the Direct Show graph to the code and finally got access to the video stream.
The code for the hack is available here:
|
OPCFW_CODE
|
Penny: Today’s post is by Hein Aucamp, Director, WA Integrated Asset Management, and member of our Perth City Chapter, WA, He considers what infrastructure decision making today might gain from what he has learnt from years in the volatile area of Information Technology.
I spent many years developing Engineering and mapping related software. Rapid change, which is becoming apparent in infrastructure, has always been a feature of Information Technology, where the knowledge half-life is said to be 2 years. I trust that Infrastructure Decision Making will never become as volatile as that but it will certainly be less stable than it has been in the past.
Here are my suggestions of translatable lessons. One can cope much more easily with a volatile discipline by dividing it into two broad categories: first, timeless principles; second, current methods to achieve the timeless principles. We need to be able to distinguish between them, to hold tightly onto the timeless principles, and also to be lightly invested in what is temporarily useful.
To get our thinking started, I suggest that these are some of the timeless principles:
• Infrastructure must solve a human need. This sounds as if it is stating the obvious, but think about a situation where infrastructure solves a narrow need but creates a broader pernicious problem: environmental damage, economic hardship for generations, etc. The timeless principle here is a healthy understanding of the service needs we are trying to solve, and the trade-offs involved in our proposed solutions – and indeed a calculation about whether in some cases it would be better to live with the problems.
• Infrastructure creates a long-term responsibility, with unforeseen obligations that might emerge only over time – arising through legislation, reporting requirements, or safety issues. Think for example of the difficulties of managing asbestos, which was once very much in favour. Once built, society will tend to want to renew infrastructure.
• Infrastructure will always have a political aspect. Public infrastructure requires funding by governments and will be attractive during electioneering. Suggestion: an independent central banking system has immunised monetary policy from most aspects of political influence except commentary. Are some lessons for infrastructure possible here? I am of course not suggesting a command economy, but perhaps a broad consistent policy climate could be created – analogous to the way objective renewal programs are supposed to stop jockeying for resealing of roads for influential people.
On the other hand, here are some of the temporary, current methods at our disposal to deal with the timeless principles. We know they are capable of adjustment, because they have been adjusted in the past.
• Funding mechanisms.
• Contrasting promises during elections.
• Best practices.
Do you agree with this division? With the ‘timeless principles’?
How can we apply these lessons from Information Technology?
What other areas might we draw upon for valuable lessons?
|
OPCFW_CODE
|
I might switch from WHMCS to Blesta, but I want to know if it worth it ?
I use WHMCS for Resellerclub, cPanel and SolusVM only.
Also is there a script to transfer everything from WHMCS to Blesta ?
Blesta is more secure than WHMCS and allows you to use otp with google authenticator on for extra security. It is getting better and more advanced but it is still limited but whmcs is a lot older.
Theres a migrator script and all of the extensions are avaliable. I was sketchy with moving at first and stayed with WHMCS but I am finally at Blesta.
I do think it is good though
★ 120Gbps DDoS Protection on Shared Hosting, Reseller Hosting, XEN Windows and Linux VPS, Domains, SSL Certificates
★ PayPal | BitPay (BitCoin) | UK Bank Transfer | International Bank Transfer
Like many here would say, all software are vulnerable if exploiters purposely exploit these software and it depends on how actively/quickly the developers fix the vulnerabilities. However, as far as I know, Blesta's developers were very quick to fix the previous vulnerabilities.
Is more secure in the sense that it is 99 open source (or around 90).
I don't think Blesta being more secure than their competitors has anything to do with how much of the code is un-encoded.
The biggest factor is that the developers of Blesta understand security, they know safe coding practices, they know how to prevent SQL injection, how to sanitize input, etc. They write beautiful code, they are responsible developers by every definition of the phrase and should be given a ton of credit for that.
Security doesn't come from whether or not the code is viewable to the general public, it comes from the person(s) who wrote the code in the first place that made it secure.
We're actually using Blesta for our upcoming website in regards to our software security auditing business. That says a lot.
Certainly we are a huge target in this industry and will never claim we are "hacker proof" but it was a no brainer for us when it came time to decide on a billing / support platform accessible to the public in regards to current security worthiness.
What you wrote is very true, but my point is not that.
I think that we all know that when developers protect /hide their code, it raise the possibilities of having bad coding practices and flawed logic. That's why normally open source projects tend to be more clean, more polished, more secure. I think that I don't have to explain the previous...
While you may contradict the following, I'll say it: Security is relative. What we know today as secure tomorrow is not. Therefor a developer might think that his code is awesome, top notch secure, but you may differ because maybe you know a lot more, but he doesn't realize that, so for him the code is what you said "Beautiful".
+1 for Open Source at least for billing systems ...
I'm quite impressed with Blesta. We were looking at developing our own billing solution because we're using WHMCS and it's actually preventing us from automating a lot of features since we already use our own VPS control panel.
|
OPCFW_CODE
|
new WOW().init();
var typed=new Typed(`#type`,{
strings:[" Triangle Tracker "," An application that allows Users to add triangle measurements and returns the type of triangles "],
backSpeed:70,
typeSpeed:80,
smartBackspace:true,
loop: true,
})
// Animations init
new WOW().init();
function result() {
var confirmation=[];
confirmation.push(parseFloat(prompt("Enter first side: ")));
if (isNaN(confirmation[0])=== true){
alert("Please enter a number in the field");
}else{
confirmation.push(parseFloat(prompt("Enter second side: ")));
if (isNaN(confirmation[1])=== true){
alert("Enter a valid number in the field");
}else{
confirmation.push(parseFloat(prompt("Enter third side: ")));
if (isNaN(confirmation[2])=== true){
alert("Enter a valid number in the field");
}else{
if((confirmation[0]+confirmation[1])<=confirmation[2] || (confirmation[1]+confirmation[2])<=confirmation[0] || (confirmation[0]+confirmation[2])<=confirmation[1]){
alert("Not Triangular in Shape");
}
else if(confirmation[0]===confirmation[1] && confirmation[0]===confirmation[2] && confirmation[1]===confirmation[2]){
alert("The triangle is Equilateral");
}
else if(confirmation[0]===confirmation[1] || confirmation[1]===confirmation[2] || confirmation[0]===confirmation[2]){
alert("The triangle is Isosceles");
}
else if((confirmation[0]+confirmation[1])>=confirmation[2] || (confirmation[1]+confirmation[2])>=confirmation[0] || (confirmation[0]+confirmation[2])>=confirmation[1]){
alert("The triangle is Scalene");
}
else{
alert("Please enter a number");
}
}
}
}
};
|
STACK_EDU
|
Let’s begin the list with a look at the top emulators for desktop computers. All of these support Windows out of the box, and a few even come with macOS support. In theory, you must own the game in order to have a ROM legally. Nowadays, people have thousands of ROMs on their computers without any problems. It is the same as downloading music or watch movies on the internet.
This version is a bugfix release, which contains many stability and accuracy fixes. Notably, an issue leading to stuttered rendering and eventually a crash, mostly on AMD GPUs, has been fixed. However, there is an outstanding bug in all 0.8 versions that causes flickering in Advance Wars games. This can be worked around by using a dump of the official BIOS, or using 0.7.3 until it is fixed.
How To Open A Gba File
It is only legal to play games you actually own on VGBA. You will first need to read a game from its cartridge into a file. This can be done with an inexpensive gadget such as Flash Advance Linker.
However, if you want to protect your privacy, you can use a VPN application. At this point, you are supposed to decide which emulator to download. We ask you to unzip the file using a program called WinRAR, which is very easy to operate.
It is better to use an official trial version for free. Note that this trial version lasts indefinitely, so it makes no sense to purchase a full version. Sometimes when you install the emulator, you get a .exe file downloaded, which is self extracted. In order to launch the installation process, you just have to click it two times. Then you will need to select the folder where you want the emulator to be installed.
The app is starting to get a little old, and it hasn’t been updated in a while. EmuBox supports Nintendo DS, Playstation, SNES, Game Boy Color,andGame Boy Advance games, just in https://romsdownload.net/roms/nintendo-pokemon-mini case you need to addCrash Bandicootto the mix. It’s also designed for Tool-Assisted Speedruns, so you can use slow-motion, frame-by-frame advance, and save states to record yourself playing the perfect game.
- They are commonly played now on emulators, which use files taken from the original ROMs of the games.
- The next are the sections for who love old-school games (GB/GBC) and PC games.
- The original versions were released on the Nintendo GameBoy.
I did not implement the BIOS protection in my emulator so when the game tries to access the BIOS region, it actually succeeds in reading what’s in there. The value returned in this case happens to make the game freeze. Other than that, the BIOS also provides system calls for GBA programs via software interrupts, so most emulators require the user to have it as well as their game ROM.
As I said in the beginning, a tremendous amount of time is spent on Reverse Engineering GBA games. One popular approach is to implement the CPU as a machine-code interpreter.
Download Emulator For Pc: Bluestacks
Please, do make sure you own a physical cartridge for every game you play with VGBA. It is the right thing to do, both legally and ethically. NULL is pointing to the BIOS region, which as I mentioned earlier is protected so games can’t read its real contents. What would be returned instead is the last fetched BIOS opcode, and it just so happens that this value doesn’t cause any crash and the game continues as if nothing happened.
|
OPCFW_CODE
|
In the digital age, inclusivity and accessibility are fundamental principles that shape the way we interact and communicate. Telegram, a versatile messaging platform, is leading the charge by providing a range of accessibility features that empower individuals with diverse needs to participate in conversations,
connect with others, and fully engage in the digital realm. In this article, we’ll delve into how Telegram’s accessibility features are transforming communication, making it more inclusive and meaningful for everyone.
Telegram’s Accessibility Features: Bridging the Digital Divide
In an era where technology is a universal connector, Telegram’s commitment to inclusivity shines through its innovative accessibility features.
Accessible User Interface
Customizable Text Size: Telegram allows users to adjust text size, making it easier for individuals with visual impairments to read messages comfortably.
High Contrast Mode: The platform’s high contrast mode enhances visibility for users who have difficulty distinguishing colors.
Enhanced Auditory Engagement
VoiceOver Compatibility: Telegram is compatible with Voice-Over, a screen reader that assists visually impaired users in navigating the app.
Captioned Media: Videos and audio messages can be captioned, ensuring that users with hearing impairments can engage seamlessly.
Keyboard Shortcuts: Telegram’s keyboard shortcuts enable individuals who may have difficulty using touchscreens to navigate the platform more efficiently.
Simplified Menus: Streamlined menus and navigation options ensure a smoother experience for users who prefer simplified interfaces.
Voice and Speech Recognition
Voice Input: Users can dictate messages using voice input, benefiting those who have difficulty typing or navigating a keyboard.
Voice Messages: Telegram’s voice messages feature facilitates communication for individuals who find typing challenging.
Text-to-Speech (TTS): Text messages can be converted to speech using TTS technology, allowing visually impaired users to “listen” to messages.
Multilingual Support: Telegram’s multilingual capabilities assist users with diverse language preferences, bridging language barriers.
Gesture Navigation: Telegram’s intuitive gesture navigation ensures that users with mobility impairments can interact with ease.
One-Handed Mode: The one-handed mode feature aids users who may have limited dexterity or prefer single-handed interactions.
International Accessibility: Telegram’s accessibility features extend to users worldwide, addressing various accessibility needs.
Inclusive Engagement: These features foster a sense of belonging for individuals with disabilities, enabling them to participate fully in digital conversations.
Promoting Digital Inclusion
Awareness and Education: Telegram’s commitment to accessibility extends to awareness campaigns that educate users about the importance of inclusivity.
Continuous Innovation: Telegram’s dedication to enhancing accessibility features reflects its commitment to ongoing innovation in the field.
Telegram’s accessibility features stand as a testament to its commitment to inclusivity and empowerment. By providing tools that cater to diverse needs, Telegram is championing a digital realm where everyone can communicate, connect, and share experiences, regardless of their abilities.
In a world that is rapidly becoming more digital, Telegram’s accessibility features reinforce its position as a platform that values diversity, strives for equality, and believes that technology should be a conduit for unity and inclusivity.
|
OPCFW_CODE
|
Could not find the root block device (in Gentoo)
There are some trivial troubles that always obsess me.My Gentoo always complains 'Could not find the root block device in UUID=5f7c7e13-2a46-4ae4-a8c0-f77f84e80900' and stuck,once I try to boot. However, if I type the same device name /dev/sda2 in, the system goes on.I don´t know why. My Gentoo was installed in one partition /dev/sda2 and I mounted / into /dev/sda2.
I also have found some posts on the internet. Most posts say it is caused by kernel config,and compiling the corresponding fs as built-in into the kernel ,not as module can solve it.some says rootfs should be specified in grub after the kernel command, device name after root command in grub should be substituted by the UUID. I did it all, but those didn't work.
Here is my configuration in grub.
533 menuentry 'Gentoo (on /dev/sda2)' --class gentoo --class linux-gnu --class os $menuentry_id_option 'osprober-chain-225E1F815E1F4D43' {
534 insmod part_msdos
535 insmod ext4
536 set root='hd0,msdos2'
537 if [ x$feature_platform_search_hint = xy ]; then
538 ¦ ¦ search --no-floppy --fs-uuid --set=root --hint- bios=hd1,msdos2 --hint-efi=hd1,msdos2 --hint-baremetal=ahci1,msdos2 5f7c7e13-2a46-4ae4-a8c0-f77f84e80900
539 ¦ else
540 ¦ ¦ search --no-floppy --fs-uuid --set=root 5f7c7e13-2a46-4ae4-a8c0-f77f84e80900
541 ¦ fi
542 ¦ ¦ echo 'Loading Linux x86_64-4.4.39-gentoo ...'
543 ¦ ¦ linux /boot/kernel-genkernel-x86_64-4.4.39-gentoo root=UUID=5f7c7e13-2a46-4ae4-a8c0-f77f84e80900 ro
544 ¦ echo 'Loading initial ramdisk ...'
545 ¦ ¦ initrd /boot/initramfs-genkernel-x86_64-4.4.39-gentoo
546 ¦ boot
547
548 }
The Gentoo coexists with Ubuntu.
My /etc/fstab.
1 # /etc/fstab: static file system information.
2 #
3 # noatime turns off atimes for increased performance (atimes normally aren't
4 # needed); notail increases performance of ReiserFS (at the expense of storage
5 # efficiency). It's safe to drop the noatime options if you want and to
6 # switch between notail / tail freely.
7 #
8 # The root filesystem should have a pass number of either 0 or 1.
9 # All other filesystems should have a pass number of 0 or greater than 1.
10 #
11 # See the manpage fstab(5) for more information.
12 #
13
14 # <fs> <mountpoint> <type> <opts> <dump/pass>
15
16 # NOTE: If your BOOT partition is ReiserFS, add the notail option to opts.
17 UUID=5f7c7e13-2a46-4ae4-a8c0-f77f84e80900 / ext4 noatime 0 1
18 UUID=B66EAE686EAE215B /mnt/D/ ntfs errors=remount-ro
19
UUID of the corresponding name
/dev/sda2: UUID="5f7c7e13-2a46-4ae4-a8c0-f77f84e80900" TYPE="ext4" PARTUUID="000e21f3-02"
/dev/sda4: UUID="B66EAE686EAE215B" TYPE="ntfs" PARTUUID="000e21f3-04"
Is there anyone has some ideas? thanks.
Finally, I figured it out after several days have gone.It is caused by driver problem. My Gentoo is installed in my external hard-disk connected with my laptop by a USB cable.However, the USB Mass Storage Support option wasn't masked build-in when I built my kernel.Hence, it always blocked in that way.If some are in the same boat with me, and you make sure you have compiled all the referenced file system as built-in, please check if the options as follows are built-in in your kernel.
Device Driver-->USB Support -->USB Mass Storage Support
Device Driver-->USB Support -->xHCI HCD (USB 3.0) support
Device Driver-->USB Support --> EHCI HCD (USB 2.0) support
Device Driver-->USB Support --> UHCI HCD (most Intel and VIA) support
Device Driver-->USB Support --> Support for Host-side USB
If they don't, check it on.
Maybe it's the wrong hard drive in your grub.cfg
bios=hd1,msdos2 --hint-efi=hd1,msdos2 --hint-baremetal=ahci1,msdos2
hd1,msdos2, ahci1,msdos2 etc. would refer to the second disk. Usually it's the first hd0,msdos1 having the grub installed on /dev/sda
check this with grub-install --recheck /dev/sda
Your partitions would look like this. (boot on primary)
sudo parted -l
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sda: 107GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 103GB 103GB primary ext4 boot
2 103GB 107GB 4394MB extended
5 103GB 107GB 4394MB logical linux-swap(v1)
First of all, thanks @Michael D . I don't know what exactly the difference between ahci and hd, can you explain it more? And I have tried grub-install --recheck /dev/sda, not any charm present. Do you mean I should replace /dev/sda2 or specified UUID to hd1,msdos2?
I think the first line (from my answer) covers all cases - whether you boot via ´bios´, ´efi´, ideor ahci. AHCI stands for advanced host controller interface and is for sata hard drives which you need to set in the bios.
As far as I know, you don't edit the grub.cfg manually. You install grub straight to /dev/sda (or /dev/sdb?) - no numbers. update-grub or grub-mkconfig -o /boot/grub/grub.cfg "$@" generates the file for you
finally - here is a tutorial how to re/install/update grub. It's for ubuntu but that might work for gentoo, too. Actually you just need to boot (if you can fix it to boot), login as root and start with the grub-install line. http://howtoubuntu.org/how-to-repair-restore-reinstall-grub-2-with-a-ubuntu-live-cd
To be honest I think the easiest way to do this is to just make a generic kernel.
I simply grab Ubuntu's kernel config, issue a make oldconfig and let genkernel --no-clean --menuconfig all do the rest.
Emerge the kernel as Quick Install Guide tells you to.
Grab the generic kernel config:
1) Find the file you want. The most recent kernel version basically: http://kernel.ubuntu.com/~kernel-ppa/configs/
2) wget -O /usr/src/linux/.config kernelconfigurl
Run make oldconfig. Just hold "Enter" if you don't know how to answer these. No, nothing bad will happen, it will default to the default answer, which is what you should pick in a generic kernel anyway.
Run genkernel with genkernel --no-clean --menuconfig all. In the menu you can modify things if you wish, or you can just exit. And the install will commence.
Generate your GRUB config with grub-mkconfig -o /boot/grub/grub.cfg
This kernel will contain almost all modules and whatnot. So everything you plug in will work. Some Unix veterans frown upon generic kernels. If you ran Ubuntu, Fedora, or basically any distribution whatsoever - you used generic kernels.
Do you want to make a minimal kernel without messing up?
No problem. After you boot this kernel, simply plug in all devices you will ever need. Once done, go into /usr/src/linux and issue make localmodconfig. Great, now you have a kernel with only the stuff you will need. Use genkernel to compile the new minimalistic version and install it the same way.
Good luck.
First of all, thank you for your long answer. However, I used the generic kernel option to compile my kernel, but it seems that gen kernel doesn't switch on the USB Mass Storage Support kernel option by default, even though I plugged my external USB hard disk in my computer. I think this is the crux which causes this problem.
There is no generic kernel config on Gentoo, that's why I used Ubuntu's configuration. Using make localmodconfig only works once you are on the booted machine (not chroot/livecd). ** Even though the Gentoo wiki says it should work through the LiveCD it failed for me to do so.
In my case, I boot a VMWare Fusion virtual machine with Gentoo. I had to set CONFIG_FUSION_SPI=Y.
For me GRUB was searching for LABEL=FUNTOO, and either entering /dev/sda3 at the prompt that the isolinux prompt or temporally editing the GRUB boot option (the real_root argument) to that worked, so the solution was to label my /dev/sda3 partition to be FUNTOO.
|
STACK_EXCHANGE
|
① Auto trading bot crypto reddit india
Auto Trading Bot Crypto Reddit India
As crypto markets continue to recover, we’re noticing an increased trend of users interested in automated crypto trading bots and services. TradeSanta is a cloud software platform that automates crypto trading strategies. binary options trading ig markets South Africa Support crypto auto trading software India this project by becoming a supporter First of all I want to say that I made some small changes to my RSI, the crypto auto trading software India period is now 3 instead of 8 in close , the overbought zone is 92 instead of 70 and. Cryptohopper is a crypto trading platform that focuses on automated bot and copy trading.Cryptohopper allows users to trade based on either their own personal indicators auto trading bot crypto reddit India or copy other traders’ strategies. In all, the Indian government’s unsympathetic attitude and apprehension toward cryptocurrencies is crippling amber crypto trading Malaysia the growth of Bitcoin and crypto trading bot reddit 2018 India other digital Auto trading robots connect to online brokers in order to function and through the robot, you can choose the broker you want to trade with. The infographic below breaks down 8 of. crypto auto trading bot India; As a result, brokers crypto real trading volume crypto South Africa auto trading bot India can profit directly from the difference between what. Cryptohopper is the best crypto trading bot currently available, 24/7 trading automatically in the cloud.
In this article we’ll be exploring the top crypto trading bots that are currently available in the market. Trade your cryptocurrency now with Cryptohopper, the automated crypto trading auto trading bot crypto reddit India bot Discover best bitcoin cash usd investing India crypto trading bots overviewed for 2020 ️. it gives you more control over your portfolio and "does it for you" but still. A Bitcoin robot is an auto-trading These tools are not only customized for this type of trading but also gives access to crypto trading When choosing a automated trading bot,. Bitcoin bot & Binance Bot Our Cryptocurrency robot allows you to trade (buy/sell) our crypto robot signals direct to your compatible CFD broker. Well, day trading crypto trading bot reddit 2018 India is a trading style where you open short-term trades, most lasting. the main benefit of having a bot is being able to execute a strategy 24/7. you can blindly follow settings that people have put together but its still better to understand what it is doing and have a strategy of what you want to do.
R/NapBots: Autopilot Crypto Trading Bots www.NapBots.com. +15 crypto trading bot, Cloud-Based. Users can build a trading strategy using over 130+ indicators & candlestick patterns in Cryptohopper’s trading strategy designer Crypto auto trading bot malaysia Once you've decided which cryptocurrencies to purchase, you'll want to check up on how they're crypto trading bot how mucb Malaysia doing Best crypto trading bot strategy indiaWhile not a complicated equation, it is slightly more complex than the straight forward over the best crypto trading bot strategy India counter option Auto trading crypto bot png india.you can blindly follow dark pool trading platform India settings that people have put together but its still better to understand what it is doing and have a strategy of what you want to do. the main benefit of having a bot is being able to execute a strategy 24/7. For example, you can see in sklearn. Free Demo, no credit card needed. With paper trading, you could test your strategy before you put the real money for trading Simple Crypto Trading Bot India. Cryptocurrency trading bots are available for Binance, HitBTC, OKEx, Huobi, Upbit Cryptohopper is the best crypto trading bot currently available, 24/7 trading automatically in the auto trading bot crypto reddit India cloud. Easy to use, powerful and extremely safe.
Day Trading Platform Compare Singapore
They work but they require significant tweaking or understanding how to trade. It supports all notable popular cryptocurrency exchanges, and you can auto trading bot crypto reddit India trade in altcoin pair Another feature which makes 3commas widely popular among beginner is its paper trading feature. Easy to use, powerful and extremely safe. Trade your cryptocurrency now with Cryptohopper, the automated crypto trading bot Crypto trading bot by Cindicator. crypto trading bot results India BTCUSD 30min time frame used ,Tested with Forex ,Equity results are >80% when right entry is predicted. Stoic is automated Bitcoin trading bot for everybody. they work but they require significant tweaking or understanding how to trade. It’s the best cryptocurrency investing app: in 2020 Stoic’s auto crypto bot trading made +93% How to set up your crypto trading bot in 3 easy steps. it gives you more control over your portfolio and "does it for you" but still. Get full info about free and paid bitcoin bots 📈 to automate your crypto currency trading, 💸 top exchanges, features and prices, 💰 the cons and pros of using these tools Crypto auto trading software india. Trade Bitcoin, Bitcoin Cash, Litecoin, Dash, Ripple, Monero, Stellar, Zcash, ETC and Ethereum.. 3Commas is the idle crypto trading bot for hobbyists, enthusiasts, and professional traders.what is binary option trading scam Singaporehow to invest in bitcoin in kuwait Indiaexpert bitcoin trading books Singapore
- Speaking of trading with crypto bots in general, we. auto trading bot crypto reddit India
- Let’s take a look at the auto trading bot crypto reddit India top players.
|
OPCFW_CODE
|
Host Validation Doc
A document outlining proposed host validation tests.
Is this suppose to be a feature enhancement proposal?
@gardlt Kinda --- this is from a conversation @mark-burnett and I had yesterday regarding the best way to tackle smoke testing to validate that hosts come up properly during a Promenade Kubernetes deployment. The goal of this document is to provide insight into what will be in scope for Promenade vs other tools like Armada, Prometheus, or other configuration management platforms which may run in conjunction with Promenade. The idea of this document is to get the conversation started and iterate on it.
One further thought.
We should evaluate some of the code in the Kubernetes CI/CD gating system for this purpose. We obviously know that in addition to very specific code paths and complex testing to ensure they haven't broken edge case code, they also likely have tests for some of the high level elements we're attempting to validate here.
We should explore what they have and how easy it would be benefit our self-contained version based on tests they've already outlined.
At the very least, much of what they test can educate us on what we need to evaluate. For example, this document isn't low level enough to suggest, but eventually I expect this component (or external project) to articulate use cases along the lines of:
https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/kubelet_test.go#L40-L71
Create a pod, retrieve its logs, validate they have what we expect.
I am not suggesting that we recreate all of the integration tests Kubernetes already does, but these tests like these help validate basic operations with the full stack (our k8s, our docker version, our operating system configuration, where we are storing docker logs, etc.) in running environments.
@intlabs @wilkers-steve @aric49 @alanmeadows @v1k0d3n
CC: @bryan-strassner
I agree with a lot of the good points made here, especially regarding the focus of responsibility of particular services/code.
It feels like the three main categories of validation that have come up are:
Host configuration validation
Presence/content of particular files on nodes
Versions of installed packages (in particular docker since Promenade is currently installing that directly, as called out by @alanmeadows)
Overall cluster validation
Exercising the Kubernetes API/use some Kubernetes integration tests
Reachability among pods running on all nodes to one another
Ability to allocate cluster resources to all (appropriate) nodes, e.g. PVs
Critical application validation
Primarily etcd cluster health?
Other applications Drydock, MaaS, Ceph, Airflow, etc.?
It actually seems that these categories could each reasonably be delegated to different components. There's no reason to really expect a general cluster validation tool to check packages and files on nodes. Likewise, while tools like ansible are pretty good at doing tasks in category 1, I don't really love the idea of writing an ansible module to check etcd health/configuration.
To me the questions become:
How should we prioritize these different aspects of validation? It seems to me that 2. is more likely to expose serious problems and probably should be implemented first. On the other hand, the story @aric49 was working on seemed to be more about 1 (I might be mis-interpreting the intent).
How much should we worry about separating these concerns now vs. later?
Will other applications need to initiate these various kinds of checks besides Promenade at cluster formation?
Closing as we have moved to gerrithub.
|
GITHUB_ARCHIVE
|
Is overpressure really an appropriate criteria for blast damage?
So, I went on Nukemap, and they gave me the following values for blast effects.
1 psi (7 kPa): windows break
5 psi (34 kPa): most residential buildings collapse
These numbers are way too high. 1 psi corresponds to 237mph (381 km/h, 66 m/s) at sea level. This is enough to destroy every single building.
What this tells me is that there is some other factor at play. Specifically, the blast wave is of such a short duration that no building has sufficient time to react. In effect, the blast wave is an impact force, not a wind or pressure force. Therefore, it would seem more appropriate to quantify destructive potential in impulse, rather than pressure.
L The implications of this error are serious, since for a given overpressure, a larger blast produces greater impulse. Hence Nukemap and all other calculations that use the overpressure to determine destruction are wildly wrong.
Is that 237mph a constant? or only within a specified distance of the blast origin?
After some distance, being hit by blast debris or 'wind' is almost impossible, and there are just 2 things that can reach you; pressure waves and radio waves. So pressure is actually quite important parameter.
@TomášLétal no, pressure and wind are inseparable parts of the blast wave. Pressure cannot change without density change, and density cannot change without air movement.
@Abdullah You are right some air movement is required, but I would not call that wind. A good example of pressure wave is sound. When you hear a sound, do you also sense wind?
@TomášLétal no, because the pressure is quite low, 2Pa.at 100Db
@Abdullah Ok, so what would be the wind speed and orientation during a 5Pa sound?
In this context, if you're talking about "pressure" you have to specify some frame of refrence..... commonly "outside the building" vs "inside the building" for example. Alternatively, in the direction of the shockwave's movement. Then consider the forces generated over the area of windows, or walls . . . .
Overpressure isn't dynamic pressure. Overpressure is the change in pressure across the shock front. The velocities in front and behind the shock front are the same. A shock driven by a jet of fluid will normally have greater dynamic pressure in the jet than it's associated shock overpressure. Of course, things get complicated when the shock and jet impact an object. Scale is very important. Maximum blast effect occurs when the overpressure has a rise distance about the same as the object's size. Blast energy runs about as the cube of rise distance normal to the shock at constant overpressure.
@PhilSweet I don't know about shockwaves, but for ordinary waves, the dynamic pressure is equal to the static overpressure.
@TomášLétal a speaker produces both positive and negative air pressures to generate sound
@jsotola I know, I was just trying to show Abdullah, that pressure wave is not wind.
@TomášLétal also I forgot to mention. The audible frequencies are so high that we can't resolve the back and forth wind.
@TomášLétal 15m/s peak alternating to and fro at no more than 0.02s intervals
@DanielHatton I simply used the dynamic pressure equation
@Abdullah There might be some issue with this "for a given overpressure, a larger blast produces greater impulse". What would be the range of blast magnitudes which could actually be produced for a given overpressure? Also, how would you measure blast magnitude? Here for example, it is measured using overpressure.
@TomášLétal all blast magnitudes can produce a given overpressure, just at different radii
237mph might not knock down a building if the "wind" only lasts for a few milliseconds. Hence it is not accurate to equate blast pressure and windspeed.
Why do you not use metric units, so people could understand your post without converting it first?
@12431234123412341234123 because most (good) engineers can deal with many units - there are 4 for temperature for a start.
@SolarMike Good engineers don't use non-metric units. They are a pain and we should get rid of them as soon as possible. Use metric units.
@12431234123412341234123 Real engineers are competent in many sets of units - if you are explaining that you are not competent in other units then that's fine.
@SolarMike Non-metric units are objectivity worse than metric units (well, except the Planck units, but we don't talk about them right now). Only stupid people choose them over metric units. I know enough about this "units" (probably more than you) that i know we should not use them. I used "" for units because they are not really units. For example a inch can vary from about 2 cm to 3+1/3 cm, and yes, a inch of 3+1/3 cm is still used today (mainly Taiwan).
@12431234123412341234123 " objectivity " ? or objectively? Those units have been around for a long time and some of us are capable in both sets of units even if you are not. As many people in many countries are still using them, your opinion won't get them to change.
@DanielHatton Will you prefer Rayleigh or Buckingham? :)
@SolarMike This are not just 2 sets of units, the ancient units are incompatible with themself and units with the same name often have different, incompatible definitions. There is not just one inch and one pound, there are a multitude of different inches with different lengths and a few 100 different pounds. For today, it does not matter what we used 1000 years ago.
|
STACK_EXCHANGE
|
»Terms of Service«
§1.1: In the following contract
("contract") the creator ("service
operator") of this site ("ENC Records", "Web site", "Website") places conditions to you ("user", "users", "you")
and specifies a non-liability.
§1.2: This website enables the user to read information about the Label (ENC Records), hear the music of our artists und watch their videos.
§2: This is a contract between you and
the service operator.
§2.2: The contract regulates the
adhesion of the service operator (ENC Records). Read of it
in the further paragraph §4.
§3: Using ENC Records
§3.1: You are responsible for all
actions you take and take all responsibility
for all actions on ENC Records.
§3.2: You arn't granted to use ENC Records for any other purposes than
§3.3: You arn't granted to copy or publish pictures, music and videos from the Web site ENC Records on any other media. You
need a written permission. Read of it in the
§3.4: You accept that ENC Records
will log all your activities with your IP Address.
§3.5: You insure that you dont cause
damage in form of Spam, illegal
statements and contents or other immoral
actions on ENC Records.
§3.6: ENC Records can not be held responsible for any damage (ear damage, e.t.c.) caused by the music or videos on the web site.
§4: Restriction of the adhesion of the
§4.1: Because ENC Records (Web site) is
completely for free, you cant take any compensations for eventual damage,
random damage, following damage or damage
because of escaped profit. ENC Records cant further take the responsibility for the
content of linked pages.
§4.2: THE SERVICE OPERATOR CANT TAKE
ANY RESPONSIBILITY for the functionality,
usability, accesibility or something else
and dissociate from all damage which can be
caused by ENC Records and damages of
§5.1: The copyright for ENC Records is
owned by the service operator ("Lukas Wojcik").
§5.2: Possibly shown pictures, played music,
mentioned marks and shown brand names are
subject to the copyright of their respective
§5.3: Copying requires written
permission. Copyright © Lukas Wojcik 2022.
All rights reserved.
§5.4: All data uploaded by users and
printed on ENC Records is copyrighted by
the respective owner. The service operator
has no influence on user specific data and
can not be held responsibly for any damage
made by a user. If the user takes use of
register label names, the user makes
himself/herself punishable in some cases.
§5.5: All contents and contributions
of users are copyrighted and owned by the
§6: Changes of the contract
§6.1: The contract can change without
notification. If you dont agree to a changed point so
you have to quit this contract immediately.
§6.2: If you want to quit this
contract, you arn't allowed to use ENC Records anymore.
§6.3: The service operator is allowed
to cancel the contract at any time. He
is also allowed to disallow you the access
to some sites of ENC Records and to exclude
you for a period of time. If the service operator
cancels the contract, you aren't allowed
anymore to use any service from ENC Records.
§6.4: If this contract gets quited, all information collected by ENC Records about the User stay saved.
§7: Final clauses
§7.1: By using this website you
agree that all mutualnesses will be operated
in Austria, Vienna.
§7.2: Should one or more regulations
of this agreement become ineffective, so the
effectiveness of the other regulations dont
gets influenced in any way.
§7.3: Applicable right is the right
of the Austrian Republic.
|
OPCFW_CODE
|
Oil & Gas
DataGeometry BPA – Digital Oilfield Enabler with Intelligent Data Wrangling, Data Integration, Data Processing, Data Visualization & Reporting
Ironically, the digital oilfield still runs on paper. This makes it a challenge to extract important data from land records, well files, and oilfield transactions.
Energy organizations are currently embracing digital disruption yet ironically, the digital oilfield still runs on mostly paper based manual processes.
Companies are reshaping their futures, and our industry and technology practitioners are working with them to unleash data previously locked in silos and legacy documents, embracing cloud adoption, delivering digital innovation, and streamlining operations with flexible, agile product and service commercial models.
Our data wrangling services digitize and extract metadata, perform quality review, de-duplicate and file digitized content directly into a company’s target data stores within an agreed time frame. We apply an end-to-end service that involves artificial intelligence, automation, and analytics to unlock valuable information from a company’s data assets. You can accelerate the decision-making process, generate accurate and searchable data, enhance engineering with reuse of data assets and increase revenue. Each component is designed to run autonomously, allowing for rapid configuration and deployment
The Open Group OSDU™ Platform is a standard data platform that is being developed for the oil and gas industry, which will reduce data silos to enable transformational workflows, accelerate the deployment of emerging digital solutions for better decision, and put data at the center of the subsurface community.
The OSDU Data Platform Will:
- Enable secure, reliable, global, and performant access to all subsurface and wells data
- Reduce current data silos to enable transformational workflows
- Accelerate the deployment of emerging digital solutions for better decision-making
- Create an open, standards-based ecosystem that drives innovation
Data at the Center: All Subsurface and Wells Data Stored in a Single Data Platform
Standardizing Data Formats
Data (structured/unstructured/real-time) to be stored using standard data formats. They move away from proprietary formats and focus on formats that they can use more broadly
Leveraging on Metadata
Master data ensuring that we have a single set of definitions across OSDU and adjacent Data Platforms
A Single Source of Truth
Extracting Metadata to ensure that all data in the OSDU Data Platform can be located using Search or latest Graph technologies: They should be able to do shallow and deep search
Access At Ease
A well-defined set of APIs providing a standard way of accessing the data in the OSDU Data Platform. Make sure that all applications can use a single set of APIs to access all data sources in the OSDU platform
Connect with Us
Suites C-5-16, Metropolitan Square Commerce, Damansara Perdana, 47820, Petaling Jaya, Selangor, Malaysia
+60 37625 3153
|
OPCFW_CODE
|
The NYSE has been running Red Hat Enterprise to manage its operations since 2008.
Since 2006, open source computing via Linux has been in
full swing showing everybody using a desktop that computing can be a
freeshared resource environment as good as branded licensed software.
Linux community and corporate developers quietly put together the best
tools and apps to run Linux and the perceived overwhelming advantage of
MS Windows as an OS over Linux is now nil. Linux has a huge cost
advantage and the perpetual support of a dedicated user community with
no accruing paid license fees for every OS upgrade. For the branded versions of Linux, technical support is all you pay if you want professional dedicated product support, but there are no perpetual licensing fees for each product upgrade per desktop installed with Linux.
Today's number of Linux distros available to Linux users
Even if Linux might seem to have a very small share of the desktop user market with just beneath 1% of the overall market share worldwide, Linux is actually being used by EVERY business entity that matters. For business users, especially tech start-ups, bootstrapping costs are important to the decision running your business. Given that Linux can reliably handle most of what MS Windows offers, the trade-off for a shift to open source became a no-brainer.
Here are a few of the best companies using Linux..
Online Retail Giant Runs Linux
Since 2000, Amazon.com has been using Linux as part of its operations. First trying out Linux as a desktop alternative for basic office apps among its employees, Amazon has configured their entire database and inventory system to use Linux. According to ZD Net, in 2001, Amazon filed a document with the Securities & Exchange Commission stating that switching to Linux had saved the company $17 million. Linux runs everything in your favorite Amazon shopping site.
The World's Search Engine Runs Goobuntu
You wouldn't know that the ever-growing rooms of servers powering Google runs Linux. Google actually pays Canonical for technical support for its Linux servers. Not content with the UNIX server side, Google had its own software engineers tool up a home-brewed, Ubuntu-flavored distro dubbed "Goobuntu" for use in-house among employees running desktop machines.
Big Blue Runs Power Servers With Linux
IBM is known to be a dedicated supporter as well as customer for Linux. The company runs Linux internally on desktops and servers. In the last decade, IBM may have been the biggest financial contributor to Linux, and has committed huge technical resources to develop open source with the Linux community. IBM Power Systems are dedicated products that run Linux.
Your Favorite Free Encyclopedia Runs Linux
Everyone's favorite free encyclopedia, Wikipedia strongly supports Linux and converted to Ubuntu in 2008. The open source, information-advocacy organization is said to be running over 400 servers and needs to keep its data safe, secure and glitch free.
The World's Stock Exchanges All Run Linux
The world over, stock exchanges are either getting Linux to run their operations, or developing their own Linux based systems to run a very time intensive data crunching business. As of 2008, the New York Stock Exchange has already been using Linux to run its trading platform, choosing Red Hat Enterprise Linux. Meanwhile, an unexpected glitch in its Windows-run system shut down trading on the London Stock Exchange in February of 2011, for a full seven hours, causing huge losses for traders and listed companies. The London Stock Exchange has since abandoned the Windows platform for its operations and is considering Linux too. All the other major stock exchanges, from the Shanghai Stock Exchange to Germany's own Deutsch Borse, to the Chicago Mercantile Exchange, use Linux powered systems that allow fast, error-free, and secure trading.
A Look at Linux Technical SupportFor Businesses
Buying into Red Hat's Enterprise Linux product comes with free technical support while with Canonical's Ubuntu, tech support is an optional cost. Pay-as-you-go support is becoming one favorable option as well for bootstrapped start-ups. If you do not have a Linux expert on hand and if you really need an expert show you how to figure out or fix something, there are Linux consultants who commit as tech support specialists for a fee.
Almost all major Linux distros have a level of subscription technical support, aside from the community of techs who can pitch in and help out as consultants.
In spite of the cost of keeping a Linux consultant available at all times, numerous studies have
confirmed that cost wise, Linux computing is still competitive with brand software licenses. Free software for businesses plus nominal paid tech support will never exceed the
price of paid-license software plus paid support. And Linux rarely conks out or bogs down like branded software (aka Microsoft's Windows).
|
OPCFW_CODE
|
Migrating MFA configurations to a new (i)Phone can be tricky: this article shows how to migrate the tokens/accounts from the most common authenticator apps.
I recently got a new iPhone. All-in-all it was relatively simple to migrate my account credentials and tokens. However, it did take me a while to figure out how to provide additional verification for my Microsoft accounts at other organizations besides my work (using the Microsoft Authenticator). Also, I had to add my new phone to the native OTP of our Citrix ADC. This article describes my "MFA migration journey".
- Google Authenticator app
- Microsoft Authenticator app
- Citrix SSO app
- Citrix ADC native OTP: enroll your new phone
Google Authenticator app
Migrating the MFA tokens from the Google Authenticator app is actually really simple: you can export them to your new phone.
1 - Go to the official Google support article:
2 - Select your phone (Android or iPhone/iPad).
3 - Scroll down, expand the section Transfer Authenticator codes to a new phone and follow the steps.
Direct link for iPhone and iPad:
Direct link for Android:
You can choose to have the same codes on multiple devices if you so choose. If you are not planning to use your old phone then it is best to delete the codes from that device.
Microsoft Authenticator app
The account credentials in your Microsoft Authenticator app can also be exported and imported to your new device. This official Microsoft article actually explains the process quite well, both for iOS and Android devices: Back up and recover account credentials in the Authenticator app. Just follow the steps in the article.
You can choose to have verification codes for the same accounts on multiple devices, but the verification codes will be unique on each of these devices (all are valid). If you are not planning to use your old phone then it is best to delete the codes from that device.
What the article did not explain very well is how to provide additional verification for account credentials that are outside your organization. I am talking about the situation as shown in the following screenshots.
An account requires additional verification.
You have to (re)scan the QR code, but where?
The solution is as follows. Go to the following website:
On the top right, click on the organization icon (if you do not see this icon, continue here):
You now see the current organization you are signed in for as well as a list of other organizations that you are a member of.
Now you can select another organization for which you need to re-verify your account.
If you do not see the organization icon, you may experience the new MyApps website layout. In this case, you see the name of the organization you are currently signed in to directly under your name. When you click on the profile icon you can select another organization for which you need to re-verify your account.
Remember, you are a member of one main organization (your work or school most likely) and you are using one e-mail address. But your work or school-related Microsoft Azure user account can be a member of multiple (external) organizations.
After changing the organization, click on your profile icon in the top right corner and click View Account.
In the left menu pane, go to the section Security info. Here you can add a sign-in method. After adding your new phone you can also delete your old phone.
You can also click on Update info to modify your security methods.
Citrix SSO app
Citrix also has its own TOTP authenticator app called Citrix SSO. I was using this application on my previous iPhone, but during the migration to my new iPhone, I realized that there was no way to export or migrate the tokens. I then decided to re-enroll all of my tokens to Google Authenticator and stop using the Citrix SSO app. Unfortunately, I did not find another way.
Citrix SSO app on the Apple Store: https://apps.apple.com/us/app/citrix-sso/id1333396910.
Citrix ADC native OTP: enroll your new phone
I had to add my new phone to our Citrix ADC native OTP. In most cases, the URL will be https://ADCgatewayurl/manageotp. The URL can be different. Check the exact URL with your organization.
On the OTP enrollment website, click the plus sign to add another device.
Enter a name for your new phone.
Click Go and scan the QR code with the authenticator app of your choice.
I hope the information in this article was of some help to you.
Good Doc Dennis.
Thanks a lot Ray!
Thanks a lot, impressive. MS advised to contact Azure Admin in order to get QR code, having access to 40 organization it is impossible.
|
OPCFW_CODE
|
How should we improve our feedback and product management tools?
We are listening and use this feedback to prioritize our roadmap!
267 results found
Search should work on the entire feedback including sender name and email address, meta data, idea content etc.8 votes
Add an Idea ID or identifier to the idea properties
With no identifier on the suggestions or ID listed in the properties, there is no quick way to track ideas or send one to a coworker for easy search other than the URL.1 vote
Hello – We include “ID” as a property on an idea. This can be found in the “Properties” section on an idea’s detail sheet (screenshot attached). Admins can search for this ID from the Ideas page to locate the idea.
Gamify Suggestions & Votes
Create a feature to incentivize forum engagement via gamification1 vote
Allow .zip file attachments
Allow users to attach .zip files to ideas and comments1 vote
We appreciate your feedback on this!
.zip attachments can contain any file type, and since most browsers immediately download and open .zip files this is a security concern. For this reason, we are not able to allow .zip attachments.
For a list of file types we do support, please, check out our Knowledge Base article: https://feedback.uservoice.com/knowledgebase/articles/39139
Be able to choose a page when navigating suggestions
On a list of suggestions, could we have the possibility to choose a page and to go to the first or the last page of the list.
This would also help you keep track of where you are, in case you refresh the page you will not be taken back to page 1.3 votes
Transcribe voice to text
Provide a way to capture feedback via audio channels (ie Alexa, Google Home, etc) and transcribe the voice to text for submission into UserVoice.1 vote
Have search respect localization
Improve the search functionality of UserVoice!
Currently UserVoice search does not understand that color and colour is the same word.
There must be many, many examples of this.2 votes
The Ability of Voting Down on an Idea
We would like to have the ability to "vote down" against a suggestion. This functionality will help us understanding if the number of people that doesn't want that particular feature is bigger or smaller than the number of people that want a particular feature.1 voteDeferred · AdminMatt Young (CEO, UserVoice) responded
Downvoting is a feature that has been requested over the history of UserVoice. We believe that upvoting keeps the public conversation positive. Furthermore, voting is only one important metric in assessing the end user’s value of an idea.
We have found that strong, positive signal on a UserVoice idea will be one trigger that a product team will use to investigate the Idea further. Downvotes should not dissuade a product team from taking action if it is the correct strategic move for a product.
I know that this answer won’t satisfy everyone, but we have thought about it extensively, debated it internally at length over the years, and have arrived at this conclusion.
We thank you as always for sharing your ideas with us!
Do not close/archive suggestions here
I've noticed that some of the suggestions I've posted here are archived. IMHO, they are still relevant. IMHO, archiving suggestions does not make any sense - this would just result in duplicates created by others in a while.
Thus, please stop archiving/hiding/closing old suggestions and restore those that are archived.3 votes
Thank you for posting and sharing your feedback on this.
You are right. Silently archiving suggestion is not a good customer experience.
While we will need to decline ideas that don’t align with our Product Team’s vision, we need to communicate that to our customers.
We are declining this idea because we will have to close out suggestions, but we are going to work to communicate much more openly around this going forward.
If you have any questions around any feedback you have submitted, please reach out to me directly at firstname.lastname@example.org. I will be happy to look into this for you.
Give the option to remove attachments when merging
We have had to do a lot of merging of our top ideas and so those ideas are getting overloaded with attachments that are very similar images.
While we can manually edit ideas to remove attachments, it would be nice if we could pick and choose which attachments were retained when we perform the merge.1 vote
Allow bulk updates of texts in title and description for ideas for multiple forums
I would like to be able replace a word or phrase in the title and description fields for ideas within multiple forums. There is no way currently to accomplish this other than manually editing each ideas to change a wording. This can be very painful when you have hundreds of ideas that needs to be changed.2 votes
Ideas visible to Administrators and Contributors only
Allow suggestions that are not visible by the entire public, but live in a public forum. For example, if a large partner is requesting a feature that would only be available as part of a custom agreement, and the functionality would not be for public use. It's possible to enter this idea in a seperate private forum, but would be easier to just flag the idea itself as internal only.1 vote
When choosing "All Activity" under Ideas and Feedback main nav, the radio button "needs review" should be checked by default1 vote
Allow anonymous suggestions
Don't require users to enter an email address when posting suggestions or comments on specific forums.7 votes
Thank you for your feedback around this!
A core value of UserVoice is communication, allowing Product Managers not only to hear from users, but communicate back to them around their feedback.
We also help Product Managers when evaluating ideas, not only see how many people want it, but who those people are, enabling them to make data driven decisions.
Anonymous suggestions don’t allow either of those, and aren’t part of our vision for the product at this time.
While we are saying “no” to this particular idea, please keep sharing. We do appreciate you sharing your thoughts and product needs with us!
Referrer URL for votes
Please capture the referrer URL of a vote. I see you capture them for Suggestions.1 vote
Remove a user's support (or vote) from an idea
Would like to be able to remove votes from a suggestion.12 votes
When selecting labels, auto select the upper tiers if a lower tier is selected
When selecting labels, auto select the upper tiers if a lower tier is selected.
If I choose the Sub-Category label, then the Category level should autoselect. This way no matter if I filter by Category or Sub-Category labels, I will find what I am looking for.1 voteClaire Talbott responded
Thank you for sharing this idea and great feedback on labels! We are looking forward to getting additional feedback on this as well.
Please move the ''Mark as Spam'' much lower than the ''Merge'' button from Options
It's too easy to accidentally click on it, and then we have to go to Spam to unmark them.2 votes
- Don't see your idea?
|
OPCFW_CODE
|
You may well be wondering how a something like an online software store can have political implications. The answer is that the infrastructure that makes it work can, at least in the ongoing war on general purpose computing. If you aren't familiar with this war, read the article.
By delivering iOS with the App Store as the only authorized means of adding software, Apple has shown that it's possible to create a popular computing platform that isn't a general-purpose computing platform. I don't believe they did this by - as Cory Doctorow claims in the above article - delivering a device with malware preinstalled. Instead, they heavily censor the software that can be installed, turning iOS devices into platforms for running a small set approved software appliances.
Implication for Apple
Computer manufacturers are normally shielded from liability for any crimes committed using their products because the computers have so many legal uses, and they can't control the software that the customer installs. Apple has lost this last claim - they exercise absolute control over the software their customers install.
At some point someone will use an iOS device and an app from the store in committing a crime. Given the deep pockets theory - which says you pay for damages depending on your ability to pay, not your responsibility - and Apples very deep pockets, some bright lawyer will decide to try and show that Apple bears some responsibility for this crime because they allowed the app to be sold in the store.
Whether Apple will lose and have to pay, or pay to make the lawsuit go away, or fight and win is yet to be seen. But the lawsuit is certainly bound to happen.
Implication for users
The unauthorized way to turn an iOS device back into a general purpose computer is to jailbreak it. That is a violation of the DMCA, but jailbreaking is currently legal because the US Copyright Office provides an exemption to the DMCA for actions required to unlock phones for use on other carriers. Since jailbreaking is required for that, it's covered.
The question at hand is - how long will that stay true once the government realizes that that exemption allows people to use iOS devices for all the illegal activities that Apple prevents? Again, the answer is unknown, but the continued existence of the exemption - which must be reviewed at regular intervals - will get less and less likely as the war goes on.
Implications for other manufacturers
Those fighting to keep the public's computers from being able to perform actions that government has been convinced are bad for the country - basically, to keep the public's computers from being general purpose computers - will love iOS and the App Store. They demonstrate that it's possible to make money building and selling computing platforms which can have arbitrary restrictions placed on what they can do.
Since Apple can do this, why can't Dell, Gateway, Microsoft, HTC, and everyone else who makes computers or operating systems? If they try and resist - which doesn't seem likely, as such stores are profit centers - I'd expect it to become a legal requirement in order to sell computers.
If you like not having the government dictate what you can and can't do with your computers, you need to support platforms that don't let anyone do that. Avoid iOS. Avoid buying software through manufacturers stores. Avoid hardware that won't let you run alternative operating systems.
In a sentence: protect your right to choose by making sure you buy and use systems that preserve it.
|
OPCFW_CODE
|
If you have got the hang of Beginner’s Guide, and wish to model practical problems and build your original networks, this section will provide you with some detailed operations:
This section collects several documents arranging from the simplest to the most challenging, which will guide you through the basic deep learning tasks in PaddlePaddle.
The documentation in this chapter covers a lot of deep learning basics and how to implement them with PaddlePaddle. See the instructions below for how to use:
Simple Case :introduces basic cases of Paddle
Computer Vision :introduces cases of using paddle to realize Computer Vision task
Natural Language Processing:introduces cases of using paddle to realize Natural Language Processing tasks
Recommend:introduces cases of using paddle to realize Recommend tasks
Models Zoo:introduces the models zoo of Paddle
We packaged Jupyter, PaddlePaddle, and various dependency softwares into a Docker image. It frees you from installing these softwares by yourself, and you only need to just install Docker. For various Linux versions, please refer to https://www.docker.com . If you use docker on Windows or Mac , consider allocate more Memory and CPU resources to Docker .
This book assumes you are performing CPU training by default. If you want to use GPU training, the steps will vary slightly. Please refer to “GPU Training” below.
Just run these in shell:
docker run -d -p 8888:8888 paddlepaddle/book
It downloads the Docker image for running books from DockerHub.com. To read and edit this book on-line, please visit http://localhost:8888 in your browser.
If the Internet connection to DockerHub.com is compromised, try our spare docker image named docker.paddlepaddlehub.com:
docker run -d -p 8888:8888 docker.paddlepaddlehub.com/book
To ensure that the GPU driver works properly in the image, we recommend running the image with nvidia docker . Please install nvidia-docker first, then run:
nvidia-docker run -d -p 8888:8888 paddlepaddle/book:latest-gpu
Or use a image source in China to run:
nvidia-docker run -d -p 8888:8888 docker.paddlepaddlehub.com/book:latest-gpu
modify the following codes
use_cuda = False
use_cuda = True
Contribute to Book¶
We highly appreciate your original contributions of new chapters to Book! Just Pull Requests of your contributions to the sub-directory in
pending . When this chapter is endorsed, we’ll gladly move it to the root directory.
For writing, running, debugging, you need to install shell to generate Docker image。
Please Note: We also provide English Readme for PaddlePaddle book
|
OPCFW_CODE
|
How to create a role and give access to only certain users having that role to the block in moodle?
I have a block, now I want some certain users to access that block. Those certain user will have a role created for them. My question is, how to create a role, assign users to it, and that role will enable the users to see a certain block that I created.
Thanks
Here's how you can do it in moodle:
How a block can be made visible only to certain users?
1) Create your custom role from Site Administration > Users > Permissions > Define Roles
2) You can select an archetype which means selecting one them will allow you inherit that archetypes capabilities.
3) Also select the context as a block. So that you can assign this role from block settings(i.e. local settings) level.
4) Now go to home, turn edit on so that you can see the local block settings cog wheel on right corner of the block, click it.
5) Click assign roles to this block.
6) You are now in 'assign roles' page, see left column, under Administration you'll see Block: , and
under that there are settings like: 1) Assign roles, 2) Permissions, 3) Check permissions.
7) Click on Permissions, you'll see view block under Block. There is a plus sign beneathe, click it.
8) Now you can edit who can view this block from here. Just keep the role you created and delete others.
9) Now go to 'Assign roles'.
10) You are seeing a table with Role, Description and User with role column. Click on the role name in that table.
11) You'll arrive to a page, where you have bulk assigning user to that role option. Upon assigning users to that role, you complete the process.
I would create a capability for the block in blocks/yourblockname/db/access.php
'block/yourblockname:view' => array(
'captype' => 'read',
'contextlevel' => CONTEXT_BLOCK,
'archetypes' => array(
'manager' => CAP_ALLOW
)
)
You'll also need a language string for it in /blocks/yourblockname/lang/en/block_yourblockname.php
$string['yourblockname:view'] = 'View this block';
Then in your block class in blocks/yourblockname/block_yourblockname.php
Check the capabilitiy
function get_content() {
...
$this->content = new stdClass;
$this->content->text = '';
$this->content->footer = '';
...
if (!has_capability('block/yourblockname:view', $this->page->context)) {
// Return blank content so the block isn't displayed.
return $this->content;
}
You will need to bump the version in version.php for the capability to be installed.
Then go to roles and set the capability to allow to the required role.
|
STACK_EXCHANGE
|
openSUSE:Board best practices
This page collects so called best practices for common board tasks plus templates for handling stuff, mostly by email.
If you file a complaint, there is a Formal Complaint Process. The formal complaint needs to be filed at https://code.opensuse.org/project/coc/new_issue Either public or private
Submitters are asked to
- List all the facts and reasoning for the complaint
- Provide context, text and screenshots if neccessary
- Provide a self reflective paragraph of any interactions with the person violating the code of conduct
- Provide a potential course of action to resolve the situation
Please list all the facts and reasoning for the complaint. Provide context, text and screenshots if necessary. We ask that you provide a self reflective paragraph of any interactions with the person violating the code of conduct. Provide a potential course of action to resolve the situation.
- Your contact info (so we can get in touch with you if we need to follow up)
- Names (real, nicknames, or pseudonyms) of any individuals involved. If there were other witnesses besides you, please try to include them as well.
- When and where the incident occurred. Please be as specific as possible.
- Your account of what occurred. If there is a publicly available record (e.g. a mailing list archive or a public IRC logger) please include a link.
- Any extra context you believe existed for the incident?
- Do you believe this incident is ongoing?
- Any other information you believe we should have?
We cover complaints against a board members. In such a case, consider reporting via a proxy (i.e board member of your choice) should you feel uncomfortable reporting it in this format.
Two days before regularly scheduled board meetings, board@ and project@ receive an invitation/reminder which should include
- date, time (and time zone),
- virtual location,
- the current tentative agenda/topics, and
- how to submit topics.
(You can easily adjust that reminder via https://github.com/openSUSE/mail-reminder .)
- Everyone arriving, a few minutes of chit-chat
- Identify minutes taker (round-robin based on board members' first names)
- Verify previous minutes were published
- Review list of agenda items (which should already be in the notes skeleton in Etherpad, though still may be useful to double check the issue tracker)
- Check whether there needs to be a private part of the meeting (e.g. conflict resolution items)
- Regular items
- Private items (if applicable)
Board members alternately take minutes, in alphabetical order based on first names; by default the chair moderates the meeting.
Minutes should include:
- Heading with date of the meeting and list of participants
- List of topic discussed and their content
- An early invitation for the following meeting
Draft minutes should be sent to firstname.lastname@example.org and guests who attended the meeting within a day or two after the meeting. Drafts are kept on an etherpad: https://etherpad.opensuse.org/p/BoardMeeting
The final version of the minutes should be shared with email@example.com mailing list when reviewed and approved by at least two other board members. Provide the other board members at least 24 hours to respond before sending the final version out. Afterwards it should also be added to the openSUSE Board Meeting Wiki page as archive at https://en.opensuse.org/openSUSE:Board_meetings.
The deadline should be by the end of week the board meeting occurred.
The minutes taker shall also
- create code.opensuse.org tickets for AIs that came up, and
- close resolved tickets (such as agenda items fully covered).
Like other trademark-related topics all openSUSE Internet domains such as country-/language-specific domains and domains used by the marketing team are handled by the board.
After the board has agreed, domain will be registered by the trademark owner (currently SUSE, in the future this might be the openSUSE Foundation). SUSE IT is managing those domain registrations. Cost is covered by SUSE. DNS is managed by the openSUSE Heroes.
openSUSE country-/language-specific domains will only be made available to known and trusted openSUSE members, and ideally a group of those to avoid singe points of failure.
They are intended for language- or country-specific presence and use cases and should not "compete" with contents and services provided via our general openSUSE.org infrastructure. Two possible examples are opensuse.jp for a presence in Japenese language and opensuse.mu for a presence of the Mauritian community.
Such domains are to serve subsets of the global openSUSE community, which entails abiding by the openSUSE Code of Conduct and all other rules and practices.
Practical steps to register a new domain:
- Discuss among the group of people later using that domain, which can be the openSUSE marketing team or a local community/group of volunteers.
- Approach the board, either directly or via progress.opensuse.org.
- When/if the board has approved, create a ticket in progress.opensuse.org if there isn't one yet.
- The chair of the board (or possibly another board member) is going to liaise with SUSE IT.
- They also confirm the openSUSE ticket.
- openSUSE Heroes set up DNS for the new domain.
- SUSE IT registers the domain pointing to DNS servers provided by openSUSE (currently ns1/ns2/ns3.opensuse.org).
- Have fun!
Joining the Board
- After the handover meeting, send an email to firstname.lastname@example.org to be provided with Wiki admin access to be able to edit the Board pages.
- Subscribe to email@example.com by sending an email to firstname.lastname@example.org. (Alternatively, as of 2021-03-08 Gerald has admin access.)
- Add yourself to the Board and Board history Wiki pages.
- Give the Board pages a thorough read.
- Get access from one of the follow board members for https://code.opensuse.org/board/tickets to track board ticket items.
- Determine with other board members when is an optimal time to have a meeting with availability.
- Find the weekly meeting notes at https://etherpad.opensuse.org/p/BoardMeeting
- Ask about exceptions/responsibilities of being a board member. E.g. Annual openSUSE board meeting, initiatives to drive, etc.
- Gradually make yourself familiar with teams, projects, people and infrastructure associated with the openSUSE Project.
- Edit the board section of the openSUSE Wikipedia at least in English, if possible in German; other languages optional.
First Warning + 3 months ban
the board has received a complaint about your behavior [place where it happened].
We have reviewed the incident during our last board call and came to the
conclusion that your behavior was not in line with the way we want to
interact with each other and is a breach of our code of conduct.
The board has decided to issue a *warning* to you and ban you from the openSUSE
mailing lists and social media channels for three months.
After that period you are welcome to participate again.
If you continue to misbehave, you will be completely banned from the openSUSE
mailing lists & social media channels.
On behalf of the openSUSE Board
[Board Member's name]
|
OPCFW_CODE
|
import { v4 as uuid } from 'uuid';
import { ApiError, Client, Environment } from 'square'
const environment = process.env.NODE_ENV === 'dev' ? Environment.Sandbox : Environment.Production
// Create an instance of the API Client
// and initialize it with the credentials
// for the Square account whose assets you want to manage
const client = new Client({
timeout: 3000,
environment,
accessToken: process.env.SQUARE_ACCESS_TOKEN,
})
// Get an instance of the Square API you want call
const { ordersApi } = client
// Create wrapper async function
const placeOrder = async (order) => {
// The try/catch statement needs to be called from within an asynchronous function
try {
const { body } = await ordersApi.createOrder(order)
return JSON.parse(body);
} catch (error) {
if (error instanceof ApiError) {
console.log("There was an error in your request: ", error.errors)
} else {
console.log("Unexpected Error: ", error)
}
}
}
export {
placeOrder
}
|
STACK_EDU
|
How to create rhythm parts on piano?
I am trying to self learn piano. I can play some melody (right hand), can play some chords (left hand) and can play some melody with chords (right + left) with right doing all the complex finger movements for the melody and left just pressing the chord keys once at each change.
I am trying to play something more complex with left hand and interleave the notes with those on the right hand melody. But I do not know how to come-up with the rhythm parts. I try arpeggiating the chords instead of "playing" them but result does not fit in and sounds muddy. Also been trying a few other patterns for a while now, but nothing seems to stick.
My question: are there some patterns (rule of thumb etc.) for coming up with rhythm patterns given a melody? For example, in case of chords, one can rarely go wrong playing the chord for the root note of the melody and a progression thereof.
PS: Not sure if this is classified as "improvisation", but tagging it so.
Not an answer (as I don't have much more to add), but something like this: http://musingsofaministerswife.com/music-2/10-standard-left-hand-patterns-for-piano-explained/ might be worth a read.
@ James: good link. I've also found somthing: https://www.helblingchoral.com/media/catalog/products/S7965/doc/keyboard_acc.pdf and https://s3.amazonaws.com/tjtassetdelivery/Left+Hand+Pattern+Worksheet+-+Freebie.pdf but there are lots of them Looking for piano patterns left hand:
http://keyboardimprov.com/wp-content/uploads/2016/01/PopBalladAccompaniment_Complete_Book.pdf
@AlbrechtHügli Thanks for the links! Going to check them out in a bit. Note: it is not just about left hand patterns but how to "fuse" them with the melody that is where I am having the most trouble.
I learnt this just when I bought song arrangements with sometime bass and accompaniment in the left hand and others with bass left and melody and chords in the right hand. With the latter I struggled more but as you mean to say: for improvisation this is good to learn both. And you have it much easier today as you can just google and download ... ;)
I recently made a two-minute video to illustrate an answer on this site, https://www.youtube.com/watch?v=78-Ggxq6868 Is the rhythm aspect anything like what you mean? If so, I could try writing an answer and perhaps a more advanced video. The question for which I made the video is this one: https://music.stackexchange.com/questions/79638/transitioning-from-arranger-keyboard-to-piano/79658
I think 'improvisation' connotates for many people improvising a melody over chord changes like in jazz or a rock guitar solo. But, you are asking about improvising the rhythm part - the accompaniment part. You can try finding resources using keywords like 'accompaniment' or 'comping'. You can also look for guides along the lines of 'how to play from a lead sheet.' A lead sheet is just the song melody with chord names above the melody.
Here are two samples that seem good. Both show how a simple pattern can be adapted through small variations.
Jazz waltz using bass on beat 1, chord on beats 2 and 3...
https://mascaripianostudios.com/music-theory/jazz-waltz/
A synopated rock rhythm...
http://www.musicarta.com/beat-and-rhythm_6xx.html
The value of these kinds of resources will depend a lot of the style you want to play and the kind of learner you are. (For example, some people can't read notation and so notated sources aren't so helpful.) Look around for what fits your interest. Some items have previews online, but if not, you can try to get material through your library before committing to buy anything.
The best way to learn to improvise was for me:
left hand: bass
right hand: chords
and the melody: singing by my self!
(or accompanying someone else)
so my hands were free for the rhythm and later able to play the melody while the chords
were added to the bass or to the tune in the right hand. This gave me the greatest flexibility and this is the basic of improvising.
|
STACK_EXCHANGE
|
After some time playing with the AugmentedFarm I made some annotations and I will pass to everyone, hope that your start will be easyer than mine... I'm not expert and if I say something wrong please let me know.
Let's Go ...
1º - Create a new unity Project
Img - Empty unity project
2º - Import The Plugin Files from demofolder on the SDK folder... i get them from the from AugmentedFarm:
( C:\Program Files (x86)\Intel\PCSDK\demo\AugmentedFarm\Assets\Plugins )
- tracker ( Folder )
obs: the plugins from the others demo are different , they dont have the AR capacity , but this one have the AR and the others
Img - Drag the Plugins Folder
3º - Import The Pipeline folder
( C:\Program Files (x86)\Intel\PCSDK\demo\AugmentedFarm\Assets\Pipeline )
- SDKPipelineObject.prefab ( just a prefab of a empty object with the SDKPipeline.cs code )
Img - Drag the Pipeline Folder
4º - Import The Augmented Reality Scripts
( C:\Program Files (x86)\Intel\PCSDK\demo\AugmentedFarm\Assets\Book\scripts\AugmentedBook )
Img - Drag the AugmentedBook Folder and place on Scripts Folder
Now we got the basics to start the Augmented Reality...
1º - Reset the Transform from the Camera and Rename it to backgroundARCamera
2º - Drag the SDKPipelineObjectprefab to the Hierarchy on unity Editor, and reset the transform
3º - Create a new GameObject Plane so we can render the background ( RGB Camera ), reset the transform and rotate on X axis -90 and rename it to backgroundARPlane
We have now our background RGB image rendered if we press Play button
Lets Organize the things a bit ...
obs: dont make modifications while on playmode , because they back to previous when you exit playmode...
2º - Scale the plane for : x = 0.4 / y = 1.0 / z = 0.3
Now we can Make the Tracked Object!
0º - Move the SDKPipelineObject to another position
1º - Create a new Camera , call it ARCamera , and reset the transform
2º - Create a new Empty GameObject , call it ARObject, and reset the transform ... you are going to put your models on this gameObject
3º - Create a new Empty GameObject , call it ARVirtualObject, reset the transform and drag ARCamera and ARObject to it
4º - On the ARVirtualObject put the script ABVirtualFarmScript.cs
5º - On the ARCamera put the ABARCameraScript.cs
ps.: the Script look for "AugmentedBookCamera" ... open the script and change to "ARCamera"
6º - On the ARObject put the ABFarmScript.cs
7º - Create a new Layer for the ARObject, call it objectAR
8º - Change Culling Mask for from ARCamera to objectAR
9º - Change the Clear Flags from ARCamera to Depth Only
10º - Change the Depth from ARCamera to 1
10º - Create a new Object Cube and reset the transform ...
11º - Move the cube to ARObject
12º - Assign ARObject to objectAR Layer, hit "Yes, Change Children"
13º - Move The ARobject away to the ARCamera for better visualization
14º - Rotate the ARobject to x=90 / y=180 / z = 0 , for better visualization on the editor ( thats how object will stay on tracked )
z vector downwards , x vector horizontal , and y vector will look at que camera
We have the object being tracked , but its not on the right position / scale
Lets Organize the things a bit ...
1º - Scale ARVirtualObject to x = 20/ y = 20 / z = 20
2º - Scale ARCamera and ARObject to x = 0.05 / y = 0.05 / z = 0.05
Now we start to see the object being tracked almost on the right position from the marker... Notice that if you move the marker around , it looks like the cube is moving on the marker ...
To fix that , we have to calibrate the cameras Field of view, you will have to play around with this values:
1º - ARCamera Field of View
2º - Plane (from background) position
for this example, change ARCamera Field of view for 45 and it shold be fine ( if your plane is on the same place / scale from mine )
see now how the cubo remains over the sheep ...
Keep rotating the marker and moving from one side of screen to the other and see if it remains on the same place ( from marker ) thats where it should be... if its not , keep changing the values
Here I just create another cube and make a 'plane' for better visualization... I recommend you to do this, make a basic cube that indicate the size of the marker and another showing the rotation... To avoid losing the marker position, you can always uncheck the mesh renderer if you don't want to view.
I just duplicated the cube , moved and scaled a bit... See now it has the almost same size from the marker, but its not on the center of the marker.
Go to ARObject and change the Shift:
x = -0,07
y = 0,85
z = 0,14
See that now it has the same size and its centered = )
ps.:: It's not on the exactly right field of view ... you have to find the right value ( 48 / 48,5 it's a good start on this example )
The project will be here to download as soon as I discover how to attach it here, sorry.
And you can now start your Augmented Reality Apps...
obs : if you build the project it will not work ... but it will be fixed soon, do your things on the playmode for now
I would like to Thanks to RApp´s Studio (http://www.rappstudio.com/) for "free" me some time and space so I could start on Perceptual
Thanks to my Girlfriend for all the support s2 Love you xuxu s2
|
OPCFW_CODE
|
by Stuart McKee on March 04, 2010 07:45am
It's no understatement to say that governments today are stressed to deliver more with much less. One difference with the private sector is the government's ‘inverse' relationship to the economy - the ‘worse' it gets, the more demand for services rises ... tax revenues go down, while demand on the system goes up.
State and local governments are really feeling the impact of the recession and are losing valuable resources. Many of the "easy" cuts have already been made, and tough decisions like layoffs, delayed projects, and reduced services are being implemented across the country. Yet even in this somewhat grim picture, there are people finding ways to improve government, providing services 24 hour a day, more efficiently and with greater impact.
As Microsoft's National Technology Officer for U.S. State and Local government's, I get the unique opportunity to work with a broad array of our customers, and see some of the creative approaches they are taking to solving very hard problems. So when I see someone doing it well, it really sticks with me
Last week, Microsoft hosted our annual Public Sector CIO Summit. More than 300 CIO's from across the US federal, state, local, and education leaders spent two days learning and listening to one other and discussing how Microsoft's technology strategy and roadmap helps them solve hard problems. There were several great stories, but there is one in particular I want to call out.
Like a number of cities, the City of Miami had implemented a 311 system. It started out as a phone based system allowing Miami's residents to report non-emergency issues around the City. Citizens could dial 311 to report issues such as potholes, street light outages, or missed trash pickup.
Of course those same citizens wanted to know that progress was being made, and started calling the call center to inquire about their issues. Since this can decrease efficiency, the city took a big step and decided to put it all online. Now, people can view the status of the request and monitor the progress of the request resolution. In addition, citizens have full visibility into the progress of other issues being resolved around the city.
Here's the really stunning part. The City of Miami, two people actually, was able to build a new system in less than eight days over the holidays, with no up-front costs - from inception to running. By deploying it in the cloud, they not only sped up development, but eliminated the need for costly infrastructure.
The solution takes advantage of virtually unlimited storage and processing power, provides the ability to quickly address service requests and implement updates even during peak times such as hurricane season. If things change, the City can bring the solution on site or move to a physical facility, all based on need and cost-effectiveness.
As a result, residents logging on to Miami 311 can see on average 4,500 issues in progress - not represented as a ‘list', but located on a map in relation to other projects in their neighborhood . A simple click on the map allows them to easily drill down to more and more specific details if they want.
In short, they have turned what used to be represented by a meaningless list of data into useful information, and created actionable and consumable knowledge that is relevant to the citizens of Miami. For Miami, their ‘service call to the city' becomes an interactive process they can follow - and the City has a new tool to manage and deliver outcomes.
Anyone who has ever built a public facing, enterprise-level application, knows how spectacular that is. Everyone who wonders how their government is doing can appreciate the value.
When the city made the move to the web, they chose tools they knew and software they trust. The Microsoft Windows Azure cloud platform made it easy to do, and they used both Bing mapping and Silverlight to build a user friendly front end.
They took advantage of the technology roadmap we have built, which lets them decide what belongs in the cloud and what belongs on premise - in effect, they put our annual $9-billion R&D investment to work for citizens of Miami, right now.
No delay. Lower costs. Great use of existing talent. Better citizen services. Fantastic.
Our customers have made decisions about how their enterprise technology infrastructure needs to meet their business requirements. We've built the platform that helps them deliver on those choices across a broad set of technologies, and not just those that have our name on it.
In fact, our customers get to choose which data center their data lives in; the technology they want to write applications to access that data; and the developer tools they use to write the code. The Microsoft cloud today supports open source technologies such as Eclipse, PHP, Ruby, Python and PERL running on the Microsoft Windows Azure platform in our data centers.
In doing so, our customers have choice and avoid the problem of creating a new silo of complexity. Instead, they are able to extend their on-premises environment to fit their goals in ways they are comfortable with. Turns out, it is OK to use a broad range of technologies, including Open Source software, with Microsoft solutions.
Now, something that is really cool: Miami is making their solution available to other jurisdictions (no surprise, most cities deal with similar challenges). I can't wait to see what the next iteration of contributions will be, as more thought leaders across the country engage.
Miami really is taking a lead, in very hard circumstances, and we're proud that our technology is part of that solution. But, as I said, it's about people solving problems.
|
OPCFW_CODE
|
what is the best way to edit csv file
I have a incomplete csv file that needs to be accurately updated, so there is csv file like this :
one,two,three,four,five,six,seven,eight,nine,ten //This is a line(String)
Naturally the file is far more complex but in this format, here is what I want to do insert new-word,new-word1, new-word3 or n words between seven and eight(or any range). How can I do that?
Pseudo-code,code or example would be great, I don't have a clue how to even start.
UPDATE:
Maybe I should convert this to array or some kind of datastructure. Then insert new item at the certain position, shift the rest of the content right and do that for each insertion.
I don't know if is right way to go or how to begin programming it
UPDATE II:
Maybe read in csv in the list, split list into two lists, first one ending with seven. Then add n words to first list and join two lists at the end? also don't know how to program this
Take a look at OpenCSV.
UPDATE: Here's some (barely) pseudocode to give a general idea of how you would use OpenCSV for this:
CSVReader reader = new CSVReader(new FileReader("old.csv"));
CSVWriter writer = new CSVWriter(new FileWriter("new.csv"));
String [] nextLine;
while ((nextLine = reader.readNext()) != null) {
List<String> lineAsList = new ArrayList<String>(Arrays.asList(nextLine));
// Add stuff using linesAsList.add(index, newValue) as many times as you need.
writer.writeNext(lineAsList.toArray());
}
Hat-tip: @Mark Peters who points out that you can't update the results of Arrays.asList
@Hank Gay I either don't understand what you've posted or this writes your existing entries to a new file, I don't see anywhere here how can I write between items
Will OpenCSV escape double quotes properly?
@Joe Phillips I guess that depends on your definition of properly, since there's no spec. It's a straightforward lib; I recommend you experiment with it and read the FAQ on its homepage.
Just want to note that Arrays.asList is an unmodifiable collection. You want new ArrayList(Arrays.asList(nextLine))
Last time I used this library it was memory consuming (If you have to process file ~15 gb size it is very important to not require a lot of ram; I had to optimize it in OpenCSV)
@uthark Even when operating on one line at a time (as in the example)? I would expect the readAll functionality to consume lots of RAM on big files, but line-at-a-time should be very efficient, because it doesn't need to store more than a single line.
This pseudo-like code might help :
List values = Arrays.asList(line.split(",\s*"));
List newWords = Arrays.aList(newWordsLine.split(",\s*"));
values.addAll(7,newWords);
StringBuffer buf = new StringBuffer(values.get(0));
for(v : values.subList(1,values.size()) {
buf.append(",",v);
}
return buf.toString();
this will insert the newWords after the 7th item on the line
I'd parse it into a collection, add your items, then render it back out to a csv.
yes, how would you insert n items to any collection ex:array list between two items which are already inside
You'd find the object before you wanted to insert, get it's index, and then insert the three things right there?
For ArrayList, use the add(int, object) method.
|
STACK_EXCHANGE
|
I finished my many days session to work on improving SLiM.
SLiM is now updated to version 1.3.5, which “out of the box” support UTF-8. Now SLiM messages can be translated into your native language.
I wrote a theme manager for SLiM — a small shell script. Now, after the installation of a package with the theme, this theme will be chosen without further action by the user. Whenever you install a theme, the manager remembers the previous theme. If you remove the current theme, the manager returns to the previous theme used.
In addition, the manager translates theme's messages into your language! Now it support 30 languages: English (en), Български (bg), Bosanski (bs), Català (ca), Česky (cs), Dansk (da), Deutsch (de), Ελληνικά (el), Español (es), Esperanto (eo), Suomi (fi), Français (fr), Hrvatski (hr), Magyar (hu), Indonesia (id), Italiano (it), Lietuvos (lt), Nederlands (nl), Norsk (bokmål) (no), Polski (pl), Português (pt), Português (Brasil) (pt_BR), Română (ro), Русский (ru), Slovenčina (sk), Slovenščina (sl), Српски / Srpski (sr), Svenska (sv), Türkçe (tr), Українська (uk).
Original English messages and all translated messages are stored inside the theme. Therefore, theme author can completely choose message, for example, if he wants to see the word “Username”, “User” or “Login”.
There are two types of messages. “Welcome” messages I translated using Google-translator, and I'm not sure about the quality of translation. Please correct me! You can find my strings here. Translations for messages “Username” and “Password” I took from Wikipedia's multilingual logon pages. There's gotta be right, given the large community of Wikipedia.
I have created three themes for SLiM. That's how they look in different languages:
They are generally similar to each other, because I'm too lazy to invent a different layouts ;) Background is standard SliTaz's wallpaper (/usr/share/images/slitaz-background.jpg). The point is, that you see as it were has already booted the desktop (or at least its background); you need only to input a name and password, and — here it is, the desktop.
As usual, you can find all the new packages in the Cooking repo.
And, if you still use SliTaz Stable (4.0) as me, you can find these packages on my Mediafire page: SliTaz-4.0.
I would like to draw your attention to the fact that after installing the slim-1.3.5 you will need to restart your computer to complete the installation. SliTaz desktop session running on top of SLiM, so it halting/restarting will lead to a momentary shoot out to the console. We will not do that. After a reboot, you can use the new SLiM, and new themes, and all that is described in this post.
And happy SliTaz! ;)
|
OPCFW_CODE
|
Can you suspect false carding from the following situation?
You are playing this hand at 3NT having arrived there after you bid 1NT and North 3NT, East and West having passed throughout. Only you are vulnerable. (The hand is linked for reference; but you'd only actually get to see your hand and dummy as below, plus individual cards as they are played). Dummy is North, the lead (by West) is the 5 of hearts 6, T, won by J in hand.
AK9
63
AT94
J985
QT8
KJ7
QJ82
AK3
You have two chances for your ninth trick; a diamond finesse is 50-50. You also have an extra chance (8% by my calculation) if East has Qx of clubs, since you can drop the Q, then finesse against Tx remaining in West.
Since clubs is a "key" suit, I begin by inventorying the opponents' combined holding: QT7642. I start by playing the A, and expect the two lowest cards, 4 and 2 to drop (which in fact is the case). Say one opponent drops the 6 instead; I would then expect his partner to have the 4.
I lead the K, and sure enough, East drops the Q. But here's the tricky part; West drops the 7, meaning that the 6 and T are outstanding. Put another way, they can't both be playing their lowest card and at least one of them must be falsecarding!
Should I now suspect "chicanery," and go back to my 50-50 diamond finesse because my a priori 8% chance of a successful drop is now too low? (The actual South player tried a finesse which lost to East's ten.)
But perhaps South could now capitalize on the 3-3 break (36% a priori) by putting up his Jack, since the queen has been sacrificed. I can understand why East falsecarded, but why would West do so? Because if he hadn't I would have taken the club finesse, thinking that West had four cards and the queen was an honest drop from East. But faced with evidence of chicanery, I would suspect that E-W were trying to hide the fact that the diamond finesse would work.
@PieterGeerkens: I changed the first two sentences so that the bidding and vulnerability information matches the link. Reminder noted.
If the diamond hook loses you'll still make the hand when hearts are 4-4. That makes cashing a second club questionable, since you could be setting up the defenders' fifth winner.
@AdamWildavsky: After trick 1 there is only a 1110 / 2019 = 11/38 chance that Hearts are split 4-4. (ie that both unplaced Hearts are with East, with 9 and 11 cards unknown in West and East respectively.) Combined with the Diamond finesse that gives roughly a 64.5% chance to make the contract. Testing the Club Q first gives a 50% + 1/3 * 48% = 66% chance to make on a straight doubleton Q, plus the additional chance here of an expert defender attempting a subterfuge that fails on an endplay.
Dear Frank has omitted one additional possibility - there is a throw-in working against West when (a) it is actually possible to make the contract and (b) West holds T762 of Clubs. Once the Club Q falls on trick three this play should be visible to Declarer.
Cash three Spade tricks ending in Dummy to leave this for Declarer
S: -
H: 3
D: AT94
C: 5
S: -
H: K7
D: QJ82
C: -
and the possible West holding to be these:
S: - or J or J or J
H: AQ8 AQ8 AQ8 AQ
D: Kx KX K Kx
C: 6 - 6 6
In all cases West has at most three cashing tricks, and no exit - just lead the last Club to West, pitching a diamond from hand, and wait for your last 2 tricks.
Update #2 - Play details:
Spades should be cashed in the order K; then Q; then A to maintain control if Spades break worse then 4-2. Then the Diamond Q should be pitched from hand on the 4th round of Clubs to unblock; there is no need for more than 2 Diamond tricks and we desire to win both in Dummy.
Update #3 - Odds of both lines of play:
After trick 1 there is only a 11*10 / 20*19 = 11/38 chance that Hearts are split 4-4 (ie that both unplaced Hearts are with East, with 9 and 11 cards unknown in West and East respectively). Combined with the Diamond finesse that gives roughly a 64.5% chance to make the contract.
Testing the Club Q first gives a 50% + 1/3 * 48% = 66% chance to make on a straight doubleton Q, plus the additional chance, as here, of an expert defender attempting a subterfuge that fails on an endplay.
Update:
Note that an expert West holding the Diamond K, and thus knowing that his partner has at most three HCP, will almost always false card in Clubs so that (a) a false card by his partner in the suit is believable; and (b) if East cannot false card in Clubs then Declarer won't know whom to believe because the signals are inconsistent. It is a mandatory play at that level, and cannot hurt because his partner (a) already knows what to return if he gets in and (b) has nothing to be fooled about.
The guideline here is that when the defensive values are split decidedly unevenly between the two hands, the weaker hand false cards only out of necessity while the stronger hand will false card frequently. This can sometimes assist Declarer in placing cards, but works to Defenders' advantage more frequently.
Note that after 3 rounds of Spades by Declarer any Spade remaining in West's hand cannot be an entry to East, as it is the 13th card of the suit.
Thanks for your help/guidance in improving my question. I'll grant you that this sequence "works to defenders' advantage more frequently." But the price seems awfully high, because declarer would benefit more from one 36% chance (3-3 split with Jack high because the Queen was dumped) than with four 8% chances (exactly Qx with East). And the first is what was created to produce the illusion of the second. Not to mention the other possibility you brought up.
@TomAu: I cannot follow what you are arguing. This hand has to be IMPS play because in matchpoints you test the diamonds first in order to set up a squeeze or throw-in for extras in Clubs and Hearts.
Coming from me, a question would be IMPS or rubber, because I am not familiar with Matchpoints. It's been 40 years since I've played that version of bridge.
@TomAu: Accurate defense is simply not possible without both communication, based on agreed signalling methods for attitude, count, and suit preference, and faith that partner's signals can be trusted absolutely whenever a decision actually needs to be made on the signal. That latter point is at the core of when and how to false card as a defender: Never lie to a partner who will have to make a decision based on your lie. This has the advantage of simultaneously showing mixed-signals to Declarer at occasional times when he is looking to make a critical decision.
Good defenders do not simply play the lowest card from their hands when following suit; they signal for partner. Intermediate players sometimes signal when they should not. Here, West was presumably signaling count -- initially playing their lowest card (showing that it was played from an odd number), then playing their highest (showing that it was played from an even number). This is consistent with the situation that West started with 762. Count signals are frequently played to declarer's leads.
If E played the Q from QT4, this was a very good play (assuming that they did not hold the DK as well) because it gives declarer a chance to go wrong. If West signaled count faithfully, they would have ruined partner's play. I would cash the CJ and try diamonds if clubs didn't start 3-3, congratulating West for deceptive signaling if they started with T762 and no DK.
Throw in of West is on if West has 4 Clubs - just lead the last Club around to him after cashing three Spades, pitching a diamond from hand.
@PieterGeerkens That's a good point.
|
STACK_EXCHANGE
|
var _main = function () {
$.get("desc.txt", function (_desc) {
var _desc_lines = _process_desc(_desc);
$.get("dict.csv", function (_dict) {
var _dict = _process_dict(_dict);
_recode_desc(_desc_lines, _dict);
});
});
};
var _process_desc = function (_desc) {
var _lines = _desc.trim().split("\n");
return _lines;
};
var _dict_types = [];
var _process_dict = function (_dict) {
var _lines = _dict.trim().split("\n");
var _d = {};
for (var _l = 0; _l < _lines.length; _l++) {
var _fields = _lines[_l].trim().split(",");
var _type = _fields[1].trim().split(";");
_d[_fields[0]] = _type;
for (var _t = 0; _t < _type.length; _t++) {
if (_type[_t].trim() === "") {
continue;
}
if ($.inArray(_type[_t], _dict_types) === -1) {
_dict_types.push(_type[_t]);
}
}
}
_dict_types.sort();
return _d;
};
var _recode_desc = function (_desc_lines, _dict) {
var _results = [];
for (var _l = 1; _l < _desc_lines.length; _l++) {
var _d = _desc_lines[_l];
var _types_match = _create_null_types();
for (var _key in _dict) {
if (_d.indexOf(_key) > -1) {
var _type = _dict[_key];
for (var _t = 0; _t < _type.length; _t++) {
if (_type[_t].trim() === "") {
continue;
}
_types_match[_type[_t]] = 1;
}
}
}
var _line = [];
for (var _t in _types_match) {
_line.push(_types_match[_t]);
}
_results.push(_line.join(","));
}
var _result = _results.join("<br />\n");
_result = _dict_types.join(",") + "<br />\n" + _result;
_finish(_result);
};
var _create_null_types = function () {
var _types_match = {};
for (var _d = 0; _d < _dict_types.length; _d++) {
_types_match[_dict_types[_d]] = 0;
}
return _types_match;
};
var _finish = function (_result) {
//console.log(_result);
$("body").html(_result);
};
_main();
|
STACK_EDU
|
There’s been a lot of discussion in the past few weeks about how anonymous Bitcoin actually is. If you’re new to Bitcoin, you can check my short introduction to the cryptocurrency over at Ars Technica.
Bitcoin is often incorrectly touted as a completely anonymous currency, which is not quite correct. Bitcoin uses a distributed transaction register, which is completely public. In fact, it depends on this open ledger to allow coins to be signed over from one owner to another. This sounds like it might completely defy any measure of anonymity. After all, if every coin can be tracked from transaction to transaction, it should be no trouble to keep tabs on the whole network. This is the point that a lot of insightful commentators have been making in the past weeks. Tim Lee‘s post over on Forbes distills this criticism concisely, and expresses the mechanism by which Bitcoin de-anonymization might happen. Tim’s article is well worth a read, but I think that he misses some of the ways that Bitcoin-users can — and do — remain effectively anonymous with the currency. In particular, I think that he misses just how much of an advantage it is that operations with Bitcoin can be automated, and abstracted away from the user.
If I have one Bitcoin account, and I use that for all incoming and outgoing payments, it’s very easy to keep track of my transactions. Anyone who has ever given me coins can now see exactly where I send how much money, forever. However, this is not the way that anyone really does or ever should use Bitcoin. It’s standard practice to use a new address for each incoming payment. This way, there’s no link between different inbound transactions. When making an outgoing payment, pick a selection of addresses whose balances add up to only slightly more than the sum you wish to pay. Pool those into a new address (with a little left-over in one of the original accounts), and send the whole payment from that new address. Over time, you accumulate little remainders, which can help fill in the gaps with other payments. This approach keeps transactions largely separate, and makes it very tough to associate more than a couple of transactions. With this approach, it’s mind-bogglingly tough to track a particular person, because there isn’t any particular identifier for “them”. They don’t make and receive payments from any distinct account or unique login: they just keep a wallet full of private keys that own some coins.
Right now, the Bitcoin client is fairly simple. It has all the low-level technical details set up, and it’s meant to be used by savvy users who understand the underlying technology, and know the implications of their behavior. Currently this sort of account-balancing is done manually, by users who know what they’re doing. However, it’s a very simple practice to automate, and we should expect future Bitcoin clients to implement this sort of obfuscations natively, without exposing the user to the technical details. The interface is for this can be simple, and uncluttered: a user just sees their balance, summing over all the addresses they own. Whenever you need to receive a payment, the program generates a new incoming address, and perhaps shows it as a QRcode; whenever you need to pay out, tell it the address and the amount and it’ll take care of the details. As long as the user’s connection to the Bitcoin network is relatively anonymous, minimal information is spilled. Even using heavy-duty anonymity software like Tor has minimal inconvenience because the Bitcoin network already has a fair bit of lag time before confirming transactions.
Tim also plays down the importance of money-laundering services, suggesting that they’re too much trouble for most users. Again: while current interfaces are simplistic, these sort of services can be highly automated, and present very simple interfaces for users. We could imagine a simple Bitcoin-laundering service as follows. Users (through a client) ask the laundry service for an address to pay into, and they specify an address to pay out to, as well as how quickly they want the payout. The service specifies an address and the users pays some money in. At a regular rate, perhaps every minute, or ten minutes depending on the number of users, the service makes a whole load of transactions, paying all the users who requested payouts at about that time. It randomly picks which addresses to make transactions from, so there’s no easy way to link an incoming payment with an outgoing payment. Because the outgoing transactions all happen at the same time, the laundry service acts like a medium-latency mix network, making it very difficult to use timing to associate a user’s inbound and outbound address. Of course, the laundry service takes a small cut from all these transactions. This sort of laundry doesn’t have to be interactive: in fact, it’s the sort of activity which would be well suited to occurring silently and slowly in the background of a Bitcoin client. Users pick certain parameters, like how fast they want all their coins to turn over, or how much to spend, and the client negotiates all the details with a selection of laundry services approved by the user. Other tweaks could include prioritizing certain transactions for laundry, or using a number of laundry services chained together to reduce the amount of information that any one of them has.
There may not be many dedicated laundry services right now, but if people start using Bitcoin for more personal activities, and big brother starts paying closer attention, demand for these sorts of services may well increase. Conveniently, users don’t have to place much trust in a laundry service. If I want to clean a large quantity of money, instead of depositing it all at once, I can deposit it one chunk at a time, and wait for the payback to be confirmed before putting in my next piece. Depending on the size and reputation of a service, it might use chunks ranging from pennies to a few dollars in size, and a user would be free to pay multiple chunks at once if they trust the service and require some speed. This sort of thing should be more and more expected if Bitcoin extends to wider use.
These are just some examples of the techniques that can be used to retain privacy when using Bitcoin. Many of them are complex in structure, but can be automated and represented to even un-savvy users with a very simple interface. All of these strategies could probably be used with traditional currencies too, however it would actually require the user to manually perform all the steps involved. With Bitcoin — like web browsing, instant messaging, or any other complex and protocol-driven activity — users don’t have to understand every detail of the interactions to use the system effectively. The strength of Bitcoin isn’t that it’s anonymous per se — it isn’t — it’s that it makes automation easy and keeps transactions secure. The underlying protocol is already in place, now we can innovate on the techniques and processes that make it convenient and anonymous, or give the system any other properties which we can design for.
|
OPCFW_CODE
|
Type: Posts; User: PlatinumRH
I also recently saw a demo. Anyone tried using the DG Box to talk straight to BACnet controllers?
Unless you are having major performance problems I wouldn't do it. I've seen older computer get completely hosed up when attempting a defrag. At the very least I would back up the computer before...
You would need the Alerton Envision software, and a copy of Microsoft Visio to really be able to program the Alerton controllers. I don't think you'd have to have a BCM if you used the Jace as your...
I second what Matrix said. Virtual XP mode is the only way I've gotten FX Builder to work on Windows 7.
I am using gmail for my SMTP on 2 different AX sites with no problems. I use port 465 with SSL enabled.
Histories aren't a problem if you set them up to archive in the Server. The ORD for the history in the Jace is the same as the ORD for the archived history in the Server. For example:...
If all you have under tools is application lookup then its definitely the privileges of the account you are signing in with. The Alerton rep should be able to get in there and change that for you.
Sentry, I am pretty sure the "_" is not a special character and you don't need the "$5f", just an "_"
I sent you an email to the address in your profile.
I don't have the Honeywell ax wizard jar. We just need to pull in the points, not any programming. Do I need the wizard jar to get the points? The NVs I learned don't match what I am seeing in the...
Is there a Hwell XL15C shadow object/jar file in the palettes somewhere? I've looked and can't find one. When I did a discover on site it came in as a Dynamic Device with a bunch of NVs that don't...
Can I get the .xif file using the R2 Jace and convert that file to an .lnml file to use with my AX station? Anyone know or done this before?
Yeah, it was a dynamic device in the R2 Jace also. I was hoping I was overlooking it somewhere since I could do a lot more work offline if I had the device. Thanks.
If there are an time delays in the program then you definitely need to put equipment in hand. They reset when DDC is sent to the controller.
Does anyone know if there is a MNL 800 device/shadow object for Tridium AX? I am upgrading a site from R2 to AX and don't see it in the LonSiebe palette or anywhere else?
Can you email me what Landis & Staefa you still have, if any? Thanks.
Did you check to see if the points are "out of service"?
pault is correct. You would need Alerton's Envision for Ibex software and an Alerton USB or parallel port key. You would also need a cable to connect directly to the microview.
We've had sucess with export tags. This way you can house the graphics on both the Jace and the Web Sup. If the Web Sup goes down, then you can access the Jace directly for the graphics.
The BCM-ETH and the MS/TP controllers have different network numbers. Make sure you are scanning all networks when do the discover and that your jace network number is different.
Try the PxInclude widget in the bajaui palette
frontline, check your email address that's in your profile
When you go to first install it you should have the option to choose the full version or the commissioning only version. Make sure you installed the full version.
For anyone else with the same problem I finally found the solution. All of my AX installations were 32-bit. I tried uninstalling one version and re-installing it at 64-bit. As soon as I did this...
I am interest also! Thanks.
|
OPCFW_CODE
|
#include "Mutation.h"
Mutation::Mutation(void)
{
mutationProbability[0] = 1;
mutationProbability[1] = 0;
mutationProbability[2] = 0;
mutationProbability[3] = 0;
mutationProbability[4] = 0;
mutationProbability[5] = 0;
mutationProbability[6] = 0;
mutationProbability[7] = 0;
mutationProbability[8] = 0;
mutationProbability[9] = 0;
randomNum = 0;
}
void Mutation::mutateInverse(std::vector<std::vector<int>> &vectorOffspringsPopulation)
{
//std::cout << "Mutating 1 population, inversion....." << std::endl;
//Iterator vector, to be able to find stuff
std::vector<std::vector<int>>::iterator iteratorVector;
std::vector<int> vectorChromosome;
int randomBeginIndex = 0;
int randomEndIndex = 0;
int counter = 0;
//Inverse a random selected sequence in a chromosome in population
for(int i = 0; i < vectorOffspringsPopulation.size(); i++)
{
//Mutation probability
randomNum = mutationProbability[rand() % sizeof(mutationProbability)/sizeof(int)];
if(randomNum == 1)
{
vectorChromosome = vectorOffspringsPopulation[i];
randomBeginIndex = rand() % vectorChromosome.size();
std::vector<int> vectorIndex;
vectorIndex.push_back(0); //Push back dummy element (needed to do this because compiler was complaining about integer zero division)
//Push all values bigger than randomBeginIndex up to 7 (into vectorIndex) to make randomEndIndex
for(int i = 1; i <= (vectorChromosome.size() - 1) - randomBeginIndex; i++)
{
vectorIndex.pop_back(); //Pop dummy element
vectorIndex.push_back(randomBeginIndex + i);
}
randomEndIndex = vectorIndex[rand() % vectorIndex.size()];
//Create a vector sequence which will be filled and reversed
std::vector<int> vectorSequence;
for(int k = 0; k <= (randomEndIndex - randomBeginIndex); k++)
vectorSequence.push_back(vectorChromosome[randomBeginIndex + k]);
std::reverse(vectorSequence.begin(), vectorSequence.end());
for(int j = randomBeginIndex; j <= randomEndIndex; j++)
{
vectorChromosome[j] = vectorSequence[counter];
counter++;
}
iteratorVector = std::find(vectorOffspringsPopulation.begin(), vectorOffspringsPopulation.end(), vectorChromosome);
//Did not find, apply change
if(iteratorVector == vectorOffspringsPopulation.end())
vectorOffspringsPopulation[i] = vectorChromosome;
counter = 0;
}
}
}
Mutation::~Mutation(void)
{
}
|
STACK_EDU
|
Softdisk PC was a diskmagazine of the 1980s and 1990s, which started in 1986 as Big Blue Disk and later became On Disk Monthly before settling into its final name of Softdisk PC. It ended publication in the late 1990s. It was one of the publications of Softdisk Publishing, which also included Softdisk for the Apple II, Loadstar for the Commodore, Softdisk for Mac (formerly Diskworld), Softdisk for Windows, Gamer's Edge, and PC BusinessDisk (and a few others).
It had various specialized file formats used in presenting its articles and menus, run through a "shell" program which presented the issue.
Early issues used a text-mode shell that worked on the whole range of PCs from non-graphical Monochrome Display Adapters (although these weren't officially supported because most other programs on the issues required graphics) to the various graphical adapters (CGA, EGA, VGA, SVGA). Later a new graphical shell was developed which required VGA or up. The storage medium for the issues also evolved with the times; originally one or more 360K 5 1/4" PC-format floppies, later a 3 1/2" 720K disk, and still later a CD-ROM with its whopping 600 megabytes or so of storage. While the floppy medium was still in use, compactness of size was of great importance, leading to the use of the Softdisk Text Compressor.
Various markup commands were embedded in the text files comprising magazine articles.
The CGA/EGA/text-mode issues (and a number of other Softdisk products, but not the later VGA issues) had a file called STATUS.DAT which was used to check on which issue (and which disk of a multi-disk issue) was inserted (for instance in order to tell whether the right disk was inserted or if the system needs to prompt the user to insert a different one), as well as to store some user settings. Since the system re-wrote the file regularly, occasionally something went wrong and clobbered it, which could interfere with the functioning of an issue (as it would then be unable to determine that the correct disk was inserted).
STATUS.DAT is a text file with one parameter per line in a fixed order. Over the history of its use, some new parameters were added on to the bottom of it, so the last lines didn't appear in early issues.
The lines are:
- 1. "Status file for BIG BLUE DISK" (or various other product or company names after "Status file for"). Generally, programs that used this file checked for the string "Status file" to see whether it was a valid-format file.
- 2. Name of publication: "BIG BLUE DISK", "Softdisk PC CGA/EGA", etc.
- 3. Issue number.
- 4. Issue date (e.g., "March 1990"). Was blank in later issues which no longer had an official date, as well as various undated products.
- 5. Disk number within issue. (0 if entire issue was self-contained and no disk-swapping was necessary.)
- 6. Current menu position. Used to save place in menu when the menu program exits to run another program on the issue, but needs to get back where it left off once the program exits.
- 7. Text brightness setting. Where user could switch between light and dark text, this saved the current setting.
- 8. Sound flag. Saved whether user turned off sound. Various programs might read this and honor the user setting.
- 9. Version ID: indicates what version of the issue it is. (Some issues got remastered to fix bugs, etc.) The formatting of this varied over time.
- 10. Tag line sometimes displayed in the menu, e.g., "The PC Software Subscription".
- 11. Color setting (when user had choice of color schemes for menu).
Softdisk diskmagazines had the files in their disk directories carefully arranged to put the files related to each program together (though they were all in the root directory of the disk; subdirectories were not generally used). The files for each item were separated by zero-length dummy files with lots of dashes in their names. (This was a tradition going back to earlier Softdisk publications for other platforms such as the Apple II and Commodore 64, which didn't support subdirectories in their disk filesystems.)
Of course, once you look at a Softdisk PC disk directory in Windows, that system "helpfully" sorts the directory for you, meaning that the related files are no longer necessarily together, and the separator files cluster uselessly.
|
OPCFW_CODE
|
On 03/17/2011 01:00 PM, Thomas Bächler wrote:
Am 17.03.2011 16:50, schrieb Dieter Plaetinck:
Gerhard is inactive. Since I'm the only other releng person, some people expect me to do it, even though I don't like being involved in archiso and I already spend a lot of time on all other releng tasks (aif, image building, image testing, releasing, ...) Gerardo knows archiso and has commit access to it, but he also has little time and he never made any commitment to us in any way (he is just a community member). Gerardo has done some work in the past, but not in the last 3 months.
This is worse than I thought: I saw Gerardo reply to some patch sent, but he never commited it. I even sent him an email if he could finish some cool work that I started, but he never got back to me. I assumed he was semi-active, which apparently is false.
(The work I started was about making a self-contained image in the /arch/ directory on the ISO. This is partially done already, but I recently succeeded in moving /syslinux/ into /arch/ and only leave behind a tiny /boot/ directory that would chdir() to /arch/boot/syslinux when loading. This would further simplify installing onto an existing vfat USB drive, as you only need to 1) copy the whole /arch folder 2) extlinux --install /mnt/arch/boot/syslinux 3) adjust the archisolabel in the syslinux configuration files)
Thomas (and a few other devs) know archiso and has commit access, but he also has more then enough Arch stuff on his plate. Hell, and I can't even get that done.
If someone can collect the pending patches from this list (and maybe Gerardo's github) for me, I can check and apply them. But I guess we desperately need a maintainer for archiso. It is a task with not a lot to do, but still needs to be done.
Hello, First, sorry for "no response". I can take this weekend to review all pending archiso patches. I think that there are at least three patches: 1) Thomas: syslinux/arch directory structure. 2) Charles: autologin on tty1 3) Simo: serial port support. Please let me know if there are more. Please resend to ML and append in the tittle [archiso]. There are minor issues (can be considered as cosmetic) with aufs2 and util-linux-2.19 that I want to report to upstream. PS: I will change compression from gzip -> xz by default when 2.6.38 hits [core]. -- Gerardo Exequiel Pozzi \cos^2\alpha + \sin^2\alpha = 1
|
OPCFW_CODE
|
Starting last week, hackers foiled a handful of software providers that promote freedom of information by helping web surfers in China reach the open Internet. The attacks that drastically slowed the anti-censorship services of San Francisco-based GitHub and China-based GreatFire.org emanated from computers around the world. Unbeknownst to their owners, attacking computers apparently were infected by code triggered by using the advertising or analytics tools of Baidu, China’s largest search engine—a company whose shares are traded on the NASDAQ exchange. Baidu has said it has found no security breaches and is working with other organizations to get to the bottom of the attacks. Have the latest cyberattacks, as some coverage has suggested, “weaponized” the computers of unsuspecting global netizens? What should governments, businesses, and individuals do about this apparent spread of China’s official command-and-control vision of the Internet beyond its borders? —The Editors
Wednesday, April 1, 2015 – 6:01am
The Chinese have already weaponized the Internet. They assume that everyone else has done the same thing. China does not see the Internet as a benign force. They see the Internet as a weapon aimed at their heart. It is therefore completely natural that they will respond to what they see as threats directed at China that originate on the Internet.
One method they will use for protection is to create a Chinese sovereign Internet. Within China, the Internet will be entirely in the control of the Chinese authorities. This is a Balkanization of the Internet. The Chinese authorities understand this and welcome the result.
The problem for the Chinese is then is what to do about attacks against China that come from outside of the borders of China. They have a two-prong policy. First, the Great Firewall will block access to China. This is the primary strategy. Second, where the Great Firewall is not effective, China will strike back, using the open Internet as a weapon. This is exactly what is happening in the current GitHub denial of service attack.
Officials of the Chinese government and their academic advisors believe that their actions are completely justified. Every country has a right to self-defense and China is simply exercising that basic right. For this reason, cross border discussions asking the Chinese to stop this practice will fail. That is, this kind of attack is not an example of malicious hacking. From the Chinese point of view, it is legitimate self defense.
So what can be done? There are three basic strategies:
- Submit to the will of the Chinese and remove all content that the Chinese see as a threat to their interests.
- Understand the threat and install countermeasures specifically designed to deal with the threat from China and other countries with a similar basic approach.
- Attack back, understanding that cyber-war is still war and that any counter-attack may result in unanticipated consequences: more extreme damage, blowback, collateral damage, and the like.
Since no one in the U.S. has made any effort to understand the Chinese position, no one is publicly taking any steps that are likely to have any practical impact. I therefore expect that capitulation will be the most common response. Capitulation is fine when you are small and weak. Capitulation is humiliating when you pretend otherwise.
TEXT: CHINA FILES
|
OPCFW_CODE
|
Designing a Framework for a MS/MS Library to be Utilized for Compound Identification
Meth Jayatilake (Mentor: Dr. Amrita Cheema, Department of Oncology and Biochemistry and Molecular & Cellular Biology; Director of Metabolomics Shared Resource, Georgetown University Medical Center)
August 28, 2018, 2:00pm, Room 1300, Harris Building
Identification of metabolites is a challenge in the field of untargeted metabolomics. Tandem mass spectroscopy (MS/MS) can be used for identification by comparing spectra of pure standard compound with that in the sample for a given metabolite (m/z). Current workflow in untargeted metabolomics relies on manual comparison of these spectra for metabolite identification. Hence, the creation of the MS/MS database will enable compound identification by matching spectra using peak and pattern matching algorithms, rather than the manual evaluation that is currently done. Although online libraries already exist, information can be limited, and peaks detected can vary between vendors and instruments. More importantly, the pattern matching is still a manual process. Therefore, a database structure was created to be interoperable between vendors by using the netCDF exchange data format as its input.
For each compound that was run, the MySQL database would store associated information including empirical formula, molecular mass, and monoisotopic masses for each ionization mode. Each compound would also be associated with a HMDB ID to maintain chemical information consistency. The netCDF (a universally readable format of raw MS/MS files), for each metabolite, also underwent peak peaking by using the XCMS package for R. The m/z value, intensity, and retention time for each peak of every compound was stored in the database.
To query the database, files containing information about the positive and/or the negative ionization mode netCDF as well as the respective monoisotopic masses is required. Each of the files would first undergo the same R peak picking service that was used to store compounds in the library. The next step was to select parent ions from the library that were close to the query value. For each of the compounds that were selected, a retention time window would be created for each based on the parent ion to determine the true compound peaks to compare. The query peaks were compared to the highest intensity daughter ion peaks to determine a match.
Although precautions can be taken to minimize inconsistencies, such as by maintaining proper parameters, variance can still be observed between instruments. The retention time window was a feature that other MS/MS libraries did not have, and therefore, our workflow would allow for reliable peak selection even between instruments. This project also created a framework for lab specific MS/MS libraries. Construction of these libraries based on data each lab has acquired will allow for more accurate identification of properly curated metabolites, which will be beneficial to metabolomics researchers. Finally, the workflow that we have developed can be translated for any vendor specific MS/MS library.
- Summer 2018
|
OPCFW_CODE
|
Cost of Obtaining Italian Dual Citizenship
One question I am often asked is how much it costs to obtain Italian Citizenship jure sanguinis. The answer is, it really depends on several factors including how many records you need to order, which states you are going through, the price your translator charges and which consulate you are going through.
Number of records
Generally speaking, the more generations you have to go back to claim Italian citizenship, the higher the cost. The reason for this, of course, is you’ll have more people to order records for the further back you go. The number of records you are making corrections on can impact this as well. When you request a correction, you’ll likely need to order and pay for a new copy of the corrected record.
States/Cities/Counties you are ordering records from
The cost of records and apostilles can vary greatly by state, city or county. I’ve seen states charge anywhere from $15 to $25 per vital record. Similarly, apostilles can cost anywhere from $1 to $20 each depending on the state. The cost of making corrections also varies. I wrote about a strategy on collecting records in this article that could help you keep unnecessary costs down.
The Italian consulate
Requirements can vary greatly by consulate which will increase or decrease the cost. For example, the number of documents the consulate requires be translated or have apostilles for can differ significantly. In addition, some consulates are more understanding when there are minor discrepancies whereas others require everything match across documents. Having to correct many records can add significantly to the cost. I have not heard of any consulates charging to actually apply. In addition, most consulates require you apply in person. The added cost to visit the consulate in person will vary based on your proximity to the consulate, accommodation needs, mode of transportation and the number of times you need to visit the consulate.
UPDATE 4/8/2015: Consulates are now charging a 300 Euro fee (paid in US Dollars for consulates in the United States) due at appointment. This fee is subject to change every three months.
Translators vary in their price. I’ve seen translators charge anywhere from $15-$50 per record.
Summary of costs
In general you can expect to pay :
- Vital Records: $15-$25 each
- Naturalization certificates and paperwork: $22 (for NARA documents), $35 (for USCIS certificate plus $20 if you need to do a search)
- Corrections: $10-$50 per record
- Apostilles: $1-$20 each
- Translations: $15-$50 per record
- Cost to visit consulate: varies
- Postage and miscellaneous: $20-$40
Have you run into any unexpected expenses in your own journey to claim Italian citizenship? Do you have advice for others on things you have learned to keep costs down?
|
OPCFW_CODE
|
McCloskey, D. (1985), “Economical writing”. Economic Inquiry, 23: 187–222, http://onlinelibrary.wiley.com/doi/10.1111/j.1465-7295.1985.tb01761.x/abstract
Hamermesh, D.S. (1992): “The young economist’s guide to professional etiquette”, Journal of Economic Perspective, 6(1), pp. 169-179, https://www.aeaweb.org/articles.php?doi=10.1257/jep.6.1.169
Kristin Sainani “Writing in the Sciences”
Publication Manual of the American Psychological Association
Writing mathematical relations and models
Thomson, W. (1999): “The young person’s guide to writing economic theory”, Journal of Economic Literature, 37(1), pp.157-183, https://www.aeaweb.org/articles.php?doi=10.1257/jel.37.1.157
Varian, H.R. (1997): “How to build an economic model in your spare time”, http://people.ischool.berkeley.edu/~hal/Papers/how.pdf
Issue 1: Procrastination
Pomodoro technique: 20mn work, 10 mn relaxing, 20mn work, …
Issue 2: Inefficiency
Mostly because one does not know where one’s time goes
Records the time spent on the computer by software category
Issue 3: Discouragement
To record what you did, not what you need to get done.
Issue 4: Forgetting
You might feel that you included all that was required in your paper, but think again!
Swan – Scientific Writing Assistant http://cs.uef.fi/swan/
Issue 6: Organization
Logical structure of the document, insertion of references in the correct place
Mindmaps: http://www.xmind.net/ or http://freemind.sourceforge.net/
Issue 7: Revisions
The issue is people becoming too attached to their paper
A good and fun way to destroy and re-construct a paper is to print it, cut it, and paste it back, for real!
Choosing an outlet
- Top journals vs Field journals
- There are many ranking lists:
- http://www.scimagojr.com/journalrank.php?area=2000 (for economics, econometrics and finance)
- http://www.harzing.com/jql.htm (Journal quality list for Economics, Finance, Accounting, Management, and Marketing)
- http://tool.handelsblatt.com/tabelle/?id=33 (Handelsblatt)
- https://scholar.google.com/citations?view_op=top_venues&hl=en&vq=bus (Google Scholar)
- Your university will also probably have its own ranking list
- Look into what journals your adviser published in, what journals people you cite published in.
|
OPCFW_CODE
|
Software apps and online services
Hand tools and fabrication machines
I have designed an Arduino based wireless dual-motor driver board, it has an nRF24L01 transceiver module, dual L293D motor driver and an Arduino Nano V3 microcontroller. On the transmitter side, I built a breadboard Joystick controller, which also includes an nRF24L01 module. In this project, we wirelessly controlled a robot car chassis with two DC motors with a joystick. Let's take a look at the video on "how it works?".How It Works?
If you have viewed my previous projects, you may have noticed that I often use the nRF24L01 transceiver module. The nRF24L01 transceiver module is an easy-to-use low-cost wireless communication module. In this project, wireless communication was provided by using the nRF24L01 module on both the receiver and transmitter sides. Two L293D motor drivers were used on the receiver side and a two-axis joystick module was used on the transmitter side. Of course, there is an Arduino Nano V3 on both sides.
The analog values read from the X and Y axes of the joystick in the transmitter are processed by the Arduino Nano, and then the data is sent to the receiver via the nRF24L01 module. On the receiver side, the data received with the nRF24L01 are processed by the Arduino Nano, then the L293D motor drivers move the DC motors in the desired direction.Transceiver Communication Module
The nRF24L01 transceiver module allows two or more microcontroller to communicate with each other wirelessly and remotely. It also has the 2.4 GHz frequency band and the communication range is high enough. A transmitter and a receiver circuit are required to provide communication.
In this project, we built the receiver circuit as a motor driver, and a Joystick breadboard control circuit for the transmitter circuit.
An address (array of bytes) is created for two nRF24L01 modules to communicate. It communicates over this address and transfers data. If you are having any problem with getting it work, try adding a 10uF capacitor in parallel to the Vcc and Ground pins. Or you can use the power adapter module produced for the nRF24L01+
A popular library called RF24 is used to interface with the nRF24L01 transceiver module. RF24 Arduino Library for nRF24L01 Module: https://github.com/nRF24/RF24Two-Axis Joystick Module
Joystick module is mostly used in Arduino based DIY projects and Robot Control. The joystick module was used on the transmitter side of this project. It's very easy to connect and fun to use. The module gives analog output. We can use a Joystick Module with Arduino, Raspberry Pi and other Micro controllers. We just need to connect the VRx and VRy axis Pins to the Analog Pins of the microcontroller.
The output range is fixed for all directions. The image above shows the analog output value for the X and Y axis depending on the movement of the Joystick Module in four directions (+X, -X, +Y, -Y). You will also get some analog value as you move the knob diagonally.The Joystick Transmitter Breadboard Circuit
We briefly mentioned the two basic components for the joystick transmitter hand controller. As seen in the circuit, a Joystick, an nRF24L01 transceiver and an Arduino Nano were used. Also, a powerbank was used for power. Make the connections according to the shared circuit diagram.
nRF24L01 to Arduino Nano Connections:
nRF24L01 VCC to (10uF) +5V
nRF24L01 GND to (10uF) GND
nRF24L01 CSN to Digital 9
nRF24L01 CE to Digital 10
nRF24L01 MOSI to Digital 11
nRF24L01 MISO to Digital 12
nRF24L01 SCK to Digital 13
Joystick to Arduino Nano Connections:
Joystick VCC to +5V
Joystick GND to GND
Joystick X to Analog 1
Joystick Y to Analog 2
I built the circuit on a 400 hole breadboard and fixed it on the powerbank. So I got a prototype that I could easily hold in my hand.
You can also take a look and try the PCB version of Joystick Hand Controller, Accelerometer Hand Controller or Flexible Sensor Hand Controller from the links below.Motor Driver Receiver Breadboard Circuit
L293D IC motor drivers were used on the receiver side of the project. To briefly mention the L293D motor driver, one of the easiest and inexpensive way to control DC motors is to interface L293D Motor Driver IC with Arduino. It can control both speed and spinning direction of two DC motors.
Also, it can even control a unipolar stepper motor like 28BYJ-48 or Bipolar stepper motor like NEMA 17. The L293D is a dual-channel H-Bridge motor driver capable of driving a pair of DC motors or one stepper motor. That means it can individually drive up to two motors making it ideal for building two-wheel robot platforms.
The L293D motor driver IC actually has two power input pins viz. Vcc1 and Vcc2. Vcc1 is used for driving the internal logic circuitry which should be 5V
From Vcc2 pin the H-Bridge gets its power for driving the motors which can be 4.5Vto36V. And they both sink to a common ground named GND
The L293D motor driver’s output channels for the motor A and B are brought out to pins OUT1, OUT2 and OUT3, OUT4 respectively. You can connect two DC motors having voltages between 4.5 to 36V to these terminals.
Each channel on the IC can deliver up to 600mA to the DC motor. However, the amount of current supplied to the motor depends on system’s power supply.
Two DC motors can be controlled with one L293D IC motor driver, but the current supplied may be insufficient for more powerful DC motors. Therefore, two L293D ICs were used in the project, which also enables the use of two stepper motors.
You can connect the breadboard circuit according to the circuit diagram shared above. Two 3.7V 18650 Li-ion batteries were used for power.Motor Driver Printed Circuit Board
After building and testing the breadboard circuit, I designed a printed circuit board to turn the project into a useful prototype. Printed circuit boards are plates with conductive paths on the surface for mounting electronic circuit components.
To get the PCBs, simply upload the shared "Gerber file" from https://www.pcbway.com/project/shareproject/ to PCBWay and create an order. High-quality PCBs will arrive in a few days depending on the shipping address.
The Wireless Dual Motor Driver Board needs a few components. Easy solderable components. Place and solder components according to shared reference designator.
PCB dimensions are approximately 87mm x 51mm. Each motor has an input terminal for easy connection. There are also female headers for unused Arduino Nano pins. So you can include different sensors in your project.Transmitter and Receiver Source Code
Download the shared Receiver and Transmitter source codes. First of all, the first thing you need to do is download and install a popular library called RF24 to interface with the nRF24L01 transceiver module. You must use the library manager for this. If you want, download the library from the link below and add it from the library manager, or search for the name RF24 from the library manager and press the install button.
On the transmitter side, the pins to which the Joystick X and Y axes are connected are defined. Two variables are defined to read data from the joystick X and Y axes. A total of two data will be sent and received for the two axes.
In both source codes, the pin number to which the CE and CSN pins of the nRF24L01 are connected is defined. Also, a unique address (0xE8E8F0F0E1L) is defined to enable communication between Transmitter and Receiver.
The analog values of the X and Y axes are read and a total of two data is sent. Then, by initiating serial communication, these two axis values are displayed on the serial monitor. View the displayed minimum - maximum values of the X and Y axes and note them for the Receiver side.
Open the Receiver source code, define the pins to which the motors are connected. Then enter the Transmitter X and Y axis values in the section of the motor control functions. Then upload the receiver code and display the transmitter/receiver communication on the serial monitor. For more details, please see the programming section of the project video.Assembly on the Robot Chassis
After editing and uploading the Receiver and Transmitter code, mount the motor driver board on a 2WD Robot Chassis of your choice, and connect the DC motors. If the motors move in opposite directions, change the positive and negative poles of the DC motors at the motor input terminal.
If you run into a problem please leave a comment, I will reply as soon as possible. Thanks for reading.
|
OPCFW_CODE
|
A friend of mine asked me to make an audio montage with several tracks, for making a CD. He’s in America and I’m in Europe. What I thought is, ok, I’ll make the audio montage, will burn the montage in a CD, and then, instead of sending the CD by post/curier, I’ll make an .iso or .cdr image from the master CD and I’ll upload it to Dropbox/GDrive, etc. and he can download it and burn a CD from that image.
Well, the plan is not bad,… but after trying making several images from the original CD Master, I can’t get a working copy. While burning (as test) the images created from the CD Master, I just get errors, and errors. The CD Master works perfect, it’s only the image I create from this Master, that can not be burned into a cd correctly.
Is there any way to make this working? What would you do?
An .iso or .cdr image is not suitable for CD Audio. If your friend also has WL, it’s possible to burn a Data-CD with your montage files (.mon) and all needed audio files and then create an .iso from that.
Or even better, create a DDP image from the montage (via ‘Render’). This is the standard distribution format these days for fully authored Audio CDs. Your friend can use a DDP player (Hofa for instance) to burn an exact CD copy from that.
As I’m able to make a Master CD with WL LE 9.5, I just found an old freeware app for burning cds in MAC, called Burn. This app is able to make an image of the Master CD (it creates 2 files, one iso and other one). The only problem is that on my tests, the copy of that iso it does not contain any sign of CD-Text written on the Master CD.
So finally I’ve donwloaded a trial and I made a DDP.
Yeah, this comes up a lot. People try to send a foolproof master via internet using ISO, Nero, or other consumer type file formats and assume all the CD-Text and ISRC codes will remain but it can often be more complex than that, or unexpected things don’t carry over.
The reality is that DDP is the only foolproof way to do it. It’s one of the rare cases where it’s both easiest and best to use it.
WaveLab 9.5 Pro comes with a DDP Player that you can send to others but it doesn’t burn CDs last I checked which is an unfortunate oversight.
You can use WaveLab Pro to make the DDP, and the HOFA DDP Player Maker app is pretty affordable and creates a universal DDP Player embedded in the DDP folder, or the HOFA standalone DDP Player that you friend/client could buy is about $10 USD.
You can do this in Nero if I remember right. It’s not a standard format, but NERO on windows is your best bet. Toast may also do it on the mac, I can’t remember. I had to do this once, and maybe in the search function.
|
OPCFW_CODE
|
An important fondation for GRIN transactions will be Payment Proof made mandarory. It will disable the ability of receivers to deny receipt of payment.
For that David @david has created this solution through this RFC which I believe has been implemented in GRIN:
It seems to me that the solution doesn’t prevent any party to pretend having received fund through a given kernel with a given amount from different (fake) senders addresses, enabling the ability to make many “receiver” proofs for a given kernel. Also the sender could artificially make different payment proofs with many different (dishonest) receiver addresses that he makes up. This can probably create problems.
My proposed fix is that payment proofs include proof-of-signature-of-kernel in the payment proof, making it not possible to people that have not signed the kernel to claim receipt or spend of fund for this kernel.
We can achieve that by requesting that the payment proof for the receiver incorporates a simple signature with its partial excess as public key and including its receiver address and the total excess in the message for this signature
The sender should also do the same for his partial excess and prove that he signed it, while committing to his address, total excess, and receiver address.
We can verify that the sum of the two partial excesses sum to the total excess that is being committed to.
The point of the payment proof is to prove to the receiver that you sent the funds to them. It’s signed by the private key of the receiver’s (slatepack) address. A receiver with a known slatepack address cannot deny receiving the funds as long as the following conditions occur:
The kernel is confirmed on chain.
The signature that commits to the kernel and amount validates against the known slatepack address.
As long as those conditions occur, then a receiver cannot deny receiving the funds.
While I agree, payment proofs could be created that pretend to prove that the funds were sent to a different address (that you control), that is not a problem a payment proof is meant to solve. We’re not trying to prevent people from claiming false ownership. Payment proofs only need to prevent the receiver from falsely claiming they never received the funds.
If you care to prove you were sent funds, that’s already quite simple to do: Just prove ownership of the output in the transaction.
Is there a way to trick the assumption: A receiver with a known skatepark address? In court cases for example
It seems like the solution with directly using the partial excesses as public keys allows to not use that assumption. We can also think of the receiver wants to prove that the sender sent him coins for this kernel.
Depending if the activity is illicit (drug, sex workers, etc, for example) different payment proofs with different strengths might be interesting to have to avoid people creates problems to other people.
In fact, at some point, due to grin doing payment proofs being attained through offchain protocol , we might think on how to create different payment proofs (by package) for different usecases.
Might be a usecase of interactivity in grin: the possibility to offer different payment proofs alllowing to prove, or not, different things according to if the tx is legal activity or illegal activity, and so on and so forth. People would agree on which payment proof they use. In any case, it would be helpful first probably to be able to opt out easily from payment proofs in some cases as well (illegal activities)
Are you sure you can really do that and that it is quite simple?
Sure you can prove that you own outputs that are in the same block of the kernel, but I have the impression that nothing can ensure to people that those outputs belong to the considered transaction.
Not even talking about the merkle proofs and stuff that you would have to have.
|
OPCFW_CODE
|
Why This Site is Ugly
The majority of my WordPress Hosting Clients come from talks I give to private Mastermind Groups.
Mastermind Participants only care about results, rather than pretty sites.
Thus this site provide terse information.
If you’re looking for pretty, best look elsewhere.
If you’re looking for Hosting built for Speed and Stability and Security and SEOity you’re the right place.
Current Focus - WordPress Hosting and Tuning
I’ve been generating online income since 1994 and run several companies.
My current focus revolves around Tooling WordPress sites build for extreme Speed and Stability and Security and SEOity.
New Sites - 5,000+ requests/second.
Existing sites - similar numbers and usually requires replacing certain plugins and refining code design/implementation approaches.
Reaching these numbers is fairly easy, if the entire WAMPL Stack runs on servers which are continually tuned, sometimes every few minutes.
|WAMPL Stack - WordPress and Apache and MariaDB and PHP and Linux.
MariaDB is a drop in replacement for MySQL, with years of performance and security patches applied to code base, so you don’t have to build MySQL from source and apply these patches yourself.
I’ve been working with WordPress since it was B2 and Linux since it shipped on a stack of 3.5" floppy disks.
Over the years, I’ve developed a set of tools to install WAMPL Stacks inside LXC containers which are pre-tuned to reach high speeds.
I only work on sites I host. This is the fastest and cheapest way I can achieve…
Extreme WordPress Speeds
Maintain Draconian/Brutal Security
Realtime Track Onsite SEO Factors
I started hosting sites because usually $5,000-$10,000+ was required to bring existing hosting environments to a level which allowed me to start tuning sites.
More cost effective for my clients to host with me, rather than remediate (resolving deficiencies) an existing hosting environment.
Only contact me if you have a serious project. If your hosting budget is <$100/month, host elsewhere till your traffic and conversions increase to where you require Fast Stable Hosting.
Then Skype me at ID davidfavor and we’ll talk about your project. When you send your Skype Add Contact request, include how you found me (who referred you) and what we’ll be talking about. This will allow me to determine you’re a real human, rather than Skype Bot or Skype Scammer.
WordPress Plugins For Improved Performance
This zero config plugin can dramatically reduce server load and dramatically increase overall site speed, if any theme or plugin you’re using includes the poorly thought out and implemented retina.js library. Retina Stripper README covers technical details. Reading will cure even the most advanced insomnia.
Blazing Fast Websites
In 2005, I retired from onsite client work and began using what I learned starting from my first consulting gig in 1979 for my own projects.
In 2013, a general mutiny at a Keith Baxter Mastermind ensued between members where they demanded I make my tech available to them… so… to escape the room with my life I agreed.
At the time I thought the massive saturation of hosting companies meant there were many companies providing high speed hosting. I was wrong. Hardly any hosting company can deliver this simple premise.
|This became evident when I migrated my first client off an expensive Rackspace server to a $100/month (well tuned) server and their WordPress site throughput increased from 0.66 requests/second to 5000+ requests/second.
I still run Sun Fire Super Foods so I have good food on my table.
I still speak about Radical Health and Wetware Hacking and Redoing Yourself in backroom Masterminds and underground conferences.
And for the rest of my life, my primary focus will likely be turning my private hosting services into public facing services.
How I do this is simple.
Use http://Ubuntu.com for host system environment.
Use LXC containers (Ubuntu or Alpine) for sites.
Run LXC containers on specific server machine hardware…
Many CPUs, to reduce context switching. This is where a CPU has to suspend and point to another place in memory to execute new code. Many slow CPUs always run faster than few fast CPUs for Websites.
Have massive memory available and tune sites to always run out of memory, so only disk reads for sites occur on first visit to a page. Only disk writes occur when a database record is updated.
Pretty simple to understand. Most sites are disk bound. They constantly thrash disk, meaning same records are read over and over to render a site page view.
Running out of memory simply circumvents all these slow disk operations.
How Much Will This Cost You?
Sites I host range from $100/month to $1,000/month on average. I have hosted some sites at $10,000/month for a client which had to deal with 500,000+ uniques/hour for November and December of one year.
Site cost relates to resources required to run a site. This has very little to do with page views or uniques. If a site is tooled well, very little resource is required to deliver content pages. Tooled poorly, some low traffic sites require massive resources.
In general, if you’re paying $100+/month I can assist you. If you’re paying $1,000+/month likely I’ll help you sleep way better at night as providing similar cost services from me, will likely increase your conversions and produce a far more stable site.
|
OPCFW_CODE
|
BasKet - This is one of my favorite KDE applications. BasKet is called by some a killer application for Linux, due to its completeness regarding features and a different approach compared to other notes applications.Read more »
THERE was a degree of hostility between Tomboy and Gnote. Part of it was to do with licensing issues (one complainer was Jo Shields), but as the following shows nicely, it would be hypocritical for Mono proponents to whine about this from now on.Read more »
Before I begin this essay, I would like to go ahead and pre-empt any attempts to make me out like I’m a Mono shill. I’m far from it. The only application that uses Mono that I found remotely useful was F-Spot. I see no point in running Tomboy when I can just use Gedit (since files are opened in multiple tabs, just like EditPad).Read more »
If you want to remove Mono for whatever reason, then this is the ultimate guide. Simple straight forward steps to completely remove Mono.Read more »
Your editor has long been a user of the Tomboy note-taking tool. Tomboy makes it easy to gather thoughts, organize them, and pull them up on demand; it is, beyond doubt, a useful productivity tool. But all is not perfect with Tomboy. Some people have complained about its faults for a while; Hubert Figuiere, instead, chose to do something about it in the form of the Gnote utility.Read more »
More steps are taken to leave Mono behind and move just GNOME/GTK forwardRead more »
A FAVOURITE project of ours, Gnote, has just achieved another major milestone/release, which constitutes a potential migration away from Novell's troublesome Mono. The new software is already being pushed for inclusion in Ubuntu 9.10 (Karmic) and Debian GNU/Linux too.Read more »
FOR those who are looking to detoxify their GNU/Linux distribution which contains GNOME, Gnote 0.3.0 is finally out.Read more »
The new release of Gnote and its significance. A former Novell engineer has just released a new version of Gnote, whose great value we explained in [1, 2, 3, 4, 5, 6]. The project is growing quickly because it's mostly a constructive port to Microsoft-independent grounds.Read more »
This is a review of three of the most popular notes-taking applications for Linux: BasKet, Tomboy and KNotes. I included the screenshots below the reviews, at the end of the article.
In my opinion, this is the most powerful and beautiful notes application I've ever seen. The last stable version is 18.104.22.168 for KDE3, but a port is in the works for KDE4 too.
|
OPCFW_CODE
|
Pointers in C : Pointers in C What is a variable? : What is a variable? Each variable must be defined before you can use it inside your program.
Did you ask yourself why we have to declare variables?
First reason is To allocate memory
Second reason is Instruct compiler how
to treat this memory
location ML? As int or
float... What is a variable? : What is a variable? int main()
int x = 16;
float y = 2.3;
char z = 70;
} Three MLs have been allocated. The first ML holds int value. The second ML holds floating-point number. The last ML holds one byte integer. Sometimes we say: the content of ……… is ………. Run this code ? : Run this code ? int main()
float num = 1.0;
printf(“num = %d\n”, num);
} What is a variable? : What is a variable? A variable is a named memory location.
Variables provide direct access to its memory location.
Can you understand what will happen when you write:
x = 44;
y = x; Memory Addresses : Memory Addresses Each ML has an address ? each variable has an address.
Memory address in 32-bit Machines is 32-bit unsigned integer number.
How to get the address of a variable?
&variable_name Run this code ? : Run this code ? int main()
float num = 1.0;
printf(“Address of num is %u\n”, &num);
} Pointer Variables : Pointer Variables Ask yourself:
What is an integer variable?
What is a floating-point variable?
What is a character variable?
Now, what is a pointer variable?
I will answer: is a variable which holds an address of a ML. Pointer Variable : Pointer Variable Assume ptr is a pointer variable and x is an integer variable x x = 10 10 ptr = &x &x Now ptr can access the value of x.
Write: *variable . For example:
Can you tell me why we put %d???? Declaring a Pointer Variable : Declaring a Pointer Variable In previous example, ptr points to the variable x. We say that ptr is an integer pointer.
Similarly, we have character pointer, floating pointer, long pointer, … . Declaring a Pointer Variable : Declaring a Pointer Variable To declare ptr as an integer pointer:
To declare ptr as a character pointer:
char *ptr; Run this code : Run this code int main()
x = 10;
ptr = &x;
*ptr = *ptr + 1;
printf(“x = %d\n”, x);
} What is Asterisk ( * )? : What is Asterisk ( * )? char *z = *x*y;
|
OPCFW_CODE
|
Subject: Re: Sendmail and anti-spam
To: John Nemeth <firstname.lastname@example.org>
From: Andrew Brown <email@example.com>
Date: 02/28/1999 23:50:30
>} i recommend a configuration where the mc file contains
> This is a very bad idea. Since anybody can create an MX record
>for their domain that points at your mail server, it would open you up
>to uncontrolled relaying.
no...it would require them to poison your name server with mx records
that point to the domains they wish to spam. so, for aol, they'd have
to spoof an answer to my name server that said i was a redundant mx
host for aol.com. tricky at best. and usually much beyond the
abilities of lame-brained spammers, even with "5|<r1pt |<1dd13" tools.
Turns on the ability to allow relaying based on the MX
records of the host portion of an incoming recipient; that
is, if an MX record for host foo.com points to your site,
you will accept and relay mail addressed to foo.com. See
description below for more information before using this
feature. Also, see the KNOWNBUGS entry regarding bestmx
the known bugs thing refers to the mx list being truncated. so there,
i'd only lose some of the mx records. but if someone is using that
many mx records (or just really long names) and listing me in one of
them, then they can afford to lose me as a relay.
yes, they could set up an mx record that listed me as a redundant mx
host for the domain (singular) that they wanted to spam, if they had
control of the name server in question. but then they'd only be able
to spam themselves. either that, or they'd be open to computer
trespassing charges for breaking into the name servers for the domain
they were spamming. that's another step they haven't (yet) taken.
>} since that will allow the least amount of reconfiguration for most
>} people. without that, all the domains for which your host is a
>} secondary (or other) mx host for a zone will have to have all those
>} zones listed in its /etc/mail/relay-domains file. which is a pain.
> It's also the only way to prevent your server from being used for
yes, but it's incredibly tedious. and it adds yet another possible
"point of failure" when setting mail and dns service for someone.
this is yet another place where something needs to be changed/added.
for example: given the number of mx records out there that list
mail.uu.net as a redundant mx host, i wonder if uunet will *ever*
close down that relay point. :)
|-----< "CODE WARRIOR" >-----|
firstname.lastname@example.org * "ah! i see you have the internet
email@example.com (Andrew Brown) that goes *ping*!"
firstname.lastname@example.org * "information is power -- share the wealth."
|
OPCFW_CODE
|
How do I say 'people insist on' in the passive voice?
If I have 'read guides where people insist that...' then how do I use that in the passive voice?
'In guides I have read, it is insisted that...'?
'In guides I have read, it is insisted upon that..'?
Or is it something different?
Why would you want to do this? You could say "I have read guides in which it is insisted that ...", but this is much more awkward than "I have read guides which insist ..."
I agree that the construct Peter Shor suggests is a lot more natural than the ones you have thought of.
Comparing a couple of obvious forms in Google Books, it's clear widely insisted is sometimes used, though it's rare by comparison with widely claimed. A quick scan of the instances suggests "upon" is slightly more likely to be present than not, but my feeling is that usually it's just a stylistic choice whether to have "upon" or not.
Here X is the thing insisted. For example, X could be "visitors to Italy must try the delicious local pizzas".
You could say, "There has been insistence that X". Or "That X has been insisted".
And your structure also works, "... it has been insisted that X".
You can prefix or postfix any of my examples with "In guides that I have read", "In some guides", and so on. For example:
In guides that I have read, it has been insisted that X.
It has been insisted that X in guides that I have read.
In some guides, there has been insistence that X.
It is insisted, in some guides, that X.
These are all acceptable, though some are awkward.
If you really wanted to, you could say it is insisted that and you would have the support of at least one citation from the OED (Oxford English Dictionary). It is insisted upon that, however, has no such support. In any case, both sound very formal and are unlikely to be used in contemporary English.
I edited my question to reflect my real wonder - the passive voice
And I've now shortened my answer to reflect your edits.
I've nothing to back it up apart from gut feeling, but I suspect upon is more likely to be present when the insistence concerns something that must be done rather than something that must be true. I think insisted is a bit unusual, but no more "formal" than, say, claimed. It's just the passive voice that's formal.
@FumbleFingers: I share your visceral inclination. The passive is formal in such constructions, but it is not invariably so.
It seems that in OP's construction he's going to continue with the clause defining what's being insisted (upon), which I think requires at least the word "that". But NGram suggests it's comparatively unusual to precede it by "upon", so although I've already upvoted your answer I'm inclined to think you should answer the specific question and recommend he uses it is insisted that.... Whatever - your call.
It depends on the exact sentence, specifically on whether the construction is "insisted that" or "insisted on"; your question mentions both.
Here are two examples:
I have read cookbooks where people insist that you sift your flour twice.
I have read cookbooks in which it is insisted that flour be sifted twice.
and
I have been to countries where people insist on eating dinner at three in the afternoon.
I have been to countries where eating dinner at three in the afternoon is insisted upon.
I for myself would probably use an expression like
'... read guides where people would insist that ...'.
Grammatically, this may not be the correct way to transform into past tense. However, it seems to convey the meaning appropriately enough, assuming you are dealing with literary writing and not technical material.
Terribly sorry - I meant passive voice, not past tense. I have corrected my initial question
Insist is an intransitive verb, and only transitive verbs may be Passivized, except under very special circumstances. Of which this is not one.
Semantically, insist means to make a statement (with optional That-clause) or to issue an order or demand for action (with optional gerund), but with the additional information that the statement was strongly affirmed by its speaker -- or the order strongly delivered, and probably repeated, and that the speaker was convinced of the truth of the statement or their authority and determination to see the action performed, and tried to convince the listener(s) of its truth or necessity.
Syntactically, insist requires a volitional human subject and takes two types of complements:
a tensed That-clause, with its own subject, denoting the content of the statement:
She insisted that the cat was hanging from the chandelier.
a transitivizing preposition on, which may take an NP object or a gerund referring to the the statement or proposed action.
She insists on it. ~ She insisted on that one.
She insists on him/his driving his own car.
If the gerund is subjectless, the subject is interpreted to be the same as the subject of insist (i.e, insist on takes a Gerund with Equi-NP-Deletion)
She insists on raising her own tomatoes.
The That-clause or gerund can be passivized, of course, if they're transitive:
She insists on being driven everywhere.
She insists that he was hit by a train.
But the only passive I can find for insist is with the preposition on
Ordinarily we wouldn't do that, but this was insisted on by the top brass.
and only really works with a small (ideally pronoun) pre-Passive preposition object
??That window treatment over there was insisted on by the top brass.
*For the chauffeur to drive her everywhere was insisted on by the old lady.
*That the world would end tomorrow was insisted on by the Mayan scholar.
Whether a particular predicate can be Passivized depends on a number of things, including transitivity, and a nascent transitive verb construct like insist on hasn't grown a whole lot of transitivity muscles yet -- just enough to support a pronoun, it seems.
The two OED citations I mentioned in my own answer are ‘It was insisted, that the testator had restrained the estate of inheritance during her life' (1805) and ‘This condition should be first humbly insisted on’ (1702).
Extraposition is necessary to save the first citation (and I really didn't want to get into Extraposition; this was too long already), and it's legal language, which is capable of anything. I wouldn't call it good Modern English. The second one is a passive of insist on, but the word placement, the use of humbly, the formal terminology, all shout that the sentence is 300 years old and depicts an ancient social hierarchy and ancient linguistic means of dealing with it. Again, nothing I could recommend for ModE speakers.
Historically, Ngrams shows that the use of "insist" in the passive was not particularly rare, although this usage has indeed been declining since the 1940s. So I'd classify "insist" as one of the exceptions to the rule that only transitive verbs can be put into the passive.
OK. Although I'd rather say that insist can, like most intransitive verbs, be used transitively in certain constructions.
|
STACK_EXCHANGE
|
<?php
namespace App\Utils;
class ImageMontage
{
/*
$templateFilePath = resource_path() . '/assets/props/11.png';
*/
public static function montageBookFrame($baseimage, $templateFilePath)
{
$src_width = imagesx($baseimage);
$src_height = imagesy($baseimage);
$dst = @imagecreatetruecolor(671, 412);
$dst_width = 671;
$dst_height = 412;
$new_width = $dst_width;
$new_height = round($new_width*($src_height/$src_width));
$new_x = 0;
$new_y = round(($dst_height-$new_height)/2);
$next = $new_height < $dst_height;
if ($next) {
$new_height = $dst_height;
$new_width = round($new_height*($src_width/$src_height));
$new_x = round(($dst_width - $new_width)/2);
$new_y = 0;
}
imagecopyresampled($dst, $baseimage , $new_x, $new_y, 0, 0, $new_width, $new_height, $src_width, $src_height);
$baseimage = @imagecreatetruecolor(766, 586);
imagecopy ($baseimage, $dst,53,88,0,0,766,586);
$effect = imagecreatefrompng($templateFilePath);
imagecopyresampled ($baseimage, $effect,0,0,0,0,766,586,766,586);
return $baseimage;
}
/*
$templateFilePath = resource_path() . '/assets/props/16.png';
*/
public static function montageCollageFrame($baseimage, $templateFilePath)
{
$canvaswidth = 600;
$canvasheight = 451;
//desired max width and max height of the uploaded image
$dst = @imagecreatetruecolor(599, 450);
imagefill($dst, 0, 0, imagecolorallocate($dst, 255, 255, 255));
$src_width = imagesx($baseimage);
$src_height = imagesy($baseimage);
$dst_width = 599;
$dst_height = 450;
$new_width = $dst_width;
$new_height = round($new_width*($src_height/$src_width));
$new_x = 0;
$new_y = round(($dst_height-$new_height)/2);
$next = $new_height < $dst_height;
if ($next) {
$new_height = $dst_height;
$new_width = round($new_height*($src_width/$src_height));
$new_x = round(($dst_width - $new_width)/2);
$new_y = 0;
}
return $baseimage;
imagecopyresampled($dst, $baseimage , $new_x, $new_y, 0, 0, $new_width, $new_height, $src_width, $src_height);
$baseimage = @imagecreatetruecolor($canvaswidth, $canvasheight);
//delete the following if you do not want a rotation
//position of the resized uploaded image, 0 is the left position and 200 the top position
imagecopy ($baseimage,$dst,3,3,0,0,$canvaswidth,$canvasheight);
$effect = imagecreatefrompng($templateFilePath);
/*for debug
//uncomment the following and comment the next imagecopyresampled to see the transparency and edit the position easily
//imagecopymerge ($finalimage,$effect,0,0,0,0,$canvaswidth,$canvasheight,50);
*/
imagecopyresampled($baseimage, $effect,0,0,0,0, $canvaswidth, $canvasheight, $canvaswidth, $canvasheight);
return $baseimage;
}
/*
$templateFilePath = "../props/9.png";
*/
public static function montageConferenceFrame($baseimage, $templateFilePath) {
$canvaswidth = 878;
$canvasheight = 586;
//desired max width and max height of the uploaded image
$dst = @imagecreatetruecolor(430, 300);
imagefill($dst, 0, 0, imagecolorallocate($dst, 255, 255, 255));
$src_width = imagesx($baseimage);
$src_height = imagesy($baseimage);
$dst_width = 430;
$dst_height = 300;
$new_width = $dst_width;
$new_height = round($new_width*($src_height/$src_width));
$new_x = 0;
$new_y = round(($dst_height-$new_height)/2);
$next = $new_height < $dst_height;
if ($next) {
$new_height = $dst_height;
$new_width = round($new_height*($src_width/$src_height));
$new_x = round(($dst_width - $new_width)/2);
$new_y = 0;
}
imagecopyresampled($dst, $baseimage , $new_x, $new_y, 0, 0, $new_width, $new_height, $src_width, $src_height);
$baseimage = @imagecreatetruecolor($canvaswidth, $canvasheight);
//delete the following if you do not want a rotation
//position of the resized uploaded image, 0 is the left position and 200 the top position
imagecopy ($baseimage,$dst,218,110,0,0,$canvaswidth,$canvasheight);
$templateFilePath;
$effect = imagecreatefrompng($templateFilePath);
/*for debug
//uncomment the following and comment the next imagecopyresampled to see the transparency and edit the position easily
//imagecopymerge ($finalimage,$effect,0,0,0,0,$canvaswidth,$canvasheight,50);
*/
imagecopyresampled($baseimage, $effect,0,0,0,0,$canvaswidth,$canvasheight,$canvaswidth,$canvasheight);
return $baseimage;
}
}
?>
|
STACK_EDU
|
Thhis example shows some of the latest changes in dvd-slideshow.
# Examples for dvd-slideshow 0.7.5
# This example shows off some of the new features available in
# 0.7.5 through 0.7.2:
# different configuration variables and reading method
# Comments in all lines
# Smooth scroll effect in high-quality mode
# Kenburns zoom velocity fix
# chapter marker keywords
# sequential audio files
# using \n in subtitles to force newlines
# Subtitles stay constant between slides if the same subtitle is specified
# And, some older effects that aren't in the other examples:
# Copy original images into output dvd filesystem directory
# First, set some variables:
debug=0 # debug
pal=0 # use ntsc mode
ac3=1 # use ac3 audio instead of mp2
copy=1 # copy all images into a directory in output dvd filesystem directory.
autocrop=1 # make images that are very near the dvd aspect ratio fill the screen.
# Note that autocrop doesn't have an effect on portrait-orientation pictures.
# NOTE that the font options are not nice now, so the syntax will probably
# change to be more flexible. E-mail me if you have suggestions.
bottomtitle_font_color=blue # or use hex "#RRGGBB"
bottomtitle_bar_location_y=156 # relative to bottom of image
bottomtitle_bar_height=55 # 0 for no 50% white behind text
# note that tenths or hundredths or thousandths of seconds can be specified:
# Just display the top title. Lower titlebar will not show:
titlebar:4:Top Title # add another comment if you want.
# Just show the lower title. Top title will not show.
# show both titles:
toptitle_font_color=green # change the top font color
toptitle_bar_height=0 # no white background this time.
# note that variables need to get set BEFORE the crossfade because
# crossfade actually renders the title slide in order to use it.
titlebar:4.72:Top Title:Lower Title
# now let's start playing the audio on track 1:
# If you want the audio files to play one after another, you can just put them
# one line after another and they will be concatenated...
# display a picture and set a chapter marker here. By using the "chapter"
# keyword, you automatically set dvd-slideshow to use only your manual chapter
# markers instead of the default at each picture.
# The simple title can have \n characters to mark newlines:
title:4.72:Second part of\nmy slideshow
# set another chapter marker
pano_small.jpg:3:cool panorama picture
# zoom in to left side of panorama (no chapter marker)
pano_small.jpg:2::crop:imageheight;left # pause for a second
# now, the scroll effect is much smoother if you use the high-quality switch:
# -H when invoking dvd-slideshow. It will only be smoother if your scroll
# is quite slow (moving slower than about 4 pixels per frame)
# the slower you scroll, the more you can notice the difference.
pano_small.jpg:3.5::crop:imageheight;right # pause for a second
# you can force a newline in subtitles by using \n
picture2.jpg:4.5:First subtitle line\nSecond subtitle line
# note, that if you specify the same subtitle for adjacent slides, the
# subtitle will stay the same now.
picture1.jpg:3:My trip was great!
crossfade:1:My trip was great!
picture2.jpg:3:My trip was great!
pano.jpg:6:My trip was great!
# so, the subtitle will be displayed constantly for 6 seconds, then turn off for one
# second, and then come back on for 4 seconds...
background:2:This is the background
|
OPCFW_CODE
|
At Runn, time never stands still. We continuously discover and implement new products and features. We iterate and release fast and don't shy away to throw out stuff that's not working.
Here's a glimpse of what we have coming up.
If you have feedback, a suggestion, or want to upvote a feature, please get in touch via our chat or send us an email to firstname.lastname@example.org. We'd love to hear from you.
To see all the great things we've built recently, head over to Just launched!
- SAML SSO. Support for SAML SSO via our enterprise customers. Manage your users security internally and control their access via Active Directory.
- Import Timesheets via CSV. You can already import timesheet data via the API, next you'll be able to import them via CSV as well.
- Chrome extension time tracker. A chrome extension to allow users to record time on projects as they are working on it.
- Weekend support. Let people schedule time on weekends.
- Custom work weeks. Support for work weeks other than Monday - Friday.
- Retainers. Ongoing maintenance projects that repeat.
- Cost and profit budgeting. In addition to revenue (what you're billing the client), we want to constrain the project around total cost or profit margin. Useful for non-billable and fixed-price projects.
- Phase based reporting. Link work to phases for a more detailed breakdown of where project time and money is being spent.
- Scenario planning. Test out different scenarios by selecting which tentative projects will get included in our capacity and billing forecasts.
- Supercharged search. With Runn you can already omni-search on many different categories and next it'll be supercharged to give more options and a visual guide.
- Jira integration. Allow projects to be synced between Jira and Runn.
- Public holidays. A more streamlined way of managing public holidays.
- More powerful reporting. Graphing, segmenting, drilling down. There's a lot more power we can add to Runn's existing reporting to make it even easier to get the data you need.
- More granular timesheets. Support for multiple entries per project, per day.
- Daily, weekly, and monthly rates. Tired of dividing your day rates into hours? We'll do it for you and calculate your project financials accordingly..
- Invoicing & billing milestones. Allow invoiced data to be captured and reported on.
- Attached assignments to phases. Get more granular reporting for projects by allowing budgeting for and attaching assignments to phases.
- Custom fields. Additional custom fields for people and projects that our users can search, filter, and report on.
- Project templates. Streamline adding new projects with templates.
- Moving projects. A more streamlined way to move a project and all it's assignments and phases once it's been planned out.
- Webhooks. Lets you streamline your integrations.
- Xero integration. Sync your invoicing data.
|
OPCFW_CODE
|
Flicker is a result of too slow refresh. You need to refresh each segment at a few 100 Hz minimum. However, there are some tricks that can reduce apparent flicker while not actually doing faster refresh. The naive approach is to refresh the digits in order. But, if you alternate them a bit, the whole number will appear to flicker less. For example, do digits 1, 3, and 5, then come back and do digits 2, 4, and 6.
Without knowing the processor and seeing the source code, it's impossible to say whether the vendor is trying to string you along or the mess really needs to be re-written. Keep in mind that 99% of firmware engineers write horrible firmware. There could be hard coded constant all over the place that make assumptions about the clock frequency, the LED refresh rate, etc. With well written firmware, increasing the refresh rate assuming the processor has the necessary cycles already should be easy. With badly written firmware, it could be a lot more trouble than to ditch the mess and write it right.
How come the original designer didn't address the flicker? Perhaps the firmware is so badly architected that simply increasing it wasn't possible? If the flicker is that obvious, then why was the product ever created the way it is? That alone makes it likely the original designer made a mess. If he could have easily fixed it, he probably would have.
The really funny thing is now you're doing it again. You are going overseas because you want to keep costs down. Good design costs real money, but bad design costs much more. Even though you have been bitten by that, you have still not apparently learned it. With good design in the first place you wouldn't be in this position, and even if you were, it should be easy to change. There is no excuse for changing stored audio not being a simple operation.
How do you know if it's a bad idea or not to change the microcontroller and the circuit if you don't know what either are? Buying engineering strictly on price is the most expensive way to go.
Added in response to comments:
I don't remember where I heard about refreshing digits non-sequentially, but I have tried it and found it to help. I think it works for the same reason interlaced TV appeared to flicker at the field rate instead of the frame rate. For NTSC, the whole picture was redrawn at 30 Hz, but the apparent flicker was 60 Hz because of the interlacing refresh. You're not going to get 2:1 like that by interlacing digits, but it does help.
No, 60 Hz is not fast enough, not even close. 60 Hz is about where most people don't see flicker anymore for a square wave. Someone staring directly at a LED driven 50% of the time at 60 Hz may not see the flicker, but that's not the only way people perceive it. Unless you only have two digits, the LEDs will be on brighter for a smaller fraction of the time, which makes the flicker more apparent. The center of your retina is the slowest in responding. You will notice flicker more at the periphery of your vision. However the real objectionable part is when you move your eyes. Flicker is easily apparent at 60 Hz. You can't make the flicker invisible due to this phenomena, so the problem is to make it less annoying. 60 Hz is still quite annoying for most people. As I said, you want a few 100 Hz at least. If you have to pick a number, I'd start by trying to achieve at least 500 Hz.
As for getting good engineering, that's a whole topic on its own. There is nothing inherently wrong with going overseas. Competent people live in various places. The issue is first to recognize that bad design will cost a lot more than hiring a top engineer to do it right in the first place. Second, you have to realize that finding and vetting engineering talent takes some work. You're going to spend 1000s of $, probably 10s of 1000s of $. Treat it like other purchase decisions of that magnitude. Ask around, interview, get references and actually follow up on them.
As long as you're serious and the job is real, I'd say you have the right to expect around 2 hours of initial consultation before any committment is made. Keep in mind that goes both ways. Part of this time is for you to evaluate the engineer, but of course the engineer is evaluating you too. They are trying to decide whether this job fits in line with what they want to be doing, whether you are going to be a pain in the butt customer, etc. Either way, there should be plenty of time to get into the requirements and talk about initial impressions of what path the engineer will persue towards the solution. This should tell you a lot about how they think, how much they just implement whatever you told them versus drilling down and trying to get at the real problem and making sure that is solved, suggesting alternate solutions, etc.
None of this says the engineer can't be oversees, but it does make logistics and good evaluation difficult. If you have a couple of strong recommendations from people you trust, then that helps a lot. If you're logic is only that Bob in Boston wants $130/hour and is estimating 4 weeks while Naresh in Bangalore wants $35/hour and can do it in 2 weeks, you're headed for serious trouble.
|
OPCFW_CODE
|
Hypercet Blood Pressure Formula is considered to be an insider's Hypercet Blood Pressure Formula, but its popularity has Hypercet Blood Pressure Formula in the recent past - more and more users are Hypercet Blood Pressure Formula huge Hypercet Blood Pressure Formula with Hypercet Blood Pressure Formula and sharing their successes.
Numerous user opinions suggest that Hypercet Blood Pressure Formula help you improve Hypercet Blood Pressure Formula health. However, that sounds too good to be true. For this reason, we have decently reviewed the remedy and the result, the dosage and its application. Read all the final results in this review.
What is generally known about the product?
Hypercet Blood Pressure Formula is based on natural substances & has been extensively tested by many people. The product is cheap & almost never has side effects
On top of that, the provider is extremely trustworthy. The acceptance is executable without prescription and can be processed because of an SSL-encrypted line.
Now a list of processed ingredients
With Hypercet Blood Pressure Formula it is mainly the contained ingredients, as well as, which are important for the majority of the effects.
Likewise, and in health issues, traditional medications are incorporated into several nutritional supplements.
Dosing is important, many products fail here, but this is not true for the product.
While some consumers may feel like an unfamiliar choice, if you look at current research, this substance seems to be opportune to achieve more health.
Let's briefly summarize:
Prudent, well-balanced drug concentration and helps with other ingredients, which also contribute to the effective improvement of health.
These benefits make Hypercet Blood Pressure Formula an outstanding product:
Our countless outgrowths of Hypercet Blood Pressure Formula clearly guarantee: The excellent effect makes the purchase decision extremely easy.
- You do not have to rely on uncertain medical procedures
- The best possible compatibility and ease of use allow the fully organic ingredients or ingredients
- You avoid going to the pharmacy & a humiliating conversation about a solution for keeping healthy
- You do not need a prescription from the doctor, because the product can be ordered without medical prescription and simply inexpensively on the Internet
- On the occasion of private Internet ordering, none of your problems need to get something
How is the effect of the product?
The effect of the product comes as expected through the interaction of specific ingredients to create.
For this, the sophisticated function of the human body makes it a particular advantage, by taking advantage of these already existing processes. So it makes more sense than Vivese Senso Duo Oil.
After all, the body has the utensils to improve health and it's all about getting those same processes to start.
According to the paver, the effects shown here are thus exciting:
These are the mentioned effects that are possible with the product. However, it should be clear that, as expected, those results may be significantly stronger, or even gentler, depending on the buyer. Only an individual proof will bring clarity!
These user groups should Hypercet Blood Pressure Formula from using Hypercet Blood Pressure Formula :
It is totally simple:
In the following situations, I strongly advise against using this method:
- You have not reached the age of 18 yet.
- You do not want to incur expenses for your well-being.
- They are satisfied and want nothing changed.
This preparation seems to represent a comprehensive support for this problem.
The side effects of the product Hypercet Blood Pressure Formula
As already mentioned, Hypercet Blood Pressure Formula solely on components that are natural, carefully selected and wholesome. So it is commercial without a prescription.
Both the producer or reports as well as feedback in online traffic are unanimous: the product does not cause any side effects according to manufacturer, reviews and the network.
Surely, this is guaranteed under the condition, provided that you keep disciplined to the recommended use for the use, since Hypercet Blood Pressure Formula exceptionally gigantic effects.
In addition, you should make sure that you order the product only from verified sellers - follow our buying advice - to prevent counterfeiting (fakes). Such a wrong product, even if a favorable price at first glance may attract you, has usually no effect and in the worst case can be associated with immense risks.
advantages and disadvantages
- to buy only from the manufacturer
- works over time
Disadvantages of Hypercet Blood Pressure Formula?
- free shipping
- discreet mailing
- courteous service
- no prescription
- very cheap
- simple application
- full practicality
Now several interesting Hypercet Blood Pressure Formula about the use of Hypercet Blood Pressure Formula
You can Hypercet Blood Pressure Formula for a full 24 hours without anyone noticing. The way in which you use the article and gain the best possible experience is explained by the specific instructions for use - these are quickly explained and easy to implement
When are the first steps?
Repeatedly, the product makes itself noticeable after the first application and already in the period of a few weeks smaller successes can be achieved according to the manufacturer.
In studies, consumers have often attributed a resolute impact to the product, which initially lasts only a few hours. With long-term use, these results are consolidated, so that even after the end of the use of the results are tedious.
Surprisingly, users seem to be so obsessed with Hypercet Blood Pressure Formula that they will also be phased in after a few years for a few weeks.
It seems to be advantageous in the following, despite occasional news that testify to short-term results, to be serene and to use the product at least for several months. In addition, please refer to our help section for further information. This is exactly what distinguishes this article from other articles such as Varikosette .
Reviews for Hypercet Blood Pressure Formula analyzed
To be sure that a Hypercet Blood Pressure Formula like Hypercet Blood Pressure Formula works, it does not hurt to take a look at contributions from forums and other people's reviews. Unfortunately, there are very few clinical tests on this, because they are always given prescription only Means made.
To get an idea of Hypercet Blood Pressure Formula, we include clinical trials, reviews, and user statements. Exactly those interesting results we look at immediately:
As a result of these exciting advances, consumers are happy about the product:
According to expectations, it concerns only a few experience reports and Hypercet Blood Pressure Formula can strike each with varying degrees of Hypercet Blood Pressure Formula. On average, the results are fascinating and I think the result will be very satisfying for you as well.You can look forward to the facts without hesitation:
My conclusion: Try the product as soon as possible.
That type of high- Hypercet Blood Pressure Formula product, including Hypercet Blood Pressure Formula, is annoyingly too often on the market for only a short time, because of course, certain competitors do not enjoy seeing effective agents. You should therefore make a decision as soon as possible, before the opportunity is missed.
This opportunity to get such a powerful product through a trusted dealer and a fair purchase price is not often found. The website of the manufacturer can be purchased at the moment. In contrast to other sources of supply, you can rely on it to obtain the exact product.
What do you think: Are you determined enough to complete the program? If you question your ability, do not bother. The odds are, however, that you are spurred on enough to engage in the method and achieve your goal with Hypercet Blood Pressure Formula.
What Everyone Hypercet Blood Pressure Formula
Certainly should be avoided while shopping for bargain shopping in any untrusted online stores.
There, there is the risk of buying imitations that, ideally, do not change anything and usually damage the body as well. Incidentally, users are attracted with great special offers, which turn out to be a fraud on closer inspection.
Attention: If you decide to Hypercet Blood Pressure Formula, avoid dodgy third-party Hypercet Blood Pressure Formula! Trust in the linked supplier.
This provider remains the best option for buying the product because it gives you the best of the worlds - the lowest prices for the original item, an extensive customer service package as well as optimal delivery terms.
You should note this to get to the purchase of this product:
Avoid dangerous research sessions in Google and linking to this review. The editors do their best to always keep the links up to date so that you are guaranteed to order at the lowest cost as well as at very fast delivery terms.This is remarkable if you compare it with Saw Palmetto.
|
OPCFW_CODE
|
You can customize your workspace by saving and loading custom user interface (UI) schemes.
A custom UI scheme is saved as a set of six files:
- .cui: Stores toolbar and panel layouts.
- .clr: Stores all color settings (except quad menu colors).
- .mnu: Stores menu bar and quad menu contents.
- .qop: Stores quad menu colors, layout, and behavior.
- .kbd: Stores keyboard shortcut assignments.
- .ui: Stores the icon scheme (Classic or 2D Black and White).
You can load and save each of these files individually from their respective panels in the Customize User Interface dialog. You can also load an entire set of UI scheme files at once with the Load Custom UI Scheme dialog, and you can save the current UI scheme as a complete set with the Save Custom UI Scheme dialog.
By default, two sets of UI schemes are present in the 3dsmax\UI\ folder: maxstart and defaultUI. Upon startup, 3ds Max uses the maxstart file series if it exists; if not, it uses the defaultui series.
WarningDo not save over any files that begin with defaultUI, as doing so permanently overwrites the default UI scheme.
To load a custom UI scheme:
- Set up the custom UI scheme within 3ds Max using the options on the Customize menu Customize User Interface dialog.
- Save the custom UI scheme with Customize menu Save Custom UI Scheme.
- During your current 3ds Max session or any later session, choose Customize menu Load Custom UI Scheme.
- In the Load Custom UI Scheme dialog, select a type of customization file (.cui, .mnu, .clr, .kbd, .qmo, or .ui) from the Files of Type drop-down list.
- Choose any file with the appropriate extension. 3ds Max will search for (and load) any other type of UI scheme file with the same base file name.
If you choose a UI scheme for which one of the six file types is not present, the part of the user interface for which there
is no file will not be changed.
To return to the default UI scheme:
If you start 3ds Max and its user interface has an unfamiliar layout, you can always return to the default UI scheme.
- Choose Customize Load Custom UI Scheme.
- From the Load UI File dialog that displays, choose defaultui.cui, and click Open.
All the default UI files begin with the base file name defaultui. When you choose defaultui.cui, 3ds Max loads all default UI scheme files.
To start 3ds Max with a custom user interface:
- Arrange the user interface as you would like it to appear when you start 3ds Max.
- Choose Customize menu Save Custom UI Scheme, and save your custom UI scheme with the base file name maxstart.
The next time you start 3ds Max, 3ds Max uses the current UI scheme.
If the Save UI Configuration On Exit option on the Customize menu
Preferences General tab
is on (which it is by default), the state of the user interface when you close 3ds Max
overwrites the maxstart
UI scheme files.
To start 3ds Max with a custom user interface from the command line:
- Save your custom UI scheme with a descriptive base file name with the Save Custom UI Scheme dialog.
- Right-click the 3ds Max icon on the Windows desktop, and choose Properties.
- In the Target field, change 3dsmax.exe to 3dsmax.exe -c, followed by the name of your .cui file.
Example: 3dsmax.exe -c myfile.cui. Be sure to leave a space both before and after the -c.
If you want to move the UI scheme to a different computer, copy all the files in the 3dsmax\UI\ folder that start with the custom UI scheme base name to the new 3dsmax\UI\ folder. Alternately, you can add the path name to the command line.
To save a single UI scheme file:
- Choose Customize menu Customize User Interface.
- Access the panel for the type of user interface item you want to save.
- On the panel, click Save.
To change the icon display from Classic to 2D Black and White:
- Choose Customize menu Save Custom UI Scheme, enter a filename, and click Save.
- On the Custom Scheme dialog, next to Icon Type, choose the type of icon you want to display.
- Click OK to close the dialog and save the scheme.
- Choose Customize menu Load Custom UI Scheme and then open the UI scheme you saved.
- Load Custom UI Scheme
On the Load Custom UI Scheme dialog, you specify the base file name of the custom UI scheme you want to load. You can select
any type of UI scheme file from the dialog, and 3ds Max will load any other type of UI scheme files with the same base file
- Save Custom UI Scheme
This standard Windows file save dialog lets you save your customized UI scheme.
- Revert to Startup Layout
Revert To Startup layout automatically loads _startup.ui, which returns the user interface to its startup settings. This temporary
system file is created automatically when you start 3ds Max. Use this option to return the UI to startup settings.
|
OPCFW_CODE
|
Phases and to a global view, and defining the project's technical strategy the management activities include: setting priorities, defining objectives, project tracking and status reporting, change control, risk assessment, step wise commitment. Activities in the design phase of sdlc sdlc process renee fowler systems analysis and design/bsa/376 august 20, 2012 deborah j marshall sdlc process the system development life cycle (sdlc) starts when a project is planned for the implementation of an information systemexecutives of the organization make a decision for a new system or replacing or upgrading an old system and the project begins. The design phase is when you build the plan for how you will take your project through the rest of the sdl process—from implementation, to verification, to release during the design phase you establish best practices to follow for this phase by way of functional and design specifications, and you. 3 system design: in this phase, the physical system is designed with the help of the logical design prepared by system analysts the analysts and designers work together and use certain tools and software to create the overall system design, including the probable output.
The design document is a deliverable of the design phase documents from the previous phases are revised as necessary during the design phase in the system design, first the general system characteristics are defined. Traditional sdlc – phase and purpose phase 4: design describes how the proposed system is to be built the design is specific to the technical requirements the system will be required to operate in and the tools used in building the system. Developing test activities approval to progress to the design phase in the requirements analysis phase, you need to write the following documents: in general this plan is developed during the design phase of the software development lifecycle and updated during the development phase.
Security assurance usually also includes activities for the requirements, design, implementation, testing, release, and maintenance phases of an sdlc background a survey of existing processes, process models, and standards identifies the following four sdlc focus areas for secure software development. A systems development life cycle (sdlc) adheres to important phases that are essential for developers, such as planning, analysis, design, and implementation, and are explained in the section below a number of system development life cycle (sdlc) models have been created: waterfall, fountain, spiral, build and fix, rapid prototyping. Thanks for sharing such a knowledge about software development life cycle model ie from requirement gathering and analysis, design, implementation or coding, testing, deployment, maintenance that really helpful to understand the life cycle model of the software development. Because sdlc has several different phases, each student is designated a separate phase to fully explain its importance marge is assigned the design phase, which is the third phase in a set of six. Requirements for faster release cycles and applications packed with features make security testing in the sdlc a business imperative as early as during the analysis and design phases, as well as throughout development and, of course, during the testing phase align security testing activities to your current sdlc process.
Phase oi the sdlc uses the results oi the previous one a systems development liie cycle (sdlc) adheres to important phases that are essential ior developers, such as planning, analysis, design, and implementation, and are. The seven phases of the system-development life cycle the system-development life cycle enables users to transform a newly-developed project into an operational one the system development life cycle, sdlc for short, is a multistep, iterative process, structured in a methodical way. Phase 3: plan-for-design design execute phase 4 the project team should continue to engage the ea primes for assistance with all activities that require involvement from the ea group pmo will also assess the content of ppm to ensure the pm is updating the required areas / tombstone data such as sdlc stage, project end date risks and.
Given is the progress flow of water-fall process model in waterfall process model, the deliverable (document) produced in one phase serves as an input to the next phase suppose you are a project manager who is leading this project. Includes implementation of the design specified in the design document into executable programming language code the output of the coding phase is the source code for the software that acts as input to the testing and maintenance phase. A good design is a testable design imo, one needs to always be thinking of how one would test the software even during the design phase of course, the level of attention required would depend on whether you are doing detailed design or a high-level architecture. Need help with the design phase of your software project contact us we have used our unique business process improvement and software development methodology to design, develop, and implement hundreds of software projects for manufacturing, service, construction, and high technology companies.
System/software design phase in sdlc the purpose of the design phase is to plan a solution of the problem specified by the requirement document this phase is the first step in moving from problem domain to the solution domain. System development life cycle (sdlc) is a conceptual model which includes policies and procedures for developing or altering systems throughout their life cycles sdlc is used by analysts to develop an information system. A version of the sdlc which is a sequential, activity-based process in which one phase of the sdlc is followed by another, from planning through maintenance seven phases of sdlc planning, analysis, design, development, testing, implementation, maintenance. The planning phase requires far greater precision in estimating the size, complexity and impact of the expected system such detailed planning is only possible after definitive conclusion of the activities inherent in the design phase of the sdlc.
|
OPCFW_CODE
|
The HTTP Request Sender module in Xeoma IP security cameras software
Starting from Xeoma 14.5.19, released on May 13, 2014, IP security cameras software Xeoma offers to use its HTTP Request Sender module.
This module allows you to generate and send HTTP requests. It will work perfectly with smart home systems or supplement your home security system.
Here is a glossary of terms you are going to encounter throughout this article:
- URL* – Uniform Resource Locator, the address of a resource/device we are going to contact over HTTP. Sample:
- URN* – Uniform Resource Name, the name of the specific part of the resource/device (the one from URL) we’d like to communicate with. Sample:
- URI* – Uniform Resource Identifier, a unique combination of URL and URN that identifies both the exact part and the exact address of a resource/device. Sample:
*If you happen to know the intricacies of HTTP/HTTPS, it’s possible that you have a different way to define these terms – there are, indeed, multiple approaches. However, we chose this one for simplicity’s sake, since overall HTTP terminology is not the subject of this article.
If you want to send motion-triggered requests, connect the module after the Motion Detector. The module will send HTTP requests with configurable parameters to the specified address when it receives the signal from the previous module in the chain (this is called an “event”). For example, when the motion is detected, the module will send HTTP request to the “HTTP Switcher” module and it will enable other modules, for example:
Here we see that “HTTP Request Sender” sends requests to enable sound alarm when motion is detected. Please pay attention to the sensitivity level and the object size settings of the Motion Detector.
Let’s have a closer look at the HTTP Request Sender’s settings:
- Test – this button allows you to test the connection between the sender and the receiver of the HTTP request; if something goes wrong, it will show an appropriate error message.
- Resulting URL – this box will show the full text (URI) of the HTTP request, this is useful for troubleshooting, since you can see exactly what is going to be sent.
- User name (login) and Password – if the request’s receiver requires authorization, you can indicate the credentials here; if not – these can be left blank.
- Host name or IP address and Port – here you can input the request’s content, including the receiver’s address (direct IP addresses and DNS names are both fine), port and any other parameters (e.g. URN, HTTP or HTTPS).
- Show all parameters – when checked, this box shows extra spots for inputting parameter/value pairs, those will be added to the Resulting URL as
parameter_name=value, each pair separated from another with a “&” sign.
- Send – this drop-down menu decides when the request is to be sent: before, after or during the event; if “at a fixed interval” is chosen, you will see a slider allowing you to pick the time between such requests.
- Event name – here you can input the text you wish to associate with that request; this box directly relates to the next item on this list.
- Show on screen – when checked, this box causes Xeoma to show the Event name in the camera’s window; this is useful for both troubleshooting and user notifications.
- Sending method – this drop-down menu defines which HTTP method to use when sending the request; the following methods are currently available:
If POST is selected, Xeoma will also show a separate box for “Additional headers” – here you can indicate custom headers for the HTTP request.
Here is an example of a simple request:
This is an HTTP GET request without any authorization that sends one parameter (state=on) to the address 192.168.1.23 with this URN: “/protect/rb08.cgi” (“?” is used to separate the URN from the actual command). This request is sent only after the event is finished (i.e. the previous module in the chain started triggering and then stopped).
Here is a more comprehensive example:
This is an HTTPS POST request with authorization that sends to the URI
https://192.168.0.12/report a custom header (Extra-Token: 1122ggff123123) and one parameter (problem), whose value comes from the Problems Detector. This request is sent right as the event starts.
This request uses a macro
%REPORT%, read more on what they are and how to use them in this article.
21 May 2014; updated 01 October 2021
|
OPCFW_CODE
|
cmrukcom wrote: John,
this is similar to what we were looking for, but perhaps I think the other way around to what I was asking for. We have a complex build which is composed of many subassemblies - those subassemblies are composed of sub-subassemblies and so on down for 4 or 5 levels. A few of the subassemblies contain serial numbered components - either because they have to be tested by us, and certified, or because they have been bought in and have asssociated warranties with the original manufacturer. The subassemblies that contain serial numbered items are generally also themselves serial numbered (though I suppose they need not necessarily be) with the serial number being defined as WO#_Unique Letter (our batches are usually 1) eg 50002-1_A.
We would need a report that for a top level system build, shows all the serial numbered items contained within that specific build. This is for the scenario in which the customer phones up in 4 years time to say that a particular part is no longer working. With such a report it would be possible for us to pull up their system build, track the serial number of the problem component, find out when it was bought and if it is now out of warranty and to deal with the customer problem most effectively. I suspect but am not in a position to confirm, that functionality like this is almost essential for ISO 9000/9001 so it really needs to be something that can be done under the advanced serial number tracking proposed for the upcoming ver 3.0.
In its simplest configuration the report would probably only need a single input field - namely the serial number that one is looking to get a serial number report on. A more advanced version might allow one to select a particular item, a listing of all serial numbers under that item is listed, and this list can be drilled down into to see all the serial numbers enclosed within that particular top level item.
Note - In our situation I don''t think it is really necessary to have all the subassembly layers in the complex build visualised with indents in the serial number report - it might be sufficient, and perhaps clearer to read, if just a listing of item code-serial number was available. This way if one has a top level build made from 5 assemblies and each of these has 5 subassemblies and each of these 5 subsubassemblies (625 in all) and only 10 parts in the whole structure are serialised, then you do not end up with 99% chaff. Perhaps other users might comment. -- Read more at http://www.xtuple.org/phpBB2/viewtopic.php?t=2015
|
OPCFW_CODE
|
Python Performance: A Recommendation
Python performance and multi-threading have a bad reputation (outside of the Python community). We show when these are indeed issues (not always), search for solutions, and give some recommendation on how the community could focus on solutions. And you can contribute to the solutions!
Python is a dynamic language. The main implementation CPython, is interpreted (actually, it is compiled into code interpreted by a Python virtual machine). The Python interpreter uses a Global Interpreter Lock (GIL) to protect its structures from concurrent access.
Outside the Python community, you hear often that Python is slow, and the GIL sucks. Inside the community, the view is more positive, and indeed there is a wealth of approaches for High Performance Python. There are different approaches for I/O-bound problems and CPU-bound problems:
- I/O bound problems can make good use of multi-threading (where the GIL is released during I/O) or asynchronous programming.
- CPU-bound problems can be addressed by better algorithms (nothing beats an algorithm with less computational complexity), using array-based programming (NumPy), using various problem-specific packages written in a compiled language, or using Cython, a mix of C and Python.
- Application-level caches are also helpful, because no computation is always faster than the fastest possible computation.
Some approaches are quite complex, and unless confined to a small hotspot, the advantage of Python over less dynamic languages might get lost. That’s why one author calls it Python’s Hardest Problem.
The GIL constraint is removed when multiple processes are used, each with its own Python interpreter and GIL. This works nicely for problems that don’t require massive interaction between data or even massive amounts of read-only data.
In my own work with Quantax, the Swisscom Market Risk System, which is written in Python, we always face demand for increased speed. Using a lot of NumPy and many levels of application caches, we achieve about 25000 valuations of financial instruments per second on one core of a laptop CPU.
However, the price for this is complexity of cache invalidations, and complicated code to map the problem to NumPy.
We use processes at a relative coarse-grained level, as worker processes to calculate reports. The main issue of processes is the massive amount of common data the financial calculations require, leading to rather large memory consumption per process. However, there is rarely more than one logical process that modifies the objects (by changing transactions or rates).
If Python wants to be a viable programming environment for medium size applications with those characteristics, we need two things:
- Speed achievable by (just in time) compilation
- An efficient way to use dozens of cores while sharing memory.
With a large community, solutions are bound to come up:
PyPy shows impressive results with its JIT compiler (PyPy’s Architecture; Overview article). The Python programming ecosystem is large, with many packages coded in other languages. PyPy is slow in providing the whole ecosystem, and is still working on NumPy support.
PyPy also made a breakthrough progress replacing the GIL by Software Transactional Memory (STM), and getting that up to speed. This allows multi-threaded Python programs to use all cores, with acceptable overhead. It also provides an alternative to locking that may make multi-threaded programming less error-prone.
These are implementations that target a particular language runtime, Java VM for Jython and .NET for IronPython. They run without the GIL, because they map Python structures to the thread-safe structures provided by their platforms. They do not provide JIT compilation. Also, they do not uniformly provide all libraries of the CPython ecosystem.
PyParallel (long talk, slides, summary) is not a solution yet, but an experiment designed to circumvent the GIL in the standard CPython interpreter, under the very specific condition that threads running without the GIL do not modify any Python objects except for those created by that thread.
Other compilers: Pyston, Nuitka
- Nuitka is a full compiler, which uses the Python runtime to execute code. While admirably compatible with CPython, the achievable speedups seem to be limited to elimination of the parsing overhead and some limited static analysis.
- Pyston is a project to build a method-at-a-time JIT using LLVM as its code generator.
I admire these projects for their courage and stamina. Many of them are starving for resources (including money). I believe that PyPy is the most promising, most mature, and most complete of these efforts.
Let’s see what the community can do to foster it.
Python is mature, and has a large following, with a wide range of usage (ranging from short-lived glue scripts, ad-hoc analysis, Rasperry Pi applications to full blown applications such as Quantax, stacks (OpenStack) and major services such as Youtube and Dropbox).
The community is nice and helpful. There are strong opinions about what is “pythonic” (follows the Python way of doing things). This helps to keep Python conceptually simple (low conceptual overhead). There is central control over language features (by Guido van Rossum, the creator of Python) and over the reference implementation, CPython.
Switching to a new language infrastructure such as PyPy is even slower, for several reasons:
- For many purposes, CPython is fast enough.
- Many usage scenarios (especially Python as a glue language) use scripts with a short execution time, for which the warmup of a JIT can be prohibitive.
- Many libraries of CPython call C code; they’re either not available in PyPy or calling them is slow.
- CPython is available on exotic platforms or not so exotic platforms (64-bit Windows) that are not currently supported by PyPy.
All these are reasons why PyPy hasn’t replaced CPython, and will not do it in the near future. Therefore, I recommend a two-pronged approach, with focus on PyPy as the most promising new technology.
For Python to retain its current mainstream acceptance, protect its application code base, and defend its positions against newcomers such as Julia and conceptually complex languages such as Scala, I personally believe efforts should be focused on:
- CPython as the all-purpose interpreter, compatible with all legacy code and running short scripts efficiently.
- It’s is questionable why CPython needs to provide ongoing Python 3.x support – these folks have little incentive to upgrade anyway!
- PyParallel should become part of CPython, to provide immediate relief for those who need it and can live with its constraints.
- PyPy as the high-performance, STM-enabled reference implementation, where performance and modern techniques make a difference.
- Platform-centric implementations will remain a niche (or dead end when resources run out), or should be re-targeted to run on top of PyPy’s infrastructure.
If you are a Python user, you are part of the community, and might want to get involved. This is very welcome! As many other open source projects, PyPy is looking for financial contributions and for volunteer work:
- You may donate to general development, or to a specific topic such as STM.
- You may also become a developer – the best occasion for Swiss people (and ski fans) will be the upcoming Leysin Winter Sprint (20-28th February 2015).
- Windows system cracks may also be interested in helping to finally port PyPy to Windows 64.
|
OPCFW_CODE
|
Basic knowledge in operations analysis, mathematics, and statistics is required. Experience with R or a programming language, i.e., Python is a plus, but the course is designed that students get support in learning the software in use.
Overall learning objectives
•Describe and explain the subject of data science (“What it is and what it is not”)
•Explain and use the terminology in the correct way
•Describe the data science process and the interaction of its components
•Carry out a data science project
•Apply statistical learning algorithms
•Create effective visualization of given data
•Explain and communicate data analytics projects results
Learning objectives - Knowledge
•Knowledge of theory, methodology, and practice within data science specific areas such as mathematics and statistics
•Knowledge of data handling including data storage, data processing, and large-scale data analysis
•Knowledge of data analysis tools
Learning objectives - Skills
•Be able to understand each step of a data science project
•Be able to use systems for data management to clean, transform, and query data
•Be able to select and apply appropriate tools for data analysis
•Be able to organize, summarize, and visualize data and project outcomes for relevant stakeholders
Learning objectives - Competences
•Be able to understand and evaluate theoretical issues of problems to select and apply appropriate tools to perform data analysis including appropriate data handling
•Be able to discuss and evaluate data science projects and application areas including ethical issues
•Be able to work as a team member in data science project
Digitalization is on a top position of companies’ agenda. We are living in the digitalization age where sensors, machines, and other entities linked to the internet produce massive amounts of data each day. But this is not the only source of data. For instance, social media or the Internet in general provide access to a variety of data, e.g., consumer preferences. Anyhow, companies are increasingly facing problems on how to use data. The new(ly) (re-) emerging fields such as (big) data analytics, artificial intelligence, machine learning etc. are tools for deriving knowledge of data that can give a competitive advantage to firms. As a result, skills to handle large amounts of various data types and to analyze these to retrieve knowledge of the past, present, and future, are paramount.
Data science is the study of and the learning from data. It focusses on how to manipulate data effectively and efficiently. This requires skill in mathematics, statistics, databases, and machine learning along with a good understanding for the underlying problem (formulation) to provide good decision support.
This course introduces students to the field of data science and equip them with practical skills of doing data handling and analysis including some of the basic principles and tools they can use to deal with different parts of data science. This encompasses knowledge on exploratory data analysis, descriptive & predictive modelling, and evaluation. The course will give an introduction into this broad field but will select topics where practical skills are acquired that can be immediately applied. This makes it neither a “breadth” nor a “depth” course, i.e., it will not try to be comprehensive across techniques or dig deep into some specific technique.
1.Introduction to Data Science
2.Life Cycle and Workflow Management of a Data Science Project
3.Data Manipulation and Visualization
4.Working with Large Datasets
5.Machine Learning – Supervised and Unsupervised
|
OPCFW_CODE
|
I feel slightly embarrassed to be asking this question but is it possible to specify a random effect for a predictor that is on the highest level of a multilevel model? I searched online but cannot find a clear answer. I have students (L1) nested in teachers (L2) and am looking at the effect of teaching practices (L2 variable) on some student-level outcome variables. Can these teaching practice effects be modelled as random or can only effects of student-level predictors be modelled as random? I tried the random effects with brms and the models all looked fine. Thanks!
Yes, instead of assuming a random effect of mean zero and estimated standard deviation, you can compute the mean as the result of an equation with covariates!
I am not sure you can implement this model in brms, though!
Thanks for responding! I basically just modelled a random slope for the predictor on the highest level in the same way I did as for variables on the lowest level…
So something like this:
Outcome ~ 0 + Intercept + student_predictor + classroom_predictor + (0 + Intercept + student_predictor + classroom_predictor |p| classroom)
And the model converged fine. Does that seem OK then?
I just got confused because I cannot seem to find any papers in which they modelled highest-level predictors as random effects. So I thought that this is maybe not a thing.
I think modeling the teaching practice effects as random should be no problem. In general, I don’t see a problem with random effects on different nested levels. There are some options, though:
- You have to pay attention to the coding of the lower levels (students in your example). In lme4, there’s the nesting syntax
(1 | g1/g2)which is equivalent to
(1 | g1) + (1 | g1:g2)(see Table 2 here) but I don’t know if that nesting syntax is also supported by brms. With the nesting syntax, it’s possible to have the same student codes within each teacher (e.g.
stud2, …) even though
stud1of teacher 1 is not the same person as
stud1of teacher 2. To be safe, I think it’s better to have students coded like
teacher2_stud1, …). Then you also don’t need the nesting syntax.
- brms offers argument
gr()which leads to separate variance-covariance matrices for each level of the
byvariable. That might be interesting for you, too.
Note that I am assuming that each student has only one teacher since you said
I couldn’t really connect your formula
to your initial post so I can’t say anything about that.
Thank you. Sorry, I should have said teacher rather than classroom in the simplified example code. I actually have a cross-classified multiple membership structure, with students (L1) nested in classrooms (L2) and teachers (L2), whereby some students belonged to multiple classrooms and/or teachers over the course of the study. So much more complicated than this. But the important thing is that my main question has been a answered and I now know that it was not a mistake to model the effects on the highest level as random. Thanks so much!
Ok, that sounds rather like a multi-membership problem. See function
brms::mm() (perhaps also
brms::mmc()) for that. This vignette also partly deals with multi-membership models.
|
OPCFW_CODE
|
We need Native English content writer from Europe or Am...English content writer from Europe or America having fair knowledge of SEO, keywords to write content for websites, blogs & other writing works. More details will be shared with selected freelancers. Sorry but please don't apply Indian writers as we are looking for Native English persons only.
... c) Test reports which have passed reviews from team member(s), and d) Writings of deploy and rollback procedure of a modification. As one of the open minded, polite, and multi-stack engineers, you will 1) investigate design and implementation of the existing system, 2) explore and evaluate design choices, 3) organize a modification into a design level
Hi, I am a photographer looking for a unique logo/signature to add on my photos. Please feel free to submit your entries. One entry will be selected. Name : Sarin Raghuraman (looking for a simple logo) Thanks and regards, Sarin
We require a website designer, candidate should have creativity and skills for good designs. Should be able to make clear, clean and responsive design...good designs. Should be able to make clear, clean and responsive designs. Knowladge of any cms, and framwork will be good. Skills Required Website Designing HTML, Boostrap, CSS, JS Graphics Designing.
RuteStock is a Hybrid representation group representing Home Builders and Multi-Dwelling Unit Builders with Integrators and manufacturers across the United States. We need a four page marketing directive to distribute digitally and possibly in print. The four pages will include pictures if needed. If just word content we will only need no more than
Random back-end task and bug fixing : [log ind for at se URL] Read our document "Sitemap Technical Instruction" (only travpart) so you'd understand further about our website. Please read all of the attached files. You will be given source to work on your local host
Task includes creating HTML template in SEMANTIC UI html/css framework ([log ind for at se URL]) from existing Wordpress template [log ind for at se URL] (shop1 demo). This website is based on Semantic UI (but created for Wordpress) so you just need to inspect elements and replicate this in clean HTML code. Task includes creating all pages: 1)
...price • If no filters are set this page will show all available kits in alphabetical order. • 3 products per line. • Product name only – price and details when the product is selected. Tools Page: [log ind for at se URL] [log ind for at se URL] • Links to Facebook, Instagram, Pinterest and
Looking for a full stack developer to fix a bug issue on our site, [log ind for at se URL] It is hosted on AWS and you will need to look at the code and see why a help form is not being forwarded to our support email. The application uses pusher but we can switch to another platform if necessary. After that, we would like to white label the
I am lookng for an application in Php mysql or Python to createa e-Catalog system. 3 type of user. 1. Company user - (buyers - who can post their requirments) 2. Vendors (Sellers- who can upload thier products, and also to respond to buying requirments) 3. Moderators (Our company facilitate the buyer and seller) a mini CRM for match making deals.
detect the reason of slow web page [log ind for at se URL] programming php we probably knows the reason ,but need the web...web page [log ind for at se URL] programming php we probably knows the reason ,but need the web programming professional to assist this is a very tiny gig, but also the test for being selected for the next big program PHP pages , welcome to bid. thanks
I am a contributor to Dreamstime, one of the major stock photography websites. I don't own the website, I am not its emplo...% year-to-year increase in monthly sales volume. 100 % might sound too high but believe me that it could be easily reached. I will share all the relevant information with selected applicants. Please feel free to ask questions.
I need a very simple multivendor online marketplace website similar to OfferUp or eBay (without the auction feature). PROVIDED: + Incomplete WordPress site + HostGator access + WordPress access + Domain + Dokan plugin (free) + Elementor PRO plugin (paid) + Interactive Map of Region (paid) REQUIREMENTS: + Fully 100% Responsive Website + Vendor Setup + Unlimited Product Categories & Sub Catego...
I need a new website. I need you to design and build it. Quote should include full pricing of any premium plugins. This site should be a similar base to [log ind for at se URL] or [log ind for at se URL] but on a much smaller scale.
A Front-End developer needed For a Shopify-Based ...Shopify-Based online store to modify cart page: # Fix our custom-coded free shipping bar to get the correct total cart price. No 3rd-party apps allowed. Pure code only [JS + HTML + CSS + LIQUID] All code must comply with our store color-code and font. *Long-term job if we like your job and rates.
...design, implement enhancements to our website. 80% of your time is expected to be around building new capabilities on the site. 20% or less of your time would be to perform bug fixes and maintenance activities on the site. The role will be entirely remote, giving you the flexibility of working from home. Candidate’s Experience -------------------------------------
...patterns preferred here. Verify using at least 1 unit test. 4. Once the “get_async” method works, instantiate the class and enable the get method to be called using a multi-threaded pattern. Call the method “get_parallel” 1. Input will be an array of URLs, 2. Use the same logic from b and c above, 3. This should support up to 10
...etc.) The final result should be live in a service like CodePen, JS Bin, etc. Candidates need to attach the URL with their applications. (Where to Apply?) Feel free to use a CSS preprocessor and/or JS Framework/Library. It’s also fine to not use any of these technologies. When submitting the form: If the email field does not validate, an error message
Project managers should have a background in business skills, ...technologies Excellent client-facing and internal communication skills Excellent written and verbal communication skills Solid organizational skills including attention to detail and multi-tasking skills Strong working knowledge of Microsoft Office PMP / PRINCE II certification is a plus
i am looking at ch...health concious. more over, there is a level thats ascended beyond vegan diets, and that is called an alkiline diet, this means that even the fruits and vegetables we use are selected because they are natural fruits and not hybrid or GMO technology, all fruits we use are alkalinee diet based fruits, with a Ph level of 7 and above.
...PRESENTING. THE LOGO FOR BRAND CONSUME BY YOUNG PEOPLE AND EDUCATIONNAL FOURNITURE WE LOOKING : - LOGO 2D , 3D AND ANIMATION - VIDEO FOR BRAND AND ADVERT - WEBSITE DYNAMIC MULTI LANGUAGE WITH E-COMMERCE - GRAPHICAL CHARTE - CONCEPTION ADVERT GADGET - AND MORE ... - WEBSITE DESIGN - E-COMMERCE STRUCTURE - PRODUCT PRESENTING IN WEBSITE - VIDEO EDITING
CHECK THE PHOTO .....
...Edit and Remove User Access Add, Edit and Remove User Folder Access on Network File Folders Can install new software and allow all or selected users to use it Can manage security and restrict websites on selected users Set user profiles e.g. admin, operator and manager Set printer settings for printing and scanning (assign scanning folder) •
We are in Digital marketing agency in need of an experienced graphic designer living...fashion. • Young, ambitious, and self driven! • We dont require expiriance or degree, spirit is what we are looking for. • Proficient in Illustrator and Photoshop. • Ability to multi-task and meet up. • Familiar with the latest cloud based design softwares
I need a FREELANCER who is proficient in English. I require a developer/designer who has knowledge in css and wordpress and has an eye for design. I need the colour of a header changed and 1 page made dynamic. I also need some minor changes on pages so they too are dynamic. I also require some text to be realigned. I also need text added to the site
Complete the react native project - Compile IOS project - Integrate codepush - Integrate IAP for both IOS and Android - Onesignal for push notification - finishing up for some bug and function - Fixing the real time chat function (Nodejs backend) - Complete some remaining screen (1 - 3 screens)
Hi My p...perfectly with ROR expert. I have more tasks coming. but now I need current this task done first. please check on attach files. the present coder will help you for everything. selected coder will connect on him! i want you to change the entire frontend but yes i want the checkout page design on a1 tools site on running heroku..yes correct
...excel. This export will form the dataset of the new workbook. A user will need to be able to input certain variables on an input screen, after which the exported data will be selected/loaded. The VBA code will then run and export a variety of sheets in a new workbook for the respective client. Developing the workbook with one insurance platform will be
I want to add a table in WordPress with more than 5000 rows that is available in excel The table should be in a very good design and responsive for mobile access. The rows should contain product details , price and like/dislike ratings and optional image before the product title, buy now and couple more. As part of this project need to be setup for 50 rows initially. Project cost includes the pl...
...you will likely need an in-depth understanding of TCP/UDP based communication, and a good understanding of Java programming techniques, including socket communication, multi-threading, and possibly bitwise manipulation techniques for extracting bit fields from binary data (including knowing whether the tracking device sends data in big-endian
...screen with our company dtails and auto trcking. 14. Google Analytics and Google tag management should managed and separate screen for this too. Front End : Angular, HTML, CSS BACK END : Laravel Database : Mongo DB & PostgreSQL Note : Kindly check over here, further clarification just get in connect with skype : alan258037, but payment all other
New product type based off the existing 'grouped product' type. I need to be able to select configurable products (currently only simples can be selected). The frontend will display like a category, with a grid of the associated grouped products, which link into each individual product page.
...more on design, use of backend API & Google Map API. Backend API is well documented & project is straight forward but very big actually complex also. Backend API starts with multi user authentication system(Token based) and then actual projects starts. please create a google form before accepting this project. Remember i am a developer and API is written
|
OPCFW_CODE
|
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace MinecraftMapper {
/// <summary>
/// Uses multiple PositionBase's to calculate a mapping into the minecraft world. The multiple
/// positions are averaged and weighted based upon how close we are to them.
/// </summary>
class PositionConverter {
private readonly PositionBase[] _positions;
private readonly int _worldWidth, _worldHeight;
public PositionConverter(int worldWidth, int worldHeight, params PositionBase[] positions) {
_worldWidth = worldWidth;
_worldHeight = worldHeight;
_positions = positions;
}
public bool IsValidPoint(BlockPosition point) {
//return point.X >= 0 && point.Z >= 0 && point.X < 10688 && point.Z < 22500;
return point.X >= 0 && point.Z >= 0 && point.X < _worldWidth && point.Z < _worldHeight;
}
public BlockPosition ToBlock(double latitude, double longitude) {
var targetZ = PositionBase.Lat2YMeters(latitude);
var targetX = PositionBase.Long2XMeters(longitude);
double totalInverseDist = 0;
double zTotal = 0, xTotal = 0;
for (int i = 0; i < this._positions.Length; i++) {
var zDelta = ((this._positions[i].baseZ - targetZ) * 1.10);
var xDelta = ((targetX - this._positions[i].baseX) * 1.10);
var dist = zDelta * zDelta + xDelta * xDelta;
var inverseDist = dist == 0 ? 1 : (1 / dist);
zTotal += (this._positions[i].mcZ + zDelta) * inverseDist;
xTotal += (this._positions[i].mcX + xDelta) * inverseDist;
totalInverseDist += inverseDist;
}
return new BlockPosition((int)(xTotal / totalInverseDist), (int)(zTotal / totalInverseDist));
}
}
}
|
STACK_EDU
|
Book Review: The Mythical Man-Month
It (The Mythical Man-Month) is the only book you need to read about software management.
I truly believe this assertion after reading this book (of course, it might because of my small sample size of software management books).
This book reflects my current thinking that Agility is all about gathering fast feedback1 so well. And it amazed me from time to time when I read this book that this kind of thinking was already there 20, or even 40 years ago.
What amazed me more is that after 40 years, we are struggling with some basic problems that could've been solved using the ideas from this book. I guess this fact is also a strong proof for No Silver Bullets, at least it shows that there is none yet.
I'll write down some amazed moment down below:
Mockist style TDD
In Chapter 13. The Whole and the Parts, it mentioned top-down design and dummy component. They all remind me of the mockist-style TDD I'm current learning and using.
- Top-down design is basically what mockist-style TDD is doing.
- Dummy component is what we call mocks/stubs/fakes today.
It always surprises me how early this style of development were used in the history of software development. Even without tests as automated as today, people in the early ages were still trying their best to use the most tight feedback loop they could get.
In Chapter 19. The Mythical Man-Month: after 20 years, it also introduces Building an end-to-end skeleton system, which is basically what Growing Object-Oriented Software Guided by Tests2 is doing.
a written program has another face, that which tells its story to the human user.
I truly believe that the only way to make our code maintainable is to write self-documenting programs as described in this book.
- Class/Function/Variable Naming
- Use the parts of the program that have to be there anyway (symbol names), for programming language reasons, to carry as much of the documentation as possible
- Style matters4
- Use space and format as much as possible to improve readability and show subordination and nesting
- Insert the necessary prose documentation as paragraphs of comment
Open Source v.s. Buy v.s. Build
If there was anything close to a Silver Bullet, I guess that must be Open Source Software Movement5.
With the rising of open source software hosting platform like GitHub, one can easily build its own software upon these countless open source libraries/frameworks without any cost.
And one can continue get free security patches and performant improvements if he/she continues updating these dependencies.
Indeed, Open Source made a great leap forward from buying software, but we are still facing the same issue as "buy v.s. build": applicability.
It's still hard to find an off-the-shelf package to perform my own task and suit my own needs perfectly. (I'll talk more about this in a future post.) Because of this, I still cannot believe Open Source is the Silver Bullet.
Continuous Integration / Continuous Deployment (CI/CD)
In Chapter 19. The Mythical Man-Month: after 20 years, this book shows how Microsoft did CI/CD in the early days. They called it "Build Every Night" Approach, which is basically what Continuous Integration / Continuous Deployment means.
And as you might have known, we've learned a lot about this CI/CD practices in the last 20 years.
As it was said in Chapter 19. The Mythical Man-Month: after 20 years:
Software engineering is merely immature as chemical engineering was in 1945
I think we as the whole software development community are still learning and gaining more experience in this area in 2018. And we've also improved a lot in the last 20 years.
The beauty of continuous learning and discovering in this immature industry is what intrigues me the most from this industry.
|
OPCFW_CODE
|