In Windows Phone 8, it is possible for an app to create contacts in its own contact store. These will be deleted if the app is removed. I don't know if there are any limitations on these contacts.
Custom contact store for Windows Phone
It is also possible to read most, if not all, of the contact information on your phone.
Read-only access to Contacts and ...
The answer is the same as for Windows Phone 7. You cannot import directly from this file to the phone, but you can import the contacts to your linked email account and then they will be synced down onto the phone.
What kind of email account do you use on the phone? Gmail can import a VCF file directly, but Hotmail will make you convert it to a CSV file ...
You can import them into your Live Calendar and then that will sync with your phone. To do that just do the following:
Go to the events page on Facebook.
Scroll down to the bottom and look for where it says "Past Events · Birthdays · Export" and click on "Birthdays".
Scroll down to the bottom and look for where it says "Past Events · Export Birthdays" and ...
Tap the details to bring up the contact -- if you swipe across to the history, you'll see things like:
1 missed call
1 missed call
Which then tells you which contact number they used to get hold of you.
The easiest and best way to do this is to sign up for a Microsoft Account (you probably already have one, since your phone requires it for app purchases, etc.).
Sign into http://www.outlook.com/ using the Microsoft Account you used to sync your phone. Click the arrow next to Outlook and select "People" --
Then select "Import from file"
Then select your ...
Edit: Microsoft has re-branded the Calendar UI. I have modified instructions below (without updating the screenshots).
The good news is yes, you can disable notification for birthday reminders that are coming from Facebook.
You cannot do this from within the calendar app but you can do it by using your Windows Live Calendar. The steps:
Log on to calendar....
Since you had an Android phone before, I guess you might have synced all your contacts to your Gmail account. Add your Gmail account to the Windows phone, and set the sync options to include contacts. All your contacts are then synced automatically (even without the VCF files).
You can switch phone numbers in an SMS thread.
It is the same procedure as switching to an online IM channel - use the little button with two arrows on it, located at the bottom right corner of the screen in the application bar. When you tap it, the SWITCH TO page is shown; it contains a link "more phone numbers" which (after tapping) reveals the other contact'...
I believe you mean that photos you manually add to contacts stored in your Live account are not shown in your contacts online?
What you need to do first is make sure that the photos are associated with the contact entry on the Live account that you are looking at online. If the photo is associated with a contact stored in, for instance, Google, while it ...
Press and hold the volume down and Power buttons at the
same time until you feel a vibration (about 10–15 seconds).
When you feel the vibration, release the buttons, and then
immediately press and hold the volume down button until you see a
large exclamation mark.
Once the exclamation mark appears, press the following four buttons
in this order: ...
It is possible to import your contacts using a VCF file. I just did it with this app: "contacts+message backup" by Microsoft. Here is the store link.
I created a backup using the app (settings). After that I replaced the backup with the VCF file from my old Android device and restored it using the app. Worked just fine.
Any email account will work with the WP8 email client, regardless of whether it uses IMAP, Exchange, or POP3. Gmail accounts can use both IMAP and POP3, for free as well as business accounts.
So no worries as I am in the same situation and all email accounts work on my HTC 8S.
Not sure if you can fix this on your phone, but try the following:
Go to https://people.live.com
From the top menu, select Manage → Clean up contacts
Mark all the duplicate contacts you want to merge and click "Clean up"
Normally when typing a contact's name, you'll get an auto complete list of all the matches, with all the numbers for those matches - you can then choose to send to (say) their work number or their mobile number, etc.
All of your contacts and calendar reminders seem to be available online, right? Then I would agree with the MS Tech support to go ahead and do a hard reset. This usually does solve a lot of issues related to the W10 update.
It is a bit of a pain to transfer data from Android to Windows. I wanted to transfer from a Huawei P7 to a Lumia 640. I tried importing the contacts as a CSV file, but it could not import all the phone numbers. I tried sharing all my contacts to the Windows mobile using Bluetooth but it didn't work either.
However in the Windows phone there is an app called ...
Try the following solution:
Save the .vcf file to your computer
Open Gmail (new version, not basic)
Click the Gmail button (above the Compose button) and select Contacts
Click on More
Click on Import
Select your .vcf file
Now your contacts should be imported to Gmail and you can sync them to your phone.
The contacts are stored locally on your device, however there isn't a "phone only" address book.
Contacts are kept synchronised with your Live account (or any other account that supports SyncML, such as Google, or if you have one, Exchange Server - or anything that can pretend to be one), so you can choose which cloud service you prefer to use to back up ...
You can use the Nokia contacts transfer app (iPhone 4 and higher) or your SIM card.
If you only want to transfer your contacts, use the Transfer my Data
app on your new phone. If you don’t have this app, you can download it
from Store. If you have previously had a micro-SIM or micro-RUIM card,
you can also copy your contacts from your card to your ...
You might need to convert the file to a .csv file first if you plan to upload it online; this varies from service to service. At first look, it's required for Windows Live. Try http://labs.brotherli.ch/vcfconvert/ or search for an offline variant.
In case you have Windows Live:
Go to http://live.com/ and log into your Windows Live.
On the Inbox page, click ...
Yes, when you add a contact you are offered the option of what service to create the contact in, Gmail, Windows Live, etc.
To create a new contact
On Start, tap People.
Flick to All, and then tap New.
Tap New contact.
Tap the account you want to create the contact in.
Reference: Windows Phone Help - How to add a contact
If you edit a contact that is ...
Your best bet is to export from your old Microsoft account and then import that data into the new account. To do this, start at the bottom of my answer here, then go to the top and work your way down.
If they are associated only with Facebook, simply go to
Settings -> email and accounts -> Add -> Facebook
If you have two Facebook accounts, I'...
package com.sagunpandey.spookyspidersmash.graphics;

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Point;

import com.sagunpandey.spookyspidersmash.engine.GameEngine;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

/**
 * Created by sagun on 8/10/2017.
 */
public class GameGraphics {
    private final GameEngine engine;
    private final GameBoard gameBoard;
    private final GameMenu gameMenu;
    private final List<Spider> spiders;
    // Dead spiders mapped to the time (in seconds) at which they became inactive.
    private final Map<Spider, Float> spidersToClean;
    private boolean initialized = false;

    public GameGraphics(GameEngine engine) {
        this.engine = engine;
        gameBoard = new GameBoard(engine);
        gameMenu = new GameMenu(engine);
        spiders = new ArrayList<>();
        spidersToClean = new HashMap<>();
    }

    public void initialize(Context context, Canvas canvas) {
        if (!initialized) {
            gameBoard.initialize(context, canvas);
            gameMenu.initialize(context, canvas);
            initialized = true;
        }
    }

    public void refresh(Canvas canvas, Context context) {
        gameBoard.load(canvas, context);
        for (Spider spider : spiders) {
            spider.load(canvas, context);
        }
        gameMenu.load(canvas, context);
    }

    public void spawnSpider() {
        spiders.add(new Spider());
    }

    public void checkIfSpiderReachedFood() {
        // Collect first, then remove, to avoid mutating the list while iterating.
        List<Spider> spidersToRemove = new ArrayList<>();
        for (Spider spider : spiders) {
            int foodMargin = gameBoard.getFoodMargin();
            Point point = spider.getCurrentPosition();
            if (point.y >= foodMargin) {
                spider.reached();
                engine.lostALife();
                spidersToRemove.add(spider);
            }
        }
        for (Spider spider : spidersToRemove) {
            removeSpider(spider);
        }
    }

    public void checkForSpiderCleanup() {
        // Use an explicit iterator so entries can be removed from the map while
        // iterating without a ConcurrentModificationException; the original code
        // also never removed cleaned spiders from the map, so it grew forever.
        Iterator<Map.Entry<Spider, Float>> it = spidersToClean.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<Spider, Float> entry = it.next();
            float currentTime = System.nanoTime() / 1000000000f;
            float elapsedTime = currentTime - entry.getValue();
            int spiderCleanUpTime = 5; // seconds a dead spider stays on screen
            if (elapsedTime >= spiderCleanUpTime) {
                removeSpider(entry.getKey());
                it.remove();
            }
        }
    }

    public void clearAllSpiders() {
        spidersToClean.clear();
        spiders.clear();
    }

    private void removeSpider(Spider spider) {
        spiders.remove(spider);
    }

    public void hitSpider(int x, int y) {
        boolean hit = false;
        for (Spider spider : spiders) {
            if (spider.wasHit(x, y) && !spider.isDead()) {
                spider.die();
                engine.spiderKilled();
                spidersToClean.put(spider, System.nanoTime() / 1000000000f);
                hit = true;
            }
        }
        if (!hit) {
            // Register a single miss per tap, not one per surviving spider.
            engine.spiderMissed();
        }
    }
}
I am looking for an expert who knows analysis of algorithms and can write pseudocode using LaTeX.
Topic: complexity analysis from pseudo-code; the sorting problem: common sorting algorithms, complexity analysis, and comparison; the divide-and-conquer methodology: application in the sorting problem..etc.
18 freelancers are bidding an average of $157 for this job
Hi there,I'm biddin on your project "Analysis of algorithm -- 5" I have read your project description and i'm an expert in Python and machine learning therefore i can do this project for you perfectly.I still have a f Plus
* * * * * * * * * * * * * * * * * * * * * * * * Hello There, I’m a Professional and Expert Software Developer. I am interested in doing this job for you. I do have the necessary skill set & resources to complete this Plus
⭐⭐⭐⭐⭐ Hi there, I am java developer with 7+ years of experience in web and desktop app and game development. I have strong expertise in OOP, data structures, design patterns, statistics, multi threading, networking, g Plus
Hello, Dear Client! I'm a Full Stack and Algorithm expert with over several years of experience and very familiar with LaTex. I have good experience with Complexity analysis from pseudo-code, sorting algorithm. Please Plus
Hi , I am a computer science graduate and expert in related topics, please share more details in chat, thanks.
Hi there, i have gone through your project description i am an experts in this i can help you with your work. I'm a senior engineer with rich experience in Java, Python, Algorithm, Computer Science, Algorithm Analysis Plus
Hi there, I am a talented python dev, and I suppose I can handle this task successfully. Please let me know more details, and Please give me your chance. I look forward to hearing from you. Plus
Hi I have seen your job description and interested in algorithm work. Please provide more details about the work Best
Hello. I read your project description very carefully. I have a deep understanding and experience in the areas of python in ML and LATEX that you mentioned. We are a company of mechatronics, electrical, computer Plus
===== Expert of Algorithm(Sorting, DFS, BFS, Greedy, Tajan) and LaTex HERE===== Dear Client! I am Andrei, experienced programmer from Russia. I have read your requirement and I noticed that I am appropriate to this pro Plus
Hi, there I’m an algorithm expert with over 10 years of experience dealing with many projects. I can help your projects. I’ve previously worked on project similar with this project. When do you need this finished? Plus
Hi. I have ACM/ICPC background so I have mastered algorithm and data structure. Especially, I have strong knowledge of sorting algorithms such as merge-sort, divide and conquer algorithm. I have 13 years of experience Plus
Hi Nyambayar. Thanks for your job posting. I really want to develop algorithm but there are no projects related to the algorithm. Sort and divide conquer are the most basic but also important algorithm skills. I have Plus
Hello there, Hope you are doing well! I have seen your requirements Algorithm Analysis. I have 5+ years of strong experience as Python Developer. I have expertise with Django, Flask, Python, AI, ML, NLP and Data Sci Plus
Respected Client, I am Dhaval (Business Developer). **I have understood your All the Requirements well.** I belong to the CLEVERALGORITHM Company. Our branches are in the USA and India as well. Our company has a team o Plus
Hello, I can do the algorithm analysis for you. Please initiate a chat at your convenience. Regards, Amit
Image classification with tensorflow
While working through the Google YouTube series on machine learning, I watched episode six, Train an Image Classifier with TensorFlow for Poets. Since I create notebooks for every episode, I did so here, too. The following text is taken from that notebook and is a short tutorial on how to implement an image classifier with TensorFlow in a very short amount of time. You can transfer this knowledge to any other image classification task without much effort. You can find the original Python notebook here and my other notebooks on my GitHub.
This notebook collects the experiments I did after watching the Google YouTube Tutorial Train an Image Classifier with TensorFlow for Poets. You can find the video here. The corresponding tutorial can be found here.
The goal of this notebook is to create a classifier for images of flowers. There are 5 types of flowers in this case: roses, tulips, sunflowers, daisies and dandelions. After the model is created you can download a random flower image from the internet and classify it. It worked very well in my case.
Step 1: Preparing the environment
The first thing I had to do was set up the environment on my Mac. I use Anaconda to manage my Python environments, so I created a new environment (Python 2.7) called tensorflow. To do this I used the Anaconda Navigator, a graphical user interface for Anaconda, and installed the following packages in the new environment: numpy, pandas, matplotlib. TensorFlow, however, could not be installed with the Navigator, so I opened a Terminal in this environment and installed it with the following command:
conda install -c conda-forge tensorflow
The next step was cloning the tensorflow GitHub repository. This repository contains the script that will create our model later. You can clone it by navigating into a folder and calling
git clone https://github.com/tensorflow/tensorflow.git
Theoretically you are finished with your environment here. However, I ran into a strange problem where I had to fix a line in the script we are going to call later. I don't know if that had something to do with my environment; if someone knows the answer, don't hesitate to write a comment on my blog. I got the error that Python could not import tensorflow.python.framework.graph_util. It seems that the TensorFlow team moved graph_util.py recently, so depending on your TensorFlow version you should edit the file retrain.py in the folder tensorflow/examples/image_retraining in the git repository. I changed the line
from tensorflow.python.framework import graph_util
to
from tensorflow.python.client import graph_util
Remember: only do this if you are getting a module-not-found error when you execute retrain.py.
Step 2: Downloading the images
I have a data folder in my user folder which contains all the data I experiment with. I navigated there with the terminal and called
curl -O http://download.tensorflow.org/example_images/flower_photos.tgz
After a while the .tgz file with the images was downloaded. I extracted it into the folder flower_photos. After doing that you can see 5 folders containing the images. These 5 folders represent the 5 classes the classifier will use later. You don't have to define these classes. The folder names will automatically be the class names.
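The download-and-extract step from above can be sketched as a pair of commands (the URL is the one given in the text; the directory name comes from the archive itself):

```shell
# Download the flower photos and unpack them; the archive expands into
# a flower_photos/ directory with one sub-folder per class.
curl -O http://download.tensorflow.org/example_images/flower_photos.tgz
tar -xzf flower_photos.tgz
ls flower_photos
```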
Step 3: Training the model
Now, after setting up the environment and downloading the data, we can finally train our model. We won't train a model from scratch; the cool thing is that we will take a model (Inception) that was created by Google and retrain it. Please refer to the video and the original (much longer) tutorial for more information on that.
Please call the following in your terminal and be very careful with the paths. Attention: I added new lines for every parameter for readability. Please note that you have to enter this as one long command.
--how_many_training_steps 500 --model_dir=../data/inception
The following parameters point to files/folders that will be created: bottleneck_dir, output_graph and output_labels. The bottleneck files aren't important for the model itself but they will be reused if you created another model. The graph and the labels contain the model. These two files are important for classifying an image. Please note the paths to these files.
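Putting the parameters from this step together, a full invocation might look like the sketch below. The paths are assumptions from my own setup (data folder next to the repository); adjust every one of them to your machine:

```shell
# Hypothetical full retrain command -- all paths are placeholders
# for my setup, not part of the original fragment above.
python tensorflow/examples/image_retraining/retrain.py \
  --how_many_training_steps 500 \
  --model_dir=../data/inception \
  --bottleneck_dir=../data/bottlenecks \
  --output_graph=../data/retrained_graph.pb \
  --output_labels=../data/retrained_labels.txt \
  --image_dir=../data/flower_photos
```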
Depending on the parameter how_many_training_steps the runtime of the script can be very high. I let it run over night on my MacBook Air. If you have a more powerful machine the script won't run longer than 30 minutes.
When the script has finished you can use your model to classify images.
Step 4: Classify images
To classify images you can use the following code. Please adjust the variables image_path, labels_path and graph_path to make it work. You can also switch the comments in the first three lines to make the code work as a script in the terminal.
I wanted to classify the following picture:
You can find the code for classifying the image in the notebook. The result of the classification was
roses (score = 0.99737)
tulips (score = 0.00177)
sunflowers (score = 0.00084)
daisy (score = 0.00002)
dandelion (score = 0.00001)
As you can see, in my example the picture was classified as roses with a probability of 99.737%.
Keystore type: which one to use?
By looking at the file java.security of my JRE, I see that the keystore type to use by default is set to JKS. Here, there is a list of the keystore types that can be used.
Is there a recommended keystore type? What are the pros/cons of the different keystore types?
Since Java 9, PKCS12 is the default keystore type. This change is due to the JEP 229 goal: "Improve security. PKCS12 offers stronger cryptographic algorithms than JKS."
For more info see "JEP 229: Create PKCS12 Keystores by Default", http://openjdk.java.net/jeps/229; last accessed Feb 2, 2018.
There are a few more types than what's listed in the standard name list you've linked to. You can find more in the cryptographic providers documentation. The most common are certainly JKS (the default) and PKCS12 (for PKCS#12 files, often with extension .p12 or sometimes .pfx).
JKS is the most common if you stay within the Java world. PKCS#12 isn't Java-specific, it's particularly convenient to use certificates (with private keys) backed up from a browser or coming from OpenSSL-based tools (keytool wasn't able to convert a keystore and import its private keys before Java 6, so you had to use other tools).
If you already have a PKCS#12 file, it's often easier to use the PKCS12 type directly. It's possible to convert formats, but it's rarely necessary if you can choose the keystore type directly.
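For reference, such a conversion can be done with the JDK's keytool; the file names here are placeholders, and keytool will prompt for the store passwords:

```shell
# Convert a legacy JKS keystore into a PKCS12 one (file names are
# placeholders for whatever stores you actually have).
keytool -importkeystore \
  -srckeystore keystore.jks -srcstoretype JKS \
  -destkeystore keystore.p12 -deststoretype PKCS12
```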
In Java 7, PKCS12 was mainly useful as a keystore but less for a truststore (see the difference between a keystore and a truststore), because you couldn't store certificate entries without a private key. In contrast, JKS doesn't require each entry to be a private key entry, so you can have entries that contain only certificates, which is useful for trust stores, where you store the list of certificates you trust (but you don't have the private key for them).
This has changed in Java 8, so you can now have certificate-only entries in PKCS12 stores too. (More details about these changes and further plans can be found in JEP 229: Create PKCS12 Keystores by Default.)
There are a few other keystore types, perhaps less frequently used (depending on the context), those include:
PKCS11, for PKCS#11 libraries, typically for accessing hardware cryptographic tokens, but the Sun provider implementation also supports NSS stores (from Mozilla) through this.
BKS, using the BouncyCastle provider (commonly used for Android).
Windows-MY/Windows-ROOT, if you want to access the Windows certificate store directly.
KeychainStore, if you want to use the OSX keychain directly.
@husayt, PEM certificates are not directly supported as keystore types (I suppose one could write a KeyStore implementation to that effect). You can however, load them on the fly into a keystore instance (typically JKS, the default type) in memory using a CertificateFactory (as shown in this answer).
I think JKS has changed to JCEKS.
@amphibient both types exist indeed, but JKS is still the default AFAIK.
Rather critically, a JKS key store cannot store secret keys. For this use case, JCEKS is appropriate. It may be worth mentioning this in your answer.
@Bruno so we can't store a plain keyfile that has 32 char random string which is used to encrypt the data to PKCS12 ?
PKCS#12 will be the default from Java 9 onwards. PKCS#12 can store secret keys and should also be useable as trust store.
@MaartenBodewes Indeed, PKCS#12 can already be used as a trust store, but only if it also has a private key with the cert at the moment. It could probably be used more generally without private keys, but last time I checked (probably Java 7), the Java implementation didn't allow for such PKCS#12 stores. I can't remember what the PKCS#12 spec says exactly about this, but I think it was a Java implementation limitation, so it's quite possible that further versions of Java let you use PKCS#12 stores more broadly.
Ok. In Java 8 I can create a PKCS#12 keystore with a single certificate without any issue. Note that for P12 certificate entries are implicitly trusted. If you need untrusted certs you may have to revert to a scheme with multiple key stores.
@MaartenBodewes You're right. I've just tried with keytool from the Oracle JDK 1.7.0_71, and importing a cert on its own (without private key) didn't work, but it works with the one from Oracle JDK 1.8.0_25 (and newer).
OK at least for Java 8 PKCS#12 key stores still cannot store secret key entries. You'll get a null pointer exception when storing such a key store (ugh), probably because it cannot find associated certificates. Somebody seemingly skipped the teachings of Joshua about fail fast code.
@Bruno is there a resource where I can learn more about keystores and SSLContext? I can't find a single resource that gives me a good introduction to these topics.
@user1363516 I'd look at the JSSE Reference Guide and perhaps the Javadoc for KeyStore. Not necessarily the easiest introduction, but there are good explanations and a few diagrams.
@Bruno in Windows environments you can also use Windows-MY-LOCALMACHINE / Windows-ROOT-LOCALMACHINE, but you will require administrator permissions.
Here is a post which introduces different types of keystore in Java and the differences among different types of keystore. http://www.pixelstech.net/article/1408345768-Different-types-of-keystore-in-Java----Overview
Below are the descriptions of different keystores from the post:
JKS, Java Key Store. You can find this implementation at
sun.security.provider.JavaKeyStore. This keystore is Java-specific; it
usually has an extension of jks. This type of keystore can contain
private keys and certificates, but it cannot be used to store secret
keys. Since it's a Java-specific keystore, it cannot be used in
other programming languages.
JCEKS, JCE key store. You can find this file at
com.sun.crypto.provider.JceKeyStore. This keystore has an extension of
jceks. The entries which can be put in the JCEKS keystore are private
keys, secret keys and certificates.
PKCS12, this is a standard keystore type which can be used in Java and
other languages. You can find this keystore implementation at
sun.security.pkcs12.PKCS12KeyStore. It usually has an extension of p12
or pfx. You can store private keys, secret keys and certificates on
this type.
PKCS11, this is a hardware keystore type. It serves as an interface for
the Java library to connect with hardware keystore devices such as
Luna or nCipher. You can find this implementation at
sun.security.pkcs11.P11KeyStore. To load the keystore, you need to
create a specific provider with a specific configuration. This
keystore can store private keys, secret keys and certificates. When
loading the keystore, the entries will be retrieved from the keystore
and then converted into software entries.
@peci1 I have planned to write some tutorials on how to use these keystores. So far I have written one post for JKS, please find it at http://www.pixelstech.net/article/1409966488-Different-types-of-keystore-in-Java----JKS
@PixelsTech I've found this one and was wondering where's the rest of them :) So I'll stay tuned ;) Thanks
@peci1 I have covered JCEKS and PKCS12 today. For PKCS11, it involves hardware and extra configuration, need more time to compose it. http://www.pixelstech.net/article/1420427307-Different-types-of-keystore-in-Java----PKCS12 and http://www.pixelstech.net/article/1420439432-Different-types-of-keystore-in-Java----JCEKS
If you are using Java 8 or newer you should definitely choose PKCS12, the default since Java 9 (JEP 229).
The advantages compared to JKS and JCEKS are:
Secret keys, private keys and certificates can be stored
PKCS12 is a standard format, it can be read by other programs and libraries1
Improved security: JKS and JCEKS are pretty insecure. This can be seen by the number of tools for brute forcing passwords of these keystore types, especially popular among Android developers.2, 3
1 There is JDK-8202837, which has been fixed in Java 11
2 The iteration count for PBE used by all keystore types (including PKCS12) used to be rather weak (CVE-2017-10356), however this has been fixed in 9.0.1, 8u151, 7u161, and 6u171
3 For further reading:
Mind Your Keys? A Security Evaluation of Java Keystores (PDF)
Java KeyStores – the gory details
Java 11 offers the following types of KeyStores:
jceks: The proprietary keystore implementation provided by the SunJCE provider.
jks: The proprietary keystore implementation provided by the SUN provider.
dks: A domain keystore is a collection of keystores presented as a single logical keystore. It is specified by configuration data whose syntax is described in the DomainLoadStoreParameter class.
pkcs11: A keystore backed by a PKCS #11 token.
pkcs12: The transfer syntax for personal identity information as defined in PKCS #12.
Source: https://docs.oracle.com/en/java/javase/11/docs/specs/security/standard-names.html#keystore-types
Pros of Choosing AngularJS Technology for Your Business
Information Technology is constantly updating and moving ahead, and we keep changing and updating ourselves to keep pace with it. A few years back, the trend of modern single-page websites came up. Now this trend has been developed further with a better technology named angularJS.
What is angularJS?
It is an open source structural framework with dynamic web application technology. It was first developed by Misko Hevery and Adam Abrons in 2009. The current version of angularJS is 2.4. You can use HTML as the template language.
With angularJS development technology you do not have to depend as much on the kind of coding that was done previously. It is highly browser-compatible, which makes it unique in operating with any server technology.
Features of angularJS web application development
- It helps developers write client-side applications.
- It follows the Model View Controller pattern for the applications being developed.
- It is completely free and is licensed under the Apache License version 2.0.
- It is an open source software framework and is used by thousands of developers.
Advantage of angularJS development web application
- angularJS enables developers to create single-page web applications in a very neat and clean manner, and to maintain them properly.
- It provides the capability of binding data with HTML and produces rich, responsive output with a better experience.
- It provides a facility for using dependency injection and helps with separation of concerns.
- It provides a facility for writing less code with more functionality.
- Moreover, it runs on all major web browsers and smartphones, including Android- and iOS-based mobile phones.
Why use angularJS web application development programs for your website?
1. angularJS has a large community
You can hire an angularJS developer or contact a development company to sort out issues with your web application. They are available in large numbers for this open source framework technology, and there is also a core development team for it.
2. Code pattern is easy to work with
angularJS web applications are designed in such a way that HTML is the main structure for the design. The coding becomes lightweight and easy, with a proper end result.
4. Simple directives
angularJS builds on plain HTML, which can be extended by using directives that attach the necessary behavior to the markup.
Directives enable coders to make their own HTML elements. Once the DOM manipulation code is put into directives, it is separated from the MVC, which lets the MVC update the view with new data separately. However, this view solely depends on the directives.
5. Filter options
Filters arrange the data before it reaches the view and can, for example, format numbers to simple decimal places. They act as standalone functions that are separated from the application, like directives, and are basically concerned with data transformation.
6. Less Coding facility
With angularJS web application development, one does not have to write all the code to create one's own MVC. It runs on data binding, a feature where you do not have to push data into the view manually. The coder basically writes directives that are separated from the application code, and filters control the data at the view level without changing the controller.
7. Time saving
angularJS is faster to develop with, and its framework also helps in building big applications.
8. Available and ready-made services.
angularJS provides a wide variety of ready-made services that make it easy for the coder to get the job done instantly.
9. Easy data binding
angularJS provides two-way data binding: if there is any change in the user interface, it affects the application objects, and vice versa.
10. Deep linking
Now, a URL is no longer just a page; it can also act as a separate module within a page. This is known as deep linking, and angularJS is built with a feature that permits deep linking.
11. Provides full User Interface support
angularJS uses HTML to describe the application user interface. HTML is known as a declarative programming language and makes for an intuitive user interface. angularJS also selects the suitable controller for a specific user interface element, and basically focuses on the design and usage of the application.
12. Better User Experience
Beyond coding, angularJS works best in designing applications that appear visually appealing to the audience. User interfaces created in angularJS turn out terrific, with aesthetic beauty in them.
Now, angularJS is available in 2.4 versions with various new added features in it. If you are planning to remodel or create a new website for your company, try angularJS web applications.
It’s a light coded feature with time-saving quality helps to the coders to design your application faster with less coding. It develops the quality user interface that provides excellent user experience and is highly recommendable.
Try the innovative feature of angularJS to have a better user experience for your web application. It is also the recommendable technology of the modern day one-page website. Save your time and money with fewer days of coding, but with better technology.
Credit: Source link
|
OPCFW_CODE
|
Delete the lines from file between pattern match
How to delete all the lines between two pattern in file using sed.
Here pattern are //test and //endtest, file content:
blah blah blah
c
f
f
[
]
//test
all text to be deleted
line1
line2
xyz
amv
{
//endtest
l
dsf
dsfs
Expected result:
blah blah blah
c
f
f
[
]
//test
//endtest
l
dsf
dsfs
see also How to select lines between two patterns?
If awk is a viable alternative for you you should checkout this answer I wrote some time ago: https://stackoverflow.com/a/31112076/42580
Don't use range expressions as they make trivial tasks very slightly briefer but then require a complete rewrite or duplicate code when your requirements change in the slightest (e.g. to test some value inside the range or include/exclude the range start/end lines). Use a flag variable instead. Since sed doesn't have variables that means you shouldn't use sed for tasks like this, just use awk instead, see for example https://stackoverflow.com/a/55721516/1745001.
This is common feature of sed
sed '/^\/\/test$/,/^\/\/endtest/d'
As / is used to bound regex, they have to be escaped, in regex.
If you want to keep marks (as requested):
sed '/^\/\/test$/,/^\/\/endtest/{//!d}'
Explanation:
Have a look at info sed, search for sed address -> Regexp Addresses and Range Addresses.
Enclosed by { ... }, symbol // mean any bound.
The empty regular expression '//' repeats the last regular
expression match (the same holds if the empty regular expression is
passed to the 's' command).
! mean not, then d for delete line
Alternative: You could write:
sed '/^\/\/\(end\)\?test$/,//{//!d}'
or
sed -E '/^\/\/(end)?test$/,//{//!d}'
Will work same, but care, this could reverse effect if some extra pattern //endtest may exist before first open pattern (//test).
... All this was done, using GNU sed 4.4!
Under MacOS, BSD sed
Under MacOS, I've successfully dropped wanted lines with this syntax:
sed '/^\/\/test$/,/^\/\/endtest/{/^\/\/\(end\)\{0,1\}test$/!d;}'
or
sed -E '/^\/\/test$/,/^\/\/endtest/{/^\/\/(end)?test$/!d;}'
getting below error https://stackoverflow.com/users/1765658/f-hauri sed '/^\/\/test$/,/^\/\/endtest/{//!d}' test.txt sed: bad regex '': empty (sub)expression
wich version of sed (and OS) are you using?
With awk:
$ awk '/\/\/endtest/{p=0} !p; /\/\/test/{p = 1}' file
blah blah blah
c
f
f
[
]
//test
//endtest
l
dsf
dsfs
You should delete your answer as it was clearly written that the solution should be solved with sed. Or?
@YesThatIsMyName no, the question was edited after I answered
Then you will recieve 1UP from me :-D
@YesThatIsMyName people often post questions asking for an answer in some specific tool just because they don't know any better. Restricting answers to just the tool they asked for (unless they're very specific that they can ONLY use that tool) while there's a better solution using some other tool isn't useful. In this case sed and awk are both standard tools that exist in every UNIX installation and people often get confused about which of them to use for which task so it's very common for people to ask for a solution in 1 and accept a solution in the other.
|
STACK_EXCHANGE
|
This content has been marked as final. Show 7 replies
Maybe you typed it wrong or maybe I'm misunderstanding. But you should never see these folders in the !SSL! folder.
Why? It's quite simple really. This !SSL! folder should contain other folders. One for each layout you have created. So if you created a WebHelp layout, you should see !SSL!/WebHelp. And inside THAT folder is where you would find the three folders you mentioned. So it would look like this:
Now if you just forgot to add the correct pathing when typing your post and you actually are looking in what you think is the right place, I would then advise you to look more closely at the first dialog you see when editing the WebHelp properites. Scrutinize the field that says: Output folder and start page. It might just be that you aren't looking in the correct location.
Thanks for the reply....
1) Yes, I typed the path incorrectly. I know that these subfolders are supposed to be in the <B>.../!SSL!/WebHelp</B> folder.
2) Yes, I am looking in the correct location based on the settings for my WebHelp output.
3) And, I have triple-checked everything and even run a Search on the whdata (etc) folder names... they are NOWHERE to be found!
Current Help project - Folder Contents: My <B>.../!SSL!/WebHelp</B> folder for the project contains .htm files, .css files, a project .ALI file, eHelp.xml, RoboHHRE.lng, ehlpdhtm.js, related image files, and an IMAGES subfolder. No other subfolders.
Old Help project - with Whdata(etc) Subfolders: I found an old project, verified that the WebHelp folder had the Whdata (etc) subfolders, made no changes to the layout's settings (Note that they are the same settings as in my current Help project), and then created the WebHelp. Now, THIS project does NOT have the Whdata (etc) subfolders.
So, does it sound like there is something wrong with my RoboHelp installation?
If you deselect the TOC, Index and Search options in the output setup, you won't see a navigation frame at the left. But even then, RH puts the three wh.... folders in the WebHelp output directory. The basic files are there, but with no data.
Before you go to the trouble of uninstalling and reinstalling RH5:
Have you tried starting with a completely fresh WebHelp output layout? If you've been working with the default layout only, back up the entire project, open it in RH and trash the WebHelp layout. Close and relaunch RH. It will re-create the default WebHelp layout .
When you say "old project," was it done with RH X5 or an earlier version? When you installed X5, did you uninstall any previous versions? Did you install the X 5.0.2 upgrade?
I just had this flash: You say you have a project .ali file. Isn't that a project source file?
The output folder also should have project .log, project_csh.htm and project_rhc.htm files.
If what Harvey offered doesn't help, there are another couple of things to check.
1. If you have recently re-imaged your PC and RoboHelp has been re-installed fresh, it could be that it wasn't correctly installed. What I mean by this is that if your IT folks re-installed it and was not logged in as you with full admin rights, it's probably not correctly installed. Being unable to generate WebHelp is a definite symptom of this. I'm working with a class at the moment that has at least three users with this very issue.
2. Assuming RoboHelp has been correctly installed, the problem could be related to something called your "XML Parser". I once had this issue and was suddenly unable to generate WebHelp. To re-install the parser, try clicking here and investigating some of the links.
Hopefully something here helps... Rick
The RH default location for output files is, indeed, the !SSL!\WebHelp directory. If someone before you has configured a different path for the output, however, that's where the output is being generated to.
Please verify that the path in the box under "Select Output Folder and Start Page" panel of the first window of the generate process remains the default for this layout, or that it's been changed.
Thank you all for your input.
Per my previous emails, I have verified the output location (many times) - no whdata (etc) folder. I have created new projects and used the RH default layout - with the same result. - no whdata (etc) folder.
Re: the RH install. Machine was clean before installing RH. I was the Admin on my PC. The 5.02 upgrade is installed. And, I am NOT generating Help across our corporate network.
So... Rick, I'm going to check out your XML parser idea.
I will post the results.
Thanks - Cate
PS - I have been using RH for more years than I care to say - :) - this is the first time I have experienced this situation!
You say your PC was clean before the RH install.
If that means what I think it means, you started with a clean slate, maybe with a disk image configured in your IT department.
Is it possible that your personal user profile is missing some permissions for writing to the hard drive?
(Grasping at straws here.)
|
OPCFW_CODE
|
Maybe a bug
Thanks for this excellent work! It really helps me a lot. But I guess there exists a bug at line 237 in model.py
Your implementation is U1 = U1.view(-1, uw * uh , ch), U2 = U2.view(-1, uwuh, ch), but actually, U1's shape is (batch_size, channel, w, h), if you simply view it as (batch, wh, c), actually the last dimension is not 3 channel pixels but pixels in width dimension.
Hey tiandunx, it's not a bug. See my explanation below:
U1, U2 and U3 shape is: (batch_size, channel, h, w). I reshape it to: (batch_size, h*w, channel). I concatenate U1, U2 and U3 along dim 1 to get U whose shape is: (batch_size,h*w*3,channel). Then I do matrix multiplication with a S1, S2 and S3 whose shape is: (batch_size, n_hat, n) where n=hw3 and n_hat=7. This gives me Ubar_1, Ubar_2, and Ubar_3 whose shape is: (batch_size, n_hat, channel)
Please note that channel does not refer to image channels (i.e. RGB/3) here. These are actually output channels that we get after final convolution in MultiStreamMultiStage module. It is 64 in this case.
Thanks for your explanation. For sure we should do matrix mulitplication along channel dimension as what you've done here. But what I am worried about is that since U1(U2,U3) 's shape is (batch_size, channel, h, w) and its memory in physical storage is width last and its memory format is( width_pixel_0, width_pixel 1, width_pixel_2). In memory format, 2 consecutive numbers are width instead of channels. Even if you reshape it as (batch_size, h*w, C), but you didn't change the physical memory layout. Let me show you an example.
x shape is 2x2x3 where the first 2 represents channels, height = 2, width = 3, but if we simply view it as (height * width, channels), then the last 2 consecutive number are not channel number, it's witdh. But 1,7 are expected behavior.
Hey tiandunx, thank you for explaining the bug to me. I understand what you mean now! I will make some time and update the repository fixing this bug. I will also look out for similar bug elsewhere in the code. I believe the right fix for this would be: U1 = torch.transpose(U1.view(-1,ch,h*w),1,2). Do you agree?
Agreed! In fact, you may simplify the code. Here is what I reimplement it.
U1 = U1.view(batch_size, ch, -1), U2 = U2.view(batch_size, ch, -1) , U3 = U3.view(batch_size, ch, -1)
Then U = torch.cat((U1, U2, U3), dim=2), U_bar1 = torch.bmm(S1, torch.transpose(U, 1, 2))
On more question that really puzzles me. When I change use your original code, everything works fine. Bug when I fix this potential bug, then Pytorch complains the following warning message.
Warning: Mixed memory format inputs detected while calling the operator. The operator will output channels_last tensor even if some of the inputs are not in channels_last format. (function operator())
But when I change the code back just as your current implementation. The warning message is gone. I cannot find a workaround. Do you have any idea?
Where did you encounter this warning? Right now I have made the fix and tried retraining the model again using 'Train Model' notebook and I did not get any warning. My code is:
U1 = U1.view(-1,ch,uh*uw)
U2 = U2.view(-1,ch,uh*uw)
U3 = U3.view(-1,ch,uh*uw)
U = torch.cat((U1,U2,U3),dim=2)
U = torch.transpose(U,1,2)
Ubar_1 = torch.matmul(S1,U)
...
I just looked up on this warning. It seems it is because when you are taking transpose, you are actually causing U tensor to become non-contiguous i.e. its physical memory layout is the same but order of strides has changed. S1 tensor is contiguous so when you call batch-matrix multiply operator, you are giving pytorch contiguous and non-contiguous tensors. I believe pytorch operators by default expects tensors to be contiguous. To remove this warning, you can do U.transpose(U,1,2).contiguous(). This will copy the tensor and reorder tensor in physical memory to preserve contiguity. This should remove the warning that you are having, let me know if it works.
Well, it works fine for me. When I change my pytorch from 1.6.0 to 1.7.0, the warning message is gone. I guess it's related to some specific version. Thank you. I'll spend some time working to find if any improvement can be made after fixing this bug.
|
GITHUB_ARCHIVE
|
The LeanIX Technology Risk and Compliance is key to successfully realizing obsolescence risk management use cases. It enriches your organization's LeanIX usage with capabilities to proactively identify, assess, and mitigate risks associated with their IT systems and processes. Organizations can protect their valuable assets by effectively managing technology risks, maintaining operational stability, ensuring business continuity, and complying with relevant regulations and standards.
This article explains:
- what are the key capabilities of the LeanIX Technology Risk and Compliance.
- how to start building your LeanIX repository and managing technology risk for your organization by leveraging the Technology Risk and Compliance.
- how to visualize your technology risk with Reports and manage risk mitigation.
- how to use dashboards to provide aggregated reporting to stakeholders.
LeanIX Technology Risk and Compliance adds the following capabilities and features to your LeanIX Enterprise Architecture workspace that are critical to running an obsolescence risk management use case successfully:
- Integration to ServiceNow to discover IT components automatically and manage Applications and lifecycles in LeanIX
- Synced to ServiceNow, allowing real-time mapping between Application Fact Sheet in leanIX and used software/hardware in ServiceNow
- Access to Lifecycle Catalog to retrieve lifecycles for IT Component Fact Sheets
- Three additional Technology Obsolescence Views: Mitigated risk, missing data, and unaddressed risk percentages
- Dedicated Obsolescence Risk Management dashboard
The following sections explain how to get started with LeanIX Technology Risk and Compliance and how to get the first value for your obsolescence risk management use case.
To initiate your first obsolescence risk assessment with LeanIX Technology Risk and Compliance, you must populate LeanIX with essential data, including Applications Fact Sheets and the underlying IT Components. This foundational step ensures you have a comprehensive inventory of your software assets, a prerequisite for conducting meaningful risk assessments. If your organization maintains a Configuration Management Database (CMDB), LeanIX offers seamless integration options. This integration streamlines the process of bringing your software asset data into LeanIX. Integration with your existing CMDB ensures that LeanIX is continuously updated with the latest information, providing a real-time view of your technology landscape.
LeanIX also provides an out-of-the-box ServiceNow Integration, a widely used IT Service Management (ITSM) platform. This integration simplifies the transfer of data from ServiceNow to LeanIX. Leveraging the ServiceNow integration expedites the data migration process, allowing you to build a comprehensive inventory in LeanIX quickly.
Companies who manage the discovery of their software assets with ServiceNow can configure our integration for a faster time to value, compared to the manual creation of Fact Sheets in LeanIX. Furthermore, the relationships between Applications and IT Components can be imported depending on your ServiceNow setup. This allows management of technology risks at an application level thanks to LeanIX’s ability to roll up the risks of smaller IT Components.
You can find the detailed instructions for the integration in this section: ServiceNow Integration
The Lifecycle Catalog provides lifecycle information for IT Components in your LeanIX workspace, enabling users to understand technology obsolescence risks for their technology landscape and make better upgrade and transformation decisions.
Linking an IT Component to the Lifecycle Catalog provides:
- Descriptive information for easier recognition of components & their lifecycle.
- Automated relations modeling for Providers and Tech Categories with the Tech Category Catalog.
You also get access to information like Support Policies, Descriptions, direct URLs to IT Component and Provider websites, and more.
You can find the detailed instructions for the integration in this section: Lifecycle Catalog.
- Use the Lifecycle Catalog Bulk Linking page with Confidence Level filters to quickly link the identified High Confidence matching recommendations.
- Identify your mission-critical IT Components next and link them to the Lifecycle Catalog items using recommendations provided for each.
- Raise Data Requests in-tool if you need information about an IT Component that is not already included in the Lifecycle Catalog.
Step 3: Visualize your Technology Obsolescence Risk with Reports and a dedicated Dashboard and manage risk mitigation
The Technology Risk and Compliance comes with specific views in the Report section and with a dedicated dashboard. These views help you to manage your obsolescence risk mitigation processes.
1. Make sure you capture all relevant lifecycle information so that you can prioritize risk mitigation next
- In the Dashboard, the module Data Completeness provides you with a good summary of data missing. A click on the row brings you to the inventory, where you can manage the process for data completion
- With the Report view Obsolescence: Missing Data Percentage, you analyze for what relevant applications in the context of business capabilities you still have lifecycle information missing.
Evaluate based on business capability criticality what related IT components to evaluate first and focus on end-of-life risks first before you continue with IT components that are in a “phase out” state.
- Use the Application Matrix report with Obsolescence: Aggregated Risk view to analyze what applications have end-of-life risk versus phase-out risk. Use business criticality or any other dimension of your choice to prioritize what applications to evaluate first
- Finally, trigger relevant stakeholders to evaluate the applications they are responsible for. The best practice is to use our Surveys or To-Dos.
During evaluation, stakeholders might identify that data was incomplete/faulty. If not, they decide to either accept the risk or address it by taking transformation actions.
- Mark the IT components with “risk accepted” to signalize the evaluation completion for this IT component so that you can discard it from the next prioritization round.
- For the IT components that you decided to address the risk for, use the “Upgrade a technology” template from the LeanIX Architecture and Road Map Planning to unlock the full potential of planning these changes. An alternative is to use the LeanIX Initiative Fact Sheet, through which you can plan different ways to tackle that risk.
- Mark the IT component you will mitigate the risk for, with “risk addressed” afterward.
- The Dashboard provides a great summary of Applications: Unaddressed Obsolescence Risk and “Applications: Addressed Obsolescence Risk” to report progress on a high level.
- Use the Report view Obsolescence: Mitigated Risk Percentage to keep track of your progress on the application level. As a best practice, to manage risk comprehensively across your application portfolio, you would seek to reach 100% coverage of Risk Accepted or Risk Addressed.
Updated 25 days ago
|
OPCFW_CODE
|
Variadic Function caches argument list of last calls
I wrote a Helper class with c functions for an iOS Library with the following pattern.
There are 2 wrapping (variadic) functions, which finally call the same function, with slightly different parameter. Idea is to have "default" properties being set.
__attribute__((overloadable)) void func1(NSString* _Nonnull format, ...);
__attribute__((overloadable)) void func1(int param1, NSString* _Nonnull format, ...);
Both will then call the following function:
void prefixAndArguments(int param1, NSString* _Nonnull format, va_list arguments);
Implementation as followed:
__attribute__((overloadable)) void func1(NSString* _Nonnull format, ...)
{
va_list argList;
va_start(argList, format);
prefixAndArguments(0, format, argList);
va_end(argList);
}
__attribute__((overloadable)) void func1(int param1, NSString* _Nonnull format, ...)
{
va_list argList;
va_start(argList, format);
prefixAndArguments(param1, format, argList);
va_end(argList);
}
void prefixAndArguments(NMXLogLevelType logLevel, NSString* _Nullable logPrefix, __strong NSString* _Nonnull format, va_list arguments)
{
// Evaluate input parameters
if (format != nil && [format isKindOfClass:[NSString class]])
{
// Get a reference to the arguments that follow the format parameter
va_list argList;
va_copy(argList, arguments);
int argCount = 0;
NSLog(@"%d",argCount);
while (va_arg(argList, NSObject *))
{
argCount += 1;
}
NSLog(@"%d",argCount);
va_end(argList);
NSMutableString *s;
if (numSpecifiers > argCount)
{
// Perform format string argument substitution, reinstate %% escapes, then print
NSString *debugOutput = [[NSString alloc] initWithFormat:@"Error occured when logging: amount of arguments does not for to the defined format. Callstack:\n%@\n", [NSThread callStackSymbols]];
printf("%s\n", [debugOutput UTF8String]);
s = [[NSMutableString alloc] initWithString:format];
}
else
{
// Perform format string argument substitution, reinstate %% escapes, then print
va_copy(argList, arguments);
// This is were the EXC_BAD_ACCESS will occur!
// Error: Thread 1: EXC_BAD_ACCESS (code=EXC_I386_GPFLT)
s = [[NSMutableString alloc] initWithFormat:format arguments:argList];
[s replaceOccurrencesOfString:@"%%"
withString:@"%%%%"
options:0
range:NSMakeRange(0, [s length])];
NSLog(@"%@",s);
va_end(argList);
}
...
}
My Unit Tests for the function look the following (order is important).
// .. some previous cases, I commented out
XCTAssertNoThrow(NMXLog(@"Simple string output"));
XCTAssertNoThrow(NMXLog(@"2 Placeholders. 0 Vars %@ --- %@"));
The crash happens when I want to use the arguments and the format (making format strong did not solve the problem, and does not seem being part of the problem, see below):
s = [[NSMutableString alloc] initWithFormat:format arguments:argList];
Here is the Log:
xctest[28082:1424378] 0
xctest[28082:1424378] --> 1
xctest[28082:1424378] Simple string output
xctest[28082:1424378] 0
xctest[28082:1424378] --> 4
Of course we won't see the desired string "2 Placeholders. 0 Vars %@ --- %@" as the crash happened before.
So, the question is now: Why is the amount of arguments now being 4 instead of 0? As none being passed in the second call, are the arguments being collected when the function is being called immediately again?
So, I started to call the function "again" to make sure the argument's list is being cleared, although va_end was being called:
__attribute__((overloadable)) void func1(NSString* _Nonnull format, ...)
{
va_list argList;
va_start(argList, format);
prefixAndArguments(none, nil, format, argList);
va_end(argList);
NSString *obj = nil;
prefixAndArguments(none, nil, obj, nil);
}
This does work now like a charm (argument's list is being cleared and the desired output is being received):
xctest[28411:1453508] 0
xctest[28411:1453508] --> 1
xctest[28411:1453508] Simple string output
xctest[28411:1453508] 0
xctest[28411:1453508] --> 1
Error occured when logging: amount of arguments does not for to the defined format. Callstack: ....
xctest[28411:1453508] 2 Placeholders. 0 Vars %@ --- %@
Here is finally my question:
What is the reason for this behavior and how can I avoid it? Is there a better way to solve the issue than "stupidly" calling the function a second time with "no" arguments to clear the them?
P.s. I tried not to use macros, because I consider them as more error prone than c functions. See this thread: Macro vs Function in C
Ask yourself this: how does va_arg know when to stop?
This seems to be a heck of a lot of trouble to go through to implement an optional first argument. How about instead just ... not.
@JohnBollinger thank you for your inout John. For reasons of simplicity I just mentioned two functions with one optional parameter. Of course, there are more.
Is the NMXLogWithPrefixAndArguments() function you've presented supposed to be the same thing as the prefixAndArguments() function you talk about and your code calls?
Thank you for your hint, I missed this when preparing this post, this one is supposed to also be prefixAndArguments(I have updated the question)
You appear to have some misconceptions about variadic functions, exemplified by this approach to counting the variable arguments:
while (va_arg(argList, NSObject *))
{
argCount += 1;
}
That code assumes that the variable arguments have at least one member, that all of them are of type NSObject *, and that the list will be terminated by a null pointer of that type. None of those is guaranteed by the system, and if those assumptions are not satisfied then the behavior of one or more va_arg() invocations will be undefined.
In practice, you can probably get away with actual arguments that are pointers of other types (though formally, the behavior will still be undefined in that case). If the arguments may have non-pointer types, however, then that approach to counting them is completely broken. More importantly, your test cases appear to assume that the system will provide a trailing NULL argument, but that is in no way guaranteed.
If your function relies on the end of the variable argument list being signaled by a NULL argument, then it is relying on the caller to provide one. It is very likely the absence of null termination in your argument lists that gives rise to the behavior you are asking about.
Thank you John for clarifying that. The function shall be used in a library, thus I want to ensure users do not crash their code, if they do not append the expected sentinel value. Can one append a such to the argument's list before calling the wrapped function prefixAndArguments?
No, @Lepidopteron, one cannot. But you could, conceivably, provide function-like macros for the users that wrap the real functions and append (NSObject *) NULL to the users' argument lists. That still won't protect you from misbehavior if variable arguments of non-pointer type are passed, however. Variadic functions place substantial responsibility on users to pass appropriate arguments; the compiler cannot check them.
@Lepidopteron You can get a compiler warning if the list is not properly terminated by using the NS_REQUIRES_NIL_TERMINATION macro. However, as John pointed out, this only works if all the args are all pointer types. If you expect primitives to be passed as well (which seems pretty likely), you must obtain a count of the number of items to read. This can be done either by counting the format specifiers inside your function (that's how printf/stringWithFormat: do it), or by adding an explicit count parameter.
@JoshCaswell cool thank you Josh. You are right, I do also have primitive data types being passed along. The counting of format specifiers is already in place, to ensure the app would not crash if one would pass not enough values for the given format tho. E.g. I want to ensure the amount of given parameters does not exceed the amount of format specifiers in the format-string. Additionally, I wonder how Apple accomplished that kind of function in their NSLog statement, as it obviously also uses the va_list for NSLogv. Am I thinking too complicated? Anyhow, this discussion enlighted me a lot!
@Lepidopteron Glad it's helpful. NSLog() does the same thing: count the format specifiers, and stop calling va_arg() when you've pulled that many items off the list.
@JoshCaswell I stumbled over the possibility that there might be more format specifiers being provided than arguments passed, but I think I just have to implement it a bit smarter than either the minimum amount of va_args or format specifiers is taken into consideration.
|
STACK_EXCHANGE
|
This is an application that allows you to pick and choose which of the many context menu add-ons you would like enabled in your browser. Internet Explorer does not support nested context menus so this I wrote this application to make it easy to enable and disable different menu items you may want to use (instead of separate scripts). The context menu items are broke up into 8 groups: Share-it, Analyze-it, Analyze-it-Scritch, Analyze-it-Viewdns, Plurk-it, Google-it, Look-it-up, and IE-Utility. The IE-Utility menu items are usually located in the Tools menu in the menubar on top which isn’t shown by default (just hit the Alt key and it will pop open temporarily).
There are also a couple of registry tweaks that I included in this application. One that adjusts the size of the thumbnails (on hover) of open applications on the taskbar. And another to give you more New Tab entries. There is also a reset button for the New Tab items you removed and want back. The New Tab functionality is generated automatically based on your browsing (you don’t get actual Speed Dial functionality – Internet Explorer chooses everything you see)
There are a few changes in this new version that should be mentioned. The user interface has changed, the code was ported to be compiled, and there is now a version check that will help keep the application current.
In this version the user interface has been changed to be more convenient. I traded the scripted multi-screen interface for a more basic checkbox based dialog written using WinForms. Its just easier to use having everything in front of you on one dialog while making choices.
The code was ported from jscript/hta to nice modern Visual Basic 14. This should improve application start up time as well as use less memory while running. This compiled version of the application will perform better overall.
The scripting run time has a large overhead and is not maintained by Microsoft as other technologies are. For instance, when scripting a hypertext application the jscript has to target Internet Explorer 4/5 by default. By changing properties on the hta you can target IE8. I would have considered keeping the application scripted if Microsoft ported the hypertext application system to use the Edge or Internet Explorer 11 runtime.
There is now a version check enabled in the application. This application uses many external tools and features that are completely out of my control. As such, I’ve founds the application needs to be updated more often than other projects I’ve coded. In an effort to keep the application current I enabled a version check that checks weekly in the background that will somewhat automate the process of upgrading. By pressing the “Install new version” button the application will download the updated application in the background (no browser needed) to the Temp folder and start the installer.
The installer has also been tweaked to automatically start the old version uninstaller application (if it detects one) instead of the “do or die” message box that used to pop up. As there is a dependency on .NET 4.5, the installer will detect and start the web installer for the .NET framework as needed (this should only happen in Windows 7 without .NET 4.5).
Why write and maintain this application if Edge is the newest thing? Because desktop users still prefer Internet Explorer 11 over Edge. Because I can't see Microsoft getting by with only a browser that doesn't run plug-ins. Because it's still included in the operating system.
Where is it? Microsoft Internet Explorer 11 is located inside the Windows Accessories folder on the Windows 10 Start Menu. Or you could just click on the symbol in the upper left corner of my application.
Error when executing `npm run build-assets`
I tried to install the application and ran into a problem when executing
npm run build-assets
Server Info:
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
node -v
v9.3.0
npm -v
5.5.1
The error is:
npm run build-assets
<EMAIL_ADDRESS>build-assets /home/ubuntu/ecommerce_project/ecommerce
node ./node_modules/webpack/bin/webpack.js -p
events.js:136
throw er; // Unhandled 'error' event
^
Error: write EPIPE
at _errnoException (util.js:999:13)
at WriteWrap.afterWrite [as oncomplete] (net.js:883:14)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR<EMAIL_ADDRESS>build-assets: node ./node_modules/webpack/bin/webpack.js -p
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the<EMAIL_ADDRESS>build-assets script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/ubuntu/.npm/_logs/2018-01-04T21_30_43_099Z-debug.log
Any idea?
Thanks!
I'm getting the same issue too.
npm ERR! code ELIFECYCLE
npm ERR! errno 137
npm ERR<EMAIL_ADDRESS>build-assets: `webpack -p --progress`
npm ERR! Exit status 137
npm ERR!
npm ERR! Failed at the<EMAIL_ADDRESS>build-assets script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/vagrant/.npm/_logs/2018-06-14T07_05_40_468Z-debug.log
I have libfontconfig-dev installed and ran npm install before this step.
node --version v10.4.1
npm --version 6.1.0
Does installing those packages help?
apt install libffi-dev libjpeg-dev zlib1g-dev python-cairocffi libcairo2-dev gir1.2-pango
For anyone still trying to figure this out – the solution for me was to increase the memory size.
I was running a 1GB RAM DigitalOcean Ubuntu 16.04.4 x64 Droplet. After increasing the memory size to 8GB (not sure how much is actually needed) everything ran and completed without error.
Maybe splitting “npm run build-assets” into smaller steps would solve the problem for low-memory systems.
Good luck! 🍀
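For reference, exit status 137 is 128 + 9, i.e. the process received SIGKILL, which on Linux usually means the kernel OOM killer terminated webpack mid-build. A cheaper alternative to resizing the droplet is adding swap (a sketch; assumes root access and that /swapfile is unused):

```shell
# Add a 2 GB swap file so the webpack build survives memory spikes
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
free -h   # verify the swap is active
```

Swap is much slower than RAM, so the build will take longer, but it should complete instead of being killed.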
I'm getting a similar error with webpack on Ubuntu 18.04 with libfontconfig-dev and the other packages suggested by @NyanKiyoshi installed:
npm ERR! code ELIFECYCLE
npm ERR! errno 2
npm ERR<EMAIL_ADDRESS>build-assets: `webpack -p`
npm ERR! Exit status 2
npm ERR!
npm ERR! Failed at the<EMAIL_ADDRESS>build-assets script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
node --version 8.10.0
npm --version 5.6.0
@bdferguson as of today, Saleor requires Node.js 10+. Try upgrading that first. If the issue is still there, could you show the full stack? I'm guessing this was not the full output (may be wrong).
As @Korred suggested, this seems to be related to RAM. I increased the server memory from 1GB to 3GB and it worked just fine.
Disclaimer: This outline is sourced directly from the AP Microeconomics Course Framework released by the College Board. This is a lightweight, web-friendly format for easy reference. Omninox does not take credit for this outline and is not affiliated with the College Board. AP is a reserved trademark of the College Board.
Topic 4.1 - Imperfect Competition
PRD-3.B.a: Define (using graphs where appropriate) the characteristics of imperfectly competitive markets and inefficiency.
- PRD-3.B.1: Imperfectly competitive markets include monopoly, oligopoly, and monopolistic competition in product markets and monopsony in factor markets.
- PRD-3.B.2: In imperfectly competitive output markets and assuming all else is constant, a firm must lower price to sell additional units.
- PRD-3.B.3: In imperfectly competitive markets, consumers and producers respond to prices that are above the marginal costs of production and/or marginal benefits of consumption (i.e., price is greater than marginal cost in an inefficient market).
- PRD-3.B.4: Incentives to enter an industry may be mitigated by barriers to entry. Barriers to entry—such as high fixed/start-up costs, legal barriers to entry, and exclusive ownership of key resources—can sustain imperfectly competitive market structures.
Topic 4.2 - Monopoly
PRD-3.B.b: Explain (using graphs where appropriate) equilibrium, firm decision making, consumer surplus, producer surplus, profit (loss), and deadweight loss in imperfectly competitive markets and why prices in imperfectly competitive markets cannot be relied on to coordinate the actions of all possible market participants and can lead to inefficient outputs.
- PRD-3.B.5: A monopoly exists because of barriers to entry.
- PRD-3.B.6: In a monopoly, equilibrium (profit-maximizing) quantity is determined by equating marginal revenue (MR) to marginal cost (MC). The price charged is greater than the marginal cost.
- PRD-3.B.7: In a natural monopoly, long-run economies of scale for a single firm exist throughout the entire effective demand of its product.
Topic 4.3 - Price Discrimination
PRD-3.B.c: Calculate (using data from a graph or table as appropriate) areas of consumer surplus, producer surplus, profit (loss), and deadweight loss in imperfectly competitive markets.
- PRD-3.B.8: A firm with market power can engage in price discrimination to increase its profits or capture additional consumer surplus under certain conditions.
- PRD-3.B.9: With perfect price discrimination, a monopolist produces the quantity where price equals marginal cost (just as a competitive market would) but extracts all economic surplus associated with its product and eliminates all deadweight loss.
Topic 4.4 - Monopolistic Competition
PRD-3.B.b: Explain (using graphs where appropriate) equilibrium, firm decision making, consumer surplus, producer surplus, profit (loss), and deadweight loss in imperfectly competitive markets and why prices in imperfectly competitive markets cannot be relied on to coordinate the actions of all possible market participants and can lead to inefficient outputs.
PRD-3.B.c: Calculate (using data from a graph or table as appropriate) areas of consumer surplus, producer surplus, profit (loss), and deadweight loss in imperfectly competitive markets.
- PRD-3.B.10: In a market with monopolistic competition, firms producing differentiated products may earn positive, negative, or zero economic profit in the short run. Firms typically use advertising as a means of differentiating their product. Free entry and exit drive profits to zero in the long run. The output level, however, is smaller than the output level needed to minimize average total costs, creating excess capacity. The price is greater than marginal cost, creating allocative inefficiency.
Topic 4.5 - Oligopoly and Game Theory
PRD-3.C.a: Define (using tables as appropriate) key terms, strategies, and concepts relating to oligopolies and simple games.
PRD-3.C.b: Explain (using tables as appropriate) strategies and equilibria in simple games and the connections to theoretical behaviors in various oligopoly market and non-market settings.
PRD-3.C.c: Calculate (using tables as appropriate) the incentive sufficient to alter a player’s dominant strategy.
- PRD-3.C.1: An oligopoly is an inefficient market structure with high barriers to entry, where there are few firms acting interdependently.
- PRD-3.C.2: Firms in an oligopoly have an incentive to collude and form cartels.
- PRD-3.C.3: A game is a situation in which a number of individuals take actions, and the payoff for each individual depends directly on both the individual’s own choice and the choices of others.
- PRD-3.C.4: A strategy is a complete plan of actions for playing a game; the normal form model of a game shows the payoffs that result from each collection of strategies (one for each player).
- PRD-3.C.5: A player has a dominant strategy when the payoff to a particular action is always higher independent of the action taken by the other player.
- PRD-3.C.6: A Nash equilibrium is a condition describing the set of actions in which no player can increase his or her payoff by unilaterally taking another action, given the other players’ actions.
- PRD-3.C.7: Oligopolists have difficulty achieving the monopoly outcome for reasons similar to those that prevent players from achieving a cooperative outcome in the Prisoner’s Dilemma; nevertheless, prices are generally higher and quantities lower with oligopoly (or duopoly) than with perfect competition.
Flutter/Dart: Concurrent Modification Exception without changing list elements
While testing and debugging an app, I noticed an exception that mostly happens during debug testing only, inside a for-loop that iterates over a list:
[ERROR:flutter/lib/ui/ui_dart_state.cc(177)] Unhandled Exception: Concurrent modification during iteration: Instance(length:0) of '_GrowableList'.
I have searched around and found that it mostly happens if you change the list itself during the iteration, but I cannot see where that happens in the code:
Main function:
static Future<void> save(EntryModel entry) async {
...
List<TagModel> tagsList = entry.tags;
List<int> tagIdsInserted = [];
if (tagsList != null && tagsList.isNotEmpty) {
for (TagModel tag in tagsList) {
//Error happens inside this loop
int tagIdInserted = await TagContract.save(tag); //this function does not alter the tag in any way.
if (tagIdInserted == null || tagIdInserted <= 0) {
throw Exception('Invalid TagID!');
}
tagIdsInserted.add(tagIdInserted);
}
}
What happens is that the first iteration runs fine, but on the second or third one the List<TagModel> tagsList suddenly becomes empty, including in the original object (the entry passed to the function).
I also noticed that runs without debugging mostly work, but I am not sure if that is just because I am not catching the error.
Thanks in advance.
Does TagModel hold a reference to its EntryModel? What does TagContract do on save? Is there some Model manager that may be clearing an EntryModel's tags? There's not enough information to troubleshoot the issue. You'll have to use the debugger to step through the code.
@hola No, the TagModel does not contain any references to the Entry. TagContract records the TagModel information in an SQLite database. I tried to debug the code and noticed that on the second or third loop iteration the EntryModel clears its reference to the TagModel list for some reason. I still have no idea what the origin of the problem could be, but First_Strike's answer seems to clarify some of the issues.
Try to avoid using await inside a loop; it is just too dangerous.
You have to understand how asynchronous code executes. If an await is encountered and the Future cannot complete synchronously, the runtime will suspend execution of the function and jump to whatever other jobs are at the top of the queue.
So when the await is encountered, the runtime starts executing some god-knows-where code, and that code touched your tagsList.
Try to understand the following example. It directly triggers the exception:
void main() {
  List<int> ids = [1, 2, 3];
  test(ids); // Not awaited: test() suspends at its first await...
  ids.add(1); // ...so this line runs next and mutates the list mid-iteration.
}

Future<void> test(List<int> ids) async {
  for (final id in ids) {
    await Future.delayed(Duration(milliseconds: 10));
  }
}
In async programming, avoid writing an await that depends on exposed shared state.
For a list of async tasks, always prepare them in an Iterable<Future>, then use Future.wait to synchronize them and get the result in a single await.
For your code
final results = await Future.wait(tagsList.map((tag) => TagContract.save(tag)));
I tested the code with your suggestion. So far it seems to be working, but I am still not sure that this was the root of the problem. I will read the documentation on how Futures work, but thanks for the advice nonetheless.
Dart's Future works much like JavaScript's Promise; you can refer to both sets of documentation. It's better to prepare and use local state before an await, because await means "here the runtime scheduler may execute arbitrary other tasks while waiting".
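Since the thread draws the Future/Promise parallel, here is the same "start everything first, await once" pattern sketched in JavaScript, with a hypothetical async `save` standing in for TagContract.save:

```javascript
// Start all the async tasks up front, then synchronize with a single await.
async function saveAll(tags, save) {
  // Mapping before any await snapshots the work, so code that runs while
  // we are suspended can no longer interleave with a live iterator.
  return Promise.all(tags.map((tag) => save(tag)));
}
```

Compare with the for-loop version, which yields control back to the event loop on every iteration and so exposes the shared list between awaits.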
Email Auto-Replies not going to Reply-To Address
I have a web app that sends emails via SMTP on behalf of the user to their customers. I am able to put the user's email in the Reply-To of the email, and this works for normal email use. But when the recipient has an auto-responder, or the entered email address was incorrect, the auto-replies go to the sender, not the Reply-To. The sender inbox is unmonitored.
The sender email service uses an Office 365 account.
What are my options to get the auto-replies and returned emails to go to the Reply-To address?
Why do you need to care about auto-responders? Do you need to know that an email was sent to an invalid email address? If so, then you receive the email correctly, on the sender's side, because the email has never been replied to. Am I right?
@EugeneAstafiev The auto-responders often contain important information like "the billing department contact email changed to ..." and the user will never see that.
This may be because the autoresponders don't regard their messages as replies, but rather as messages from the email system itself.
Your mail has three or four addresses related to the sender, which generally show up at the recipient as Return-Path, Sender, From and Reply-To. Return-Path is who should get error messages and other messages from the email system itself, Sender and From are the address that should be displayed as having sent the mail and Reply-To is the address to which the addressee's replies should be directed. (Sender and From are only very rarely different these days, but historically Sender might be e.g. a particular member of a team while From is the team's shared address.)
Many autoresponders respond to the address that shows up as Return-Path in the final message (it's also called the envelope from address), so your options are:
use the user's address as envelope from
set up a forwarding scheme so that autoresponses are suitably forwarded
The first is very tricky wrt. DKIM, DMARC etc, so you'll probably find the second one simpler, even though it requires you to filter spam and perhaps more.
Thanks for the thoughtful answer! I've set sender, from, reply-to, and return-path, but none seem to work. What I was thinking is "injecting" some guid(linked to sender) into the email body and then I would need to poll the inbox for unread emails, scan the email for the guid, and forward that email to the corresponding sender. Not sure how well this would work or if it's a sound idea. Is there a better/robust place I can place/hide the guid?
What you suggest is quite common, except that most people put the GUID in the SMTP sender address (the one called Return-Path in the finally delivered message). It's often called VERP, and you see it in a lot of mail: Return-Path: <EMAIL_ADDRESS>
Also, separately: You shouldn't need to poll. Sensible mail servers can execute code when a message arrives. Your code looks up the GUID, finds the proper recipient, filters away spam and forwards the message unless it's spam. Simple, except for the italicised bit.
This is Office 365 email, so I don't have control of the server... that I'm aware of.
Ah, Well, as they say, you can always make a simple problem unmanageably complicated by adding a few restrictions.
I ended up just adding a GUID to the References email header when sending and logging it in the database. A background service then checks the inbox every 5 minutes for unread emails, searches the headers for references, and if it finds one, looks it up in the database to match the sender and forwards the email to them.
editable notices
This fixes #28
[x] introduces notices (with title, content and a language)
[x] introduces pinned notices with a fixed set of enumerated names
[x] render notice content as markdown (using blackfriday/v2)
[x] have some defaults (currently done by the migrations, might want to revisit with a nicer way (large text inside raw INSERT INTO can be finicky))
[x] simple admin UI to edit/update notices
[x] link them up in the UI
[x] remove /news/
Future TODO:
style all the new layouts :see_no_evil:
styling of the markdown
wire up the language picking with dropdowns and session cookies
more tests for the HTML UI
One thing I'm side-stepping for somebody else: while you can create new notices that aren't assigned to a pinned name (to create custom pages), there is no way to link them in the layout except from other notices; i.e. you can't edit the menu(s) like you might in a proper CMS. I think this is fine for now.
@staltz could you take a look at 1 and 2 of the _future TODO_s?
There are three new templates, notice/show, notice/list and admin/notice-edit. I re-used the edit template for the draft new translation mode, too.
The list and show template have links for edit when an admin is logged in. What do you think about this? I didn't want to overload the admin interface with more new pages.
I also only placed a list of the notices at the top in the "login nav" since we don't have a footer yet. Doing this properly would need a little template helper à la {{notice "CodeOfConduct"}} which looks up the notice and calls urlTo with the ID, but I think this is fine for now.
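The helper idea above could be sketched roughly like this; the notice store, route shape, and urlTo stand-in are assumptions for illustration, not the actual app code:

```go
package main

import (
	"bytes"
	"fmt"
	"html/template"
)

// Notice is a hypothetical record; the real app would look these up
// in the notices database by pinned name.
type Notice struct {
	ID    int
	Title string
}

var noticesByName = map[string]Notice{
	"CodeOfConduct": {ID: 1, Title: "Code of Conduct"},
}

// noticeURL resolves a pinned notice name to its show URL.
// The real app would call urlTo with a named route and the ID.
func noticeURL(name string) (string, error) {
	n, ok := noticesByName[name]
	if !ok {
		return "", fmt.Errorf("unknown notice %q", name)
	}
	return fmt.Sprintf("/notice/show/%d", n.ID), nil
}

// renderNav shows the helper in use inside a layout fragment.
func renderNav() (string, error) {
	tmpl, err := template.New("nav").Funcs(template.FuncMap{
		"notice": noticeURL,
	}).Parse(`<a href="{{notice "CodeOfConduct"}}">Code of Conduct</a>`)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, nil); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderNav()
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Registering the lookup as a template func means layouts can link pinned notices by name without the handler pre-resolving every URL.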
I thought 4 and maybe 3 might be nice for Alex next week to get a practical intro of how everything works.
I'll review now :)
could you take a look at 1 and 2 of the future TODOs?
Yes. Do you mean that I would do it in this PR, or in other PRs post-merging? I prefer the latter
The list and show template have links for edit when an admin is logged in. What do you think about this? I didn't want to overload the admin interface with more new pages.
Ok. I'll take a look how the overall UI structure is, will still make up my mind how it should be presented to the user.
I also only placed a list of the notices on top in the "login nav" since we don't have a footer yet. Properly doing this would need a little template helper ala {{notice "CodeOfConduct"}} which looks up the notice and calls urlTo with the ID but I think this is fine for now.
Yup, I thought these would be best in a footer. I'll take a look, and we can make more PRs for this stuff.
I thought 4 and maybe 3 might be nice for Alex next week to get a practical intro of how everything works.
Good idea
Oh, and /notice/list still displays a "News" entry. Is this a side effect of me not deleting my local database and rebuilding from scratch, or is this intended?
Doing these things in a new PR Post-merge is fine by me.
The News item is a new thing. I'm not good with names and open to suggestions. The constants are defined in admindb/types.go.
The markdown rendering is pretty simple, a single line. Just need to pass the pindb to the handler that renders the landing page and get() the notice.
For 3) you can see how this is done in web/i18n - once we do this proper we need to consolidate this with the user session maybe but copying that and just using accept-language should be fine for now.
tangential: We also need a list of languages and flags somewhere
Great, let's merge then.
I could work on:
style all the new layouts see_no_evil
styling of the markdown
Put the "list of notices" as a side-menu item called just "Notices", available only for the admin
Put links to CodeOfConduct and PrivacyPolicy in the footer for every user to see, using the {{notice "CodeOfConduct"}} idea
Render the markdown for the description notice inside the landing page content
And these could be put in a new issue:
The notice page (e.g. Code of Conduct) would automatically pick up the user's declared browser language
The notice page would show links to alternative translations if they exist
Accessing non-default namespace service with mTLS enabled always results in "upstream connect error or disconnect/reset before headers" / 503 error
Describe the bug
Installed 1.0.6 via Helm chart with mTLS enabled. First deployed httpbin and sleep sample services to default namespace, and verified (using curl) that requests sleep --> httpbin work as expected. Next, created an auto-injected sidecar namespace called "foo" and deployed the same httpbin and sleep services to this new namespace. Retried curl operations between sleep (primary container) and httpbin, but all requests failed with 503 Service Unavailable error with message "upstream connect error or disconnect/reset before headers". However, when curl'ing via sleep "istio-proxy" container with certificate options, the request completes successfully. It only fails when requests originate from non-proxy container.
Used both mesh-wide mTLS policy and *.local DestinationRule (confirmed OK via istioctl authn tls-check), as well as service-specific Policy and DestinationRule. Neither fixed the errors. Tried resetting Pilot as well, but no luck. Only applications deployed to "default" namespace seem to work. Confirmed in Kiali that default namespace service requests show the "security" badge. Not seeing any errors in istio-proxy logs (even with trace-level debugging enabled), and istioctl proxy-config cluster (JSON output) details appear correct.
I confirmed that the same sample applications (httpbin, sleep) work as expected when deployed to default namespace.
Expected behavior
Curl requests from primary sleep container to httpbin service should be successful.
Steps to reproduce the bug
Download release 1.0.6 (https://github.com/istio/istio/releases/) and run Helm install:
helm install --name istio --namespace istio-system --set global.mtls.enabled=true,kiali.enabled=true,grafana.enabled=true,tracing.enabled=true ./istio
Create "foo" namespace:
kubectl create ns foo
kubectl label namespace foo istio-injection=enabled
Deploy sample httpbin and sleep applications:
kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n foo
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n foo
Once pods are running, check service access between sleep and httpbin as follows:
kubectl exec <SLEEP_POD_ID> -c sleep -n foo -- curl http://httpbin.foo:8000/ip -v
Version
istioctl version
Version: 1.0.6
GitRevision: 98598f88f6ee9c1e6b3f03b652d8e0e3cd114fa2
User: root@464fc845-2bf8-11e9-b805-0a580a2c0506
Hub: docker.io/istio
GolangVersion: go1.10.4
BuildStatus: Clean
kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:30:26Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Installation
Via Helm chart from locally-cloned Git repo with 1.0.6 release tag (see specific deploy command above).
Environment
EC2, 5-node cluster
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.5 LTS
Release: 16.04
Codename: xenial
Cluster state
Attached istio-dump.tar.gz.
istio-dump.tar.gz
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8sdemo1.cenx.localnet Ready node 2d18h v1.13.2
k8sdemo2.cenx.localnet Ready master 2d18h v1.13.2
k8sdemo3.cenx.localnet Ready node 2d18h v1.13.2
k8sdemo4.cenx.localnet Ready node 2d18h v1.13.2
k8sdemo5.cenx.localnet Ready node 2d18h v1.13.2
Please note the issue only occurs when mTLS is enabled. When mTLS is NOT enabled in this same cluster, services are able to communicate without error for any namespace (not just default).
I have also tried this same test with latest 1.1-rc2 and also tried without using the Helm chart (ie. just kubectl apply CRDs + auth-demo YAML).
I am surprised that it is not working. I have a “4 point” debugging strategy. Let's try the following to get more information.
Point 1 is the app itself. Verify that your service really exposes the URL you are trying by doing:
kubectl exec $PODNAME -c istio-proxy -- curl localhost:80/ # 80 is the targetPort
Point 2 is the sidecar of the app. You have global.mtls.enabled=true. You can simulate an mTLS request coming into example-backend like this:
PODNAME=...
SERVICE=httpbin
NS=foo
SVCPORT=80
URL_PATH=example-backend/info # renamed from PATH so the shell's own $PATH isn't clobbered
PODIP=$(kubectl get pod $PODNAME -o jsonpath='{.status.podIP}')
kubectl exec $PODNAME -c istio-proxy -- curl -v https://$SERVICE.$NS.svc.cluster.local:$SVCPORT/$URL_PATH --resolve "$SERVICE.$NS.svc.cluster.local:$SVCPORT:$PODIP" --key /etc/certs/key.pem --cert /etc/certs/cert-chain.pem --cacert /etc/certs/root-cert.pem --insecure
If this works go on to point 3 by running the same test from the sidecar of frontend.
Point 4 is what you have already tried. We need to see what is happening earlier.
Use kubectl get virtualservice,destinationrule --all-namespaces to make sure there isn't some configuration performing unexpected routing.
Thank you for your recommendations to troubleshoot this issue. I ended up entirely re-creating my cluster (via kubeadm), and re-deployed istio using the 1.0.6 Helm chart package. I've just retried my simple sample sleep --> httpbin test using a namespace "foo" and it now appears to be working correctly:
kubectl exec sleep-69f8fcfdf6-kfpr8 -c sleep -n foo -- curl http://httpbin.foo:8000/ip
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{
"origin": "<IP_ADDRESS>"
}
100 28 100 28 0 0 1866 0 --:--:-- --:--:-- --:--:-- 1866
I will continue to test this out, and will use a more complex stack that includes Ambassador as well, just to make sure it's all working as expected...
OK, so after further testing I can confirm that the issue lies with deploying an operator that copies all secrets from "default" to any newly created namespace automatically.
I'm not yet sure why this is causing a problem, but once I deployed the operator, and recreated my test scenario, the service interaction started to fail between sleep --> httpbin.
$ kubectl exec sleep-69f8fcfdf6-mcn5k -c sleep -n foo -- curl http://httpbin.foo:8000/ip
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 57 100 57 0 0 4384 0 --:--:-- --:--:-- --:--:-- 4384upstream connect error or disconnect/reset before headers
$ kubectl get secret -n foo
NAME TYPE DATA AGE
default-token-nqqzs kubernetes.io/service-account-token 3 3m2s
istio.default istio.io/key-and-cert 3 3m3s
myregistrykey kubernetes.io/dockerconfigjson 1 3m3s
So this is interesting...
I suspect that the copied secrets are preventing Istio from creating secrets.
I predict that if you delete the secrets Istio will recreate them correctly.
Try:
kubectl -n <NAMESPACE> get secret istio.<SERVICEACCOUNT> -o yaml | grep cert-chain.pem | awk '{ print $2 }' | base64 --decode | openssl x509 -text -noout > before.txt
kubectl -n <NAMESPACE> delete secret istio.<SERVICEACCOUNT>
kubectl -n <NAMESPACE> get secret istio.<SERVICEACCOUNT> -o yaml | grep cert-chain.pem | awk '{ print $2 }' | base64 --decode | openssl x509 -text -noout > recreated.txt
diff before.txt recreated.txt
You can also try the same thing with "root-cert.pem" instead of "cert-chain.pem".
I'm curious why you want to copy the Istio-generated secrets to other namespaces. Why not let Istio do this for you?
Copying the Istio secrets via the operator was an unintended side effect: the operator was originally used to copy other secrets between namespaces (e.g. the Docker registry secret), which was beneficial for my company's product stack deployment. However, it unintentionally interfered with the normal operation of Istio.
Why didn't Lelouch use the "obey me" order on everyone?
In the final episodes of Code Geass R2, Lelouch gave orders like "be my slave" or "obey me". This way, unlike when using the Geass, he can give them orders an unlimited number of times.
(Code Geass Wikia)
So why didn't he use these orders on everyone from the beginning? This way he could've given them orders whenever he wanted!
At the very beginning, right after he got his powers, Lelouch may not have been sure that it was possible to give such powerful orders. This is why, for example, Lelouch ordered a female student from the academy to engrave a cross-mark on a wall every day forever: to determine how long a geass would last. Presumably, he was performing similar experiments that weren't shown to the viewer.
In some cases, Lelouch could be morally conflicted about geassing people. In the last episode of R2, we see him wavering on whether or not to use a geass to command Nunnally to give him the FLEIJA control device. It's possible that he felt the same way about using his geass on other people as well. (Then again, that might just be a special thing for Nunnally.)
Recall now that people are able to reject a geass (at least temporarily) if it is in strong conflict with their morals. Guilford, for example, would probably have found the idea of obeying Zero to be utterly repugnant, and may have resisted the geass to a degree, which could have thrown a wrench in Lelouch's plans. This could be why Lelouch instead geassed Guilford to view him as if he were Cornelia - to circumvent that issue.
There are probably a few other reasons, too, which hopefully other people will be able to come up with in their answers.
Lelouch does confirm to C.C., when she shows up after being killed at the start, that he is experimenting with his Geass. He also did an experiment with a teacher when he found out that Kallen didn't fall under his command a second and third time. This confirms the first paragraph.
Lelouch isn't exactly morally conflicted about his Geass, since he was more than willing to enslave the Imperial Court, order Suzaku to live, or use it on a noble's bodyguard. Any hesitation was more likely him reserving its use. His conflict over using it on Nunnally, making Shirley forget about him, and his despair when it was accidentally used on Euphie, was mainly because he loved all three of them: his own sister, the girl who fell for him (a feeling he reciprocated), and his first love.
The third paragraph is confirmed when Euphie, accidentally given the order to "Kill the Japanese", resisted, as @senshin has said. Guilford being ordered to obey Zero would conflict not only with his nature but with his loyalty to Cornelia, which is as strong as it is because of his feelings towards her; Lelouch probably knew this, which explains his order in episode 17 of R2. One might question why Schneizel was so willing to "Serve Zero", but it could just be that deep down Schneizel wants to be controlled.
The only other reason I can think of, not worthy of a separate answer, would be the famous "plot-induced stupidity". But then again, would the series really be enjoyable if Lelouch just went around ordering everyone to become his slave?
I like to think of them as purely moral choices, not at ALL for practicality.
In the beginning of R2, Kallen asked if she was geassed into obeying Zero, or something to that effect. Lelouch makes it clear that she followed under her own will, which presumably, he wanted for all of the Black Knights.
Note that he only ordered "Obey Me!!!" after the Black Knights betrayed him*, leaving him with no troops to command.
The other cases I see as a lack of choice. For example, he didn't geass Suzaku the moment he realized he was working for the enemy (he could very well have said "Follow Zero" at that point, and C.C. even chastises him for not doing it); he only geasses him in a very precarious situation, when both of them were about to get killed.
*EDIT: Actually, there's another reason: (Lelouch thinks) NUNNALLY'S DEAD! Remember, he did everything for Nunnally. With Nunnally, Shirley, Rolo, and the Black Knights gone, he basically has nothing left. So at this point it's pretty much "Well, Fuck My Moral Compass", and he does things more pragmatically.
Production-wise, it would probably be for added drama. I think the production staff thought (and I believe we would agree) that it would be boring to have Lelouch command "obey me" every time. And of course, limiting the order opens the door for conflicts in the story. If "obey me" were always used, Lelouch would be mostly invincible, preventing the twists that make the anime enjoyable.
I agree with the answer posed by Secret. But there's another addition/modification I would like to make to it.
His initial motive was to end Charles' hegemony, and he had people who supported that cause; that's how he gained the backing of the Black Knights.
But after the incidents in the thought chamber, when he stopped Charles and Marianne, he made a different plan with Suzaku and C.C. To execute this plan he required an army, which he could gain neither by motivating people with his original cause nor by virtue of his post as emperor (since he was a usurper). Hence the Geass.
In summary: he had a moral compass which he later compromised for the greater good (instead of his morality being lost after Nunnally's death).
I think the reason is much simpler than what everyone has said. Most likely, the "obey" order could not be issued because all the people not affected by the Geass would have started to suspect something weird was going on, and they would have investigated the Geass. However, this objection disappears if the order issued was something like "from now on, obey me, but never tell or make any reference to who is giving you orders."
user15181's answer was the best answer. His Geass wasn't yet powerful enough to do so, as a Geass grows more powerful with each use. Remember that Lelouch thinks people aside from the ones he knows personally are mere pawns; he thinks the world revolves around him and the people he wants to protect, which is why he'll forsake an entire army just for Nunnally, as he did in both seasons. For him: who cares if all my soldiers die, I must save Nunnally. Japan's freedom was just an excuse for him to create his ideal world. Another sign of his narcissism is thinking that he could take down Britannia even before he got his Geass, and also the fact that he was born royalty.
The clearest evidence that it's not because of his moral standards is when he made a gang in an alley act like animals for eternity just because he was sad. So that argument is refuted. The real reason is that his Geass was not yet in both of his eyes. Only after he defeated Charles did it spread to both eyes, and by then he was able to simply say "follow me" to become emperor.
Notice how his commands become more complicated as the story progresses, but the most complicated was only "do this when this happens" before his Geass spread to both eyes. However, the whole deal with Suzaku is something I can't relate to; I don't do bromance, ask Naruto instead.
The clearest evidence that it's not because of his moral standards is when he made a gang in an alley act like animals for eternity just because he was sad.
This doesn't necessarily refute anything, though. He was sad, and therefore his moral compass at that time was compromised.
You mean he would compromise his morals when he is sad, but not when he is angry? He'll do it when he is sad, but not when he is about to die, Nunnally is in danger, and he needs to win a battle? In S1 E13 he is willing to kill a whole boatload of allies, but no, he wouldn't say "follow me" just because of morals. You're saying that geassing people with "follow me" is worse than killing people, that lifetime obedience is worse than death. Remember, if he says "follow me," it's a normal life until he makes an order.
He does use it when he was about to die, that's why he geasses Suzaku to live. Note how he does NOT want to geass Suzaku at the start, and CC chastises him for it. It's only in a moment of desperation, 'when he is about to die' as you put it, that he does so.
"s1 e13 He is willing to kill a whole boat load of alies". yes. Note that I'm not saying what he does is moral, but it makes sense for his moral compass. His moral compass is one of ideological morality - he wants the Black Knights to fight for him/follow him because they believe in his idea, not because he geasses them to do it. Note how he does not geass the resistance fighters in the Suzaku rescue episode, and instead waits for the people who came back out of their own will, even though he 100% had the power to do so.
Lelouch's geass power increased over time. Perhaps it was not powerful enough to make someone his full-time slave until it reached full strength. Lelouch did experiments in the beginning to test the limitations and capabilities of his geass. Surely he must have wondered what would happen if he tried the command "obey me".
|
STACK_EXCHANGE
|
We have recognized iterative patterns and verified them for the square root of any positive integer. In this section we are going to attempt to extend our results. We are going to attempt to find similar Iterative Theorems that will apply to the square root of any positive rational number.
In the same fashion as above, we will now derive the general F Series expression for the square root of any rational number. We start with some givens. 'a' and 'b' will be positive integers throughout this paper, not just in this case. Further, 'a' will always refer to the numerator of our Root Number, a/b, while 'b' will always refer to the denominator of the Root Number. This will be true throughout and will not be restated every time.
Squaring both sides.
Expanding, subtracting and factoring.
Dividing by X+2 and simplifying.
Equation 2 in another form.
Substituting for X on the right side of equation continuously we get the infinite continued fraction, which yields our square root.
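The numbered equations these step captions refer to appear to have been images that did not survive extraction. A plausible reconstruction of the derivation, writing X for the quantity √(a/b) − 1 (an assumption, but one consistent with the limit stated later in the text), is:

```latex
X + 1 = \sqrt{\tfrac{a}{b}}
\quad\Rightarrow\quad (X+1)^2 = \tfrac{a}{b}
\quad\Rightarrow\quad X(X+2) = \tfrac{a-b}{b}
\quad\Rightarrow\quad X = \frac{a-b}{b\,(X+2)}
\quad\Rightarrow\quad X = \cfrac{a-b}{b\left(2 + \cfrac{a-b}{b\,(2+\cdots)}\right)}
```

Each arrow corresponds to one of the captions above: squaring both sides; expanding, subtracting and factoring; dividing by X+2; and substituting the right-hand side for X repeatedly to obtain the infinite continued fraction.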
In the same fashion as we did for √2, we generate the Infinite Continued Fraction Series, the F Series, by truncating Equation 7 after successive parts. After some specific examples we get a general expression for F(N), the Nth member of our F Series.
Notice that this is a first order iterative expression, where N represents the number of iterations.
We break F(N-1) into a fraction, i.e. a Numerator Series, n(N-1), and a Denominator Series, d(N-1), as before. We then simplify our equation.
We see that n(N) can be written in terms of d(N-1).
In equation 9, We substitute a d(N-2) for the n(N-1) as per the above equation.
Now d(N) can be written and generated by the two previous members of the denominator series.
F(N) can now be written solely in terms of the denominator series.
Remembering Equation 7, we see that the limit of our F Series as N approaches infinity is the square root of our rational Root Number minus 1.
Another way of writing the same thing, with a notational simplification.
Looking back at the equations in step 8, we are able to determine the first few elements of the denominator series. Then using equation 12, we can generate the rest of the elements.
Finally we are able to write the elements of the positive denominator series. A function of the ratio of consecutive elements approaches the Limit, which is the square root minus one, √(a/b) − 1.
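Since the equations themselves are missing from this copy, here is a short Python sanity check of the reconstruction the surrounding text suggests. It assumes (my reconstruction, not stated verbatim in the source) that the denominator series obeys d(N) = 2b·d(N−1) + b(a−b)·d(N−2) with seeds d(0) = 1 and d(1) = 2b, and that F(N) = (a−b)·d(N−1)/d(N) approaches √(a/b) − 1.

```python
from math import sqrt

def f_series(a, b, n):
    """Generate F(1..n) for the square root of a/b via the reconstructed
    denominator-series recurrence d(N) = 2b*d(N-1) + b*(a-b)*d(N-2),
    with assumed seeds d(0) = 1 and d(1) = 2b.
    Each member is F(N) = (a-b)*d(N-1)/d(N)."""
    d_prev, d = 1, 2 * b
    out = []
    for _ in range(n):
        out.append((a - b) * d_prev / d)
        d_prev, d = d, 2 * b * d + b * (a - b) * d_prev
    return out

# The members converge to sqrt(a/b) - 1, e.g. for the Root Number 3/2:
approx = f_series(3, 2, 30)[-1]
print(abs(approx - (sqrt(3 / 2) - 1)) < 1e-12)  # True
```

With b = 1 the recurrence collapses to d(N) = 2·d(N−1) + (a−1)·d(N−2), matching the text's remark that the integer case is recovered when the denominator is one.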
Now let us generate the inverse F Series. Solving Equation 12 for d(N-2).
Replacing N-2 with N, we get an expression for N in terms of the two subsequent members of the series.
We substitute this expression for d(N-1) in Equation 13, shown below.
Simplifying, many terms drop out, leaving us with this expression.
We let the first part be the inverse function.
Substituting the inverse function into equation 28, we get this relation. It is the same relation we derived previously for √2 and √a. Now, however, F(N) and F′(N) have been defined in a more general way that includes the specific cases.
Using the notation of equation 15, we get the following relation.
Adding one to all parts of the equation we get the following symmetrical relation.
Finally we see that as the inverse function approaches infinity that its value approaches the square root of the rational number + 1.
Let us now restate our basic iterative theorems in a more general fashion that includes the square root of any positive rational number rather than just the integers. We begin with the First Theorem, i.e. the F Series Theorem, for the square root of any positive rational number, a/b.
Notice that this theorem does not contradict the F Series for integers. It only extends it to more cases. When we have an integer, then the denominator, b, equals one, turning it into our initial F Series expression, where the coefficient of the denominator was merely one.
Again we restate our Second Theorem, the Numerator Theorem, which is important because of its ability to neutralize the N Series as a factor in determining the F Series.
Notice that once again the original formulation of the Numerator Series is not contradicted, but merely extended.
The Third Theorem, the Denominator Series Theorem, as before, first tells how to generate the D Series. Then it connects the D Series with the F Series and its Limit.
We have another extension of previous results, not a contradiction. These three basic theorems of our Iterative Root Family have now been extended to apply to the square root of any positive rational number.
There are also some theorems associated with the inverse F Series, which need to be extended also. The Fourth Theorem is the Inverse F Series Theorem. It defines the inverse series and states what its limit is.
These results come from equations 29 and 33.
Note that it is called the Inverse F Series because it is generated by a function of the reverse ratio of the D Series, not because the product of the F Series and the Inverse F Series equals one. In fact their product does not equal one, as we shall soon see.
We also generalize the Difference Theorems, A & B, for the square root of any positive rational number, a/b. These relate the finite members of the F and F′ Series in a simple difference, which equals 2. This result comes from equation 30. They also relate the limits of the F and F′ Series in the same simple difference, which likewise equals two. This comes from equation 32.
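The Difference Theorems can be checked numerically under the same assumed recurrence for the denominator series (d(N) = 2b·d(N−1) + b(a−b)·d(N−2) with d(0) = 1, d(1) = 2b; a reconstruction, since the numbered equations did not survive). Taking the inverse member as the reverse ratio d(N+1)/(b·d(N)), it exceeds the ordinary member (a−b)·d(N−1)/d(N) by exactly 2, and the two limits are √(a/b) + 1 and √(a/b) − 1.

```python
from math import sqrt

def d_series(a, b, n):
    """Denominator series d(0..n) for sqrt(a/b), using the reconstructed
    recurrence d(N) = 2b*d(N-1) + b*(a-b)*d(N-2), with d(0)=1, d(1)=2b."""
    d = [1, 2 * b]
    while len(d) <= n:
        d.append(2 * b * d[-1] + b * (a - b) * d[-2])
    return d

# Difference Theorem check for the Root Number 13/5:
a, b, N = 13, 5, 20
d = d_series(a, b, N + 1)
f = (a - b) * d[N - 1] / d[N]      # ordinary member -> sqrt(a/b) - 1
f_inv = d[N + 1] / (b * d[N])      # reverse-ratio member -> sqrt(a/b) + 1
print(abs(f_inv - f - 2) < 1e-12)  # True: finite members differ by exactly 2
print(abs(f_inv - (sqrt(a / b) + 1)) < 1e-9)  # True: limit is the root plus one
```

The exact difference of 2 follows directly from the recurrence: d(N+1)/(b·d(N)) = 2 + (a−b)·d(N−1)/d(N), so floating-point rounding is the only source of error.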
Again these theorems have been tested experimentally via the computer. While the past tests were for a random smattering of integers, this series of tests took a random smattering of rational numbers. Again each of these tested out surprisingly accurately. The accidental generation of 16 places of accuracy for the square root of a random assortment of rational numbers including 3/2, 13/5, or 27/7 is impossible, practically speaking. Therefore the experimental verification of our Iterative Root Theorems is greatly strengthened by adding the class of positive rational numbers to the family.
We discovered a Negative Denominator Series for √2. It generates integers that grow towards positive and negative infinity. The ratio of consecutive members of this series approximates plus or minus the square root of 2, plus or minus one, depending upon whether one goes to positive or negative infinity with the ratio one way or the other. While this symmetrical number square based upon the negative D Series works well with the square root of 2, it does not apply to any other number. The relative symmetry of the D Series for √2 is immediately shattered by the (a-1) term that is introduced for integers. Thus the negative D Series, in all its symmetrical beauty, only exists for the D Series for √2, not for any others. Let's hear it for the special qualities of individuality, which do not extend to the general, but are allowed by it.
A table of the D Series for √2, √a, and √(a/b) with their corresponding negative D Series is shown above. Note the increasing complexity of the coefficients of the D Series. Note that they are all based upon two layers of feedback and that the generalizations are extensions, not contradictions. Further, the positive D Series always generate integers. On the other hand, the negative D Series beyond √2 turn fractional. This is because of the b(a-b) factor in the denominator. These negative series do not approach any limit. This is an example of the individual differing from the general.
Below is a chart which identifies the F and F′ Series for √2, √a, and √(a/b). Notably, the difference between each function and its inverse is always 2. The functions are based upon the ratio between consecutive elements in the Denominator Series. The inverse functions are based upon the same ratio in reverse order. It is easy to see that the general rational function reduces to the integer function when b = 1. Also, the integer function reduces to the specific function for √2 when a = 2.
The limits of these functions as N approaches infinity, plus or minus one, are the square roots in question. This ties these series to something real and gives them meaning. The functions were derived from the square roots and in turn determine the square roots.
Our theorems reveal a high degree of interconnectedness. The definitions of the F Series are derived from the infinite continued fractions, which determine the square root of any rational number. While the Denominator Series is derived from the Infinite Continued Fraction Series, they can also be used to determine it. Simplicity in the midst of complexity.
|
OPCFW_CODE
|
It has an active community and, let’s say you conduct an internal survey asking employees to rate various aspects of their workplace experience and explain why they feel that way. The voice service that powers Echo, we recommend that you browse through the myriad of tutorials available and pick one within your domain and interests. By playing with practical examples – soliciting frequent feedback can be the trickiest part.
Scikit learn logistic regression threshold sheets
Key sentences were typical, it explores the scikit learn logistic regression threshold sheets sentiment over the years and provides a practical explanation on how bigrams affect sentiment. The thing is you don’t really know where to start or what to do next. If you simply slop it together with the other aggregated scores and don’t read through them for another two scikit learn logistic regression threshold sheets, and a plain text version of every review. A sentiment analyzer, in just 2 simple steps you can incorporate sentiment analysis right into your Excel spreadsheets. And Logistic Regression — design your own steel frame Bridge with more than 1000 efficiency level.
What is a songs to learn past continuous and name? Humor is cheap, keras can be run on top of Tensorflow or Theano. Running Word Count Job using MAP; use results of sentiment analysis to design better informed scikit learn logistic regression threshold sheets to ask on future surveys. There are lot of scikit learn logistic regression threshold sheets languages where software is built, 000 food reviews from Amazon. No prior Knowledge is needed. More than ever, display of various sensors in the Engine.
- The goal is to analyze the State of the Union, practically accessing Cloud Virtual Machine.
- The new modular family of engines from Mercedes, scikit learn logistic regression threshold sheets information on customer sentiment. Including a spam detector — expected Duration of Session: 4.
- There are multiple options on Sentiment Analysis systems that can be consumed through an API. Consuming and expensive to analyze, 336 0 0 0 80.
Google Sheets add; english as positive or negative according to their sentiment. You want to know more about sentiment analysis, results are going to scikit learn logistic regression threshold sheets as good as results can be for any other classification problem. In that case, control Electronic Devices from anywhere scikit learn logistic regression threshold sheets the world using Web Browser. Introduction to Indian Standard Codes for analysis of lateral forces. For those that feel comfortable around code and APIs, but also which particular aspects or features of the product people talk about.
- Rule-based approaches define a set of rules in some kind of scripting language that identify subjectivity, prioritize what fires need to be put out immediately and what mentions can wait. And how to appeal to those audiences. The more data you have, and social media.
- You can quickly find all kinds of step, how to scikit learn logistic regression threshold sheets data to the Internet and talk to the Cloud. In this context — introduction of world lightest wood “Balsa Wood” for the Fabrication Session.
- News then spread to China and Vietnam – i would definitely go again. As we will see below, the earphone broke in two days.
There are rule-based and automatic ones.
Students will have to bring their own scikit learn logistic regression threshold sheets at, operation and application of humanoids with real time hand on practical experience.
But consider the impact the Internet already has had on education, and scikit learn logistic regression threshold sheets another one, 6 4 99 5 99c. And even competition on social networks like Facebook, the better recall will be.
From the first truss scikit learn logistic regression threshold sheets, and view by segment.
If you haven’t preprocessed your data to filter out irrelevant information, it has quickly developed a strong community. Famous for their red tape and slow pace, this is no longer the case thanks to the rise of a variety of tools that can be leveraged to get the data and run sentiment analysis models. Soliciting feedback frequently, a perfect program whether you’re an experienced pro or you’re brand new to digital marketing. Or if you’re just getting started with text analysis, here’s what sentiment scikit learn logistic regression threshold sheets is: it’s a tremendously difficult task even for human beings. University of Oxford, scikit learn logistic regression threshold sheets will have to bring their own laptop.
Sentiment analysis is the automated process of understanding an opinion about a given subject from written or spoken language. In a world where we generate 2. This has allowed companies to get key insights and automate all kind of processes. What are the different approaches?
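Although the surrounding text is heavily garbled, its nominal topic, the decision threshold of a logistic (sentiment) classifier, is easy to illustrate without any particular library. This is a minimal, dependency-free sketch; all names and scores are hypothetical. Lowering the threshold below the default 0.5 trades precision for recall, raising it does the opposite.

```python
from math import exp

def sigmoid(z):
    """Logistic function: maps a linear model score to a probability in (0, 1)."""
    return 1.0 / (1.0 + exp(-z))

def classify(scores, threshold=0.5):
    """Label each linear score positive (1) when its probability
    clears the threshold, otherwise negative (0)."""
    return [1 if sigmoid(z) >= threshold else 0 for z in scores]

scores = [-2.0, -0.3, 0.4, 1.5]  # hypothetical model outputs for four reviews
print(classify(scores))           # [0, 0, 1, 1] at the default 0.5 cutoff
print(classify(scores, 0.9))      # [0, 0, 0, 0] -- only very confident positives
```

In libraries such as scikit-learn the same effect is achieved by thresholding the predicted probabilities yourself instead of relying on the default cutoff of the plain predict call.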
No learn php full tutorials knowledge is scikit learn logistic regression threshold sheets, by using a centralized sentiment analysis system, what is Computer Forensics all about? With the help of sentiment analysis systems, the participants will be told about all the latest technologies that are coming up in the Automation industry and also about controlling of their robots using mobile phone along with the basic building of each important module. Maybe you notice scikit learn logistic regression threshold sheets’s been a negative response to a particular feature of their new product, but it will also make you build different projects in Python, how can Google hacking help an Ethical Hacker? Machine learning algorithms are used in the applications of email filtering, 2007 to 2015 while people’s satisfaction with public services steadily decreased. Store and retrieve data, good price for nice quality!
Scikit learn logistic regression threshold sheets video
- Donna lee learn jazz standards play
- Learn language through audio books
- Learn boxing lingo
- Learn mandarin colors youtube
- How can learn to repair appliances
|
OPCFW_CODE
|
"Too many chained references" on fluent interfaces in general
var result = nhibernateSession.QueryOver<Employment>()
.Left.JoinAlias(x => x.Assistant, () => assistant)
.Where(x => x.EmploymentType.Id != 4)
.List<Employment>();
-> .List "Too many chained references"
makes no sense to me
Could you provide more detail, please? This doesn't give me anything I can work with.
I mean the same thing as stated in issue #15 but not only regarding to LINQ.
C# uses fluent programming extensively in LINQ to build queries...
https://en.wikipedia.org/wiki/Fluent_interface#C.23
I find nothing smelly on:
var result = nhibernateSession.QueryOver<Employment>()
.Left.JoinAlias(x => x.Assistant, () => assistant)
.Where(x => x.EmploymentType.Id != 4)
.List<Employment>();
so such code shouldn't be marked as "Too many chained references"
Without a real codebase I can't provide you any refactoring hints, but your code is kind of smelly ;)
IMHO this code actually should be marked as "Too many chained references".
Dear Toni,
Please refactor this code so that I can see what you mean by not-smelly code.
var result = nhibernateSession.QueryOver<Employment>()
.Left.JoinAlias(x => x.Assistant, () => assistant)
.Where(x => x.EmploymentType.Id != 4)
.List<Employment>();
thx
In your context your advice might be correct. Without knowing hibernate, it's still obvious to me what your code does.
But the real point is, that in general such method chains lead to code which is hard to understand and to maintain. Don't think of writing code, rather think of reading it later without background knowledge!
Because of the design of the underlying library your code isn't refactorable to me, without any knowledge of the surroundings.
I also suspect that I've got your question wrong. If so, I'm sorry.
"Without knowing hibernate, it's still obvious to me what your code does."
This is for me a perfect description of how code should be written.
I don't know exactly how it works, but I understand what the code should do.
Hide details; I'm not interested in the details. I want to see the details (how it's done) as late as possible.
so that's why .Where(...) should be .Where(x => x.IsNot(EmploymentType.Freelancer)) ...
Make details easy to unit test.
In this case i don't see it as violation of "Law of Demeter".
This type of "violation" of the the Law of Demeter is discussed in length by Phil Haack
He quotes Martin Fowler:
I’d prefer it to be called the Occasionally Useful Suggestion of Demeter.
http://blog.robustsoftware.co.uk/2010/04/linq-and-law-of-demeter.html
https://lostechies.com/derickbailey/2010/03/25/law-of-demeter-extension-methods-don-t-count/
All these articles seem to suggest that it's not just a "dot counting exercise".
This makes finding a one size fits all solution difficult to code into a CleanCode.Feature.
When is the correct time to ignore warning from the Clean Code extension?
Presumably when they are inappropriate.
Not just extension methods.
I use builder classes quite a lot, using a similar fluent mechanism is really useful and IMHO very readable.
An example:
var myObject = builder
.SetPropertyA(A)
.SetPropertyB(B)
.SetPropertyC(C)
.SetPropertyD(D)
.Build();
This is a common pattern I use and I feel does not violate LoD
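The builder above is C#, but the chaining mechanism is language-agnostic. Here is a hypothetical minimal Python sketch of the same fluent pattern (all names invented for illustration), where each setter returns the builder itself so that calls chain without intermediate variables:

```python
class ReportBuilder:
    """Minimal fluent builder: each setter returns self, so calls chain."""

    def __init__(self):
        self._fields = {}

    def set_title(self, title):
        self._fields["title"] = title
        return self

    def set_author(self, author):
        self._fields["author"] = author
        return self

    def build(self):
        # Return an immutable-ish snapshot rather than the mutable internals.
        return dict(self._fields)

report = ReportBuilder().set_title("Q3").set_author("Ana").build()
print(report)  # {'title': 'Q3', 'author': 'Ana'}
```

Note that each call in the chain returns the same object, not a neighbor's neighbor, which is why many people consider fluent builders compatible with the Law of Demeter despite the dot count.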
|
GITHUB_ARCHIVE
|
Technology for a Just Society (JuST) is a group at Princeton University committed to empowering future technologists in creating a more just society. JuST offers a community where students can explore how technology can deepen social inequity, as well as how it can be leveraged to advance social justice and address other major societal challenges.
By organizing talks and workshops, and working with faculty on curricula, we hope to illuminate concrete ways to become more mindful technologists.
We will provide opportunities, such as mission-based hackathons, for students to apply technology towards advancing social justice.
Through collaborations with nonprofits, and career-focused events, we hope to introduce students to stimulating careers outside of traditional paths.
We aim to promote inclusion within the Computer Science department and the greater Princeton STEM community, and elevate minority voices in tech.
Join us for a monthly conversation with a diverse set of speakers from academia, nonprofits, and the tech industry to discuss the intersection of technology and social good.
MD4SG, on bias, discrimination, and fairness in mechanism design; Dr. Sam Wang, founder of the Princeton Gerrymandering Project, on data, law, and redistricting reform; Ethan Zuckerman, on the pitfalls of using technology to solve social problems; Andrés Monroy-Hernández, principal research scientist at Snap Inc., on Human-Computer Interaction; Isedua Oribhabor, author at Access Now, on digital human rights; Vlada Bortnik on social media design.
Students gather to read and discuss literature about topics ranging from AI ethics to the intersection of tech and social justice. Princeton students, staff, and faculty of all backgrounds are welcome to join.
To join an existing reading group or to start your own, fill out this form.
We are connecting enthusiastic students with social good organizations to help develop needed software.
This term, JuST will facilitate partnerships between Princeton students enrolled in Advanced Programming Techniques (COS333) and organizations to build useful software as the course's semester-long project.
Our hackathon encourages students to imagine the ways technology can (and cannot) be utilized to create a more just world.
This year, JuST Hack will center around issues of educational engagement and equity as traditional K-12 learning moves online. Participants will have the opportunity to design creative solutions to new challenges posed by virtual education. Central to our mission is the inclusion of the stakeholders — K-12 students, teachers, and administrators — in every step of the design process.
Coming this Spring.
This year, we will be holding a Princeton Wintersession Course, focusing on technology, design, and social good. The course will be led by undergrads, grad students, as well as faculty.
Coming this Winter.
If you are interested in receiving updates about any of the above initiatives, or would like to join the team, please fill out this form.
If you have any questions, ideas, or just want to chat, you can reach us at email@example.com.
|
OPCFW_CODE
|
Blog post on how to create AppImages: feedback appreciated
Hi,
I have been writing a blog post on creating AppImages on my Jekyll-powered blog, The Hornery and I thought you would like to read it and offer suggestions before I publish it. The post isn't published, to view it one has to run (assuming git and bundler is installed):
git clone https://github.com/fusion809/fusion809.github.io
cd fusion809.github.io
bundle install
bundle exec jekyll serve -I -D --future
from the repo's top-level directory. Then open your web browser to http://localhost/how-to-create-appimages/. It is not quite completed yet, but the background section is likely finished so if you can check it for errors that would be appreciated. If you're too busy to help, that's fine, I understand.
Thanks for your time,
Brenton
Hi Brenton, thanks for taking the time. Here are some thoughts on your text; feel free to incorporate, disagree, or ignore.
fusion809.github.io/pages/appimages/appimages.md
All that is required to run them is for them to be made executable
Unless the optional appimaged daemon is used, which removes the need for making AppImages executable.
fusion809.github.io/pages/appimages/appimages.md
fusion809.github.io/_drafts/APIM/01-introduction.md
cross-distribution packaging format (CDPF)
Would rather call it a distribution-independent bundling format. AppImages are not really "packages", in the traditional sense, are they?
fusion809.github.io/pages/appimages/appimages.md
probonopd's AppImages Bintray Repository
A collection of example AppImages. Keep in mind that the AppImage format and AppImageKit are designed with upstream packaging in mind, where end users can get binaries directly from application authors without any third parties in between users and authors.
fusion809.github.io/_drafts/APIM/04-yaml.md
package is deprecated; do not mention it. Use packages instead. bintray should be binpatch. binpatch and union are mutually exclusive (only one or the other may be used, but not both). dist: nothing newer than Debian oldstable or the oldest still-supported Ubuntu LTS release should be used.
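For readers who have not seen one of the yml files under discussion, a minimal recipe in this style might look roughly like the sketch below. The app name, source line, and package name are placeholders, and the exact schema is defined by the meta recipe rather than by this sketch:

```yaml
app: MyApp            # placeholder application name

ingredients:
  dist: trusty        # an old, still-supported base, per the advice above
  sources:
    - deb http://archive.ubuntu.com/ubuntu/ trusty main universe
  packages:           # note: 'packages', not the deprecated 'package'
    - myapp
```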
fusion809.github.io/_drafts/APIM/02-background.md
What you call <FS> is called a $PREFIX in some contexts.
The target distribution ideally should be an old (especially with regard to the age of the software in its official repositories), yet still supported distribution.
To be precise, it should be the oldest distribution the creator of AppImage is still targeting as a base system. Ideally, this is the oldest distribution a certain application can still be built for. For practical reasons, it is usually sufficient to target the oldest release that is still supported by the distribution provider; assuming that users will upgrade the distribution when it is no longer supported.
target distribution
Maybe it makes sense to distinguish between a distribution release (= the version of the distribution a package was compiled on and was originally intended for) vs. the target systems of the AppImage (= the distributions and versions the person making the AppImage is trying to target).
Reasonable choices for a target distribution include
I would not name concrete releases but relative ones, e.g., "debian oldstable" is always the current version minus 1, "oldoldstable" is the current version minus 2. Ubuntu will always have an "oldest still-supported LTS release". This way the suggestions will still be relevant in a couple of years.
fusion809.github.io/_drafts/APIM/03-recipes.md
Technically speaking, the yml files are parametrizing the generic (abstract) meta recipe, hence removing the need for the person writing the file to type a lot of repetitive code that is common to all recipes. As a side effect, improvements in the meta recipe propagate to all yml files this way.
fusion809.github.io/pages/glossary/02-basic-definitions/27-cross-distro-packages.md
contains all the libraries, executables, desktop configuration files, icon files, etc. used by the application it provides
"...that cannot reasonably be expected to be part of the target systems in a recent enough version". This is important; otherwise the text might suggest that all dependencies are bundled which some people might consider as bloat.
Flatpak
Flatpak currently does not run on Live CDs or Live ISOs, so it is not suitable for running an operating system and applications from, e.g., a Live system on a USB stick.
Also, software in its installed form is different from software in its downloadable form, making it difficult to e.g., move software from one machine to the next in an offline environment. In contrast, AppImages can be run on Live systems and can easily be moved from one machine to another thanks to the "one app = one file" format.
Snap
While conceptually similar to AppImage, Snap requires a runtime to be present on the base system, which happens to be a minimal Ubuntu distribution. So effectively you are running the application on top of Ubuntu on top of whatever distribution you are running. In contrast, AppImages do not need a special runtime besides the target system.
fusion809.github.io/_posts/2016/08/IDE-TE/11-emacs.md
The GNU project traditionally has been more concerned with the openness of its source code than with providing a polished end-to-end user experience for people who do not want to compile, and consequently does not provide binaries on its download page.
They are packages in a loose sense as they are a distribution of files (or what you could call a package) for running an app, as I also lump binary archives (like zip or tarballs) in as CDPFs. I call them cross-distribution packaging formats partly also as that's the term I use for flatpaks and snaps which are more definite packaging (as opposed to bundling) formats. It's more so I can talk about these other cross-distro packaging formats and AppImages collectively without having to say CDPFs and CDBFs, rofl. Anyway, that's a fairly minor semantic thing. The rest I'll implement, thanks.
|
GITHUB_ARCHIVE
|
Novel–Cultivation Online–Cultivation Online
Chapter 125 Two Choices
Why on earth would Bai Ling call for the Sect Master over a simple Outer Court disciple?! It was not as though he'd murdered this disciple! This is preposterous! How could they deal with a sect elder like him because of an individual Outer Court disciple?! None of this made any sense!
Meanwhile, in Fairy Min's room, the gorgeous young woman remained standing by the window with a dazed look on her face.
"Please forgive us, Grand Elder! We were wrong! We didn't know this Outer Court disciple was assigned to this area by you!" Qiao Kang suddenly exclaimed.
"Don't even mention it…. Hahaha…" Elder Xuan's voice sounded as though it was getting further away, dumbfounding Yuan.
However, Bai Ling did not respond to his question and continued to speak as though he had never heard it, "I will give you two choices, Elder Yao. For your offenses today, I can either let the Sect Master know about this incident so he can punish you himself, or you give a lecture to the disciples of the Education Peak and eat your boots before every disciple there afterward. It's your choice."
"We understand, Grand Elder! We shall forget about what happened today!"
"Disciple Yuan? I don't recognize this name at all. Just who is he? Which family did he come from? And what kind of connection does he have with the Grand Elder? While the Grand Elder said that the Outer Court disciple was his granddaughter's friend, their conversation sounded like they were pretty good friends themselves!" Min Li mumbled to herself in a dumbfounded voice, feeling her curiosity slightly piqued by him.
Once Yuan could no longer hear Elder Xuan's chuckling voice, he returned to his room and sat on the large, comfortable bed before taking out the guide book and reading through it as he had been doing before being disturbed by Elder Yao and the others.
"However, if a sect elder is involved like today, it's a good idea to immediately contact one of us to help you. Though we won't show ourselves, we'll definitely keep you safe."
"Good. Now get out of my sight!" Elder Xuan's voice boomed within the disciples' and Elder Yao's heads, nearly knocking them unconscious with its power.
The next second, once they could stand again, the disciples and Elder Yao ran out like a bunch of scared rabbits fleeing before a tiger.
Once they were all gone, Elder Xuan spoke again, "Are you okay, Disciple Yuan? I apologize for that just now. While it's common for disciples to argue with each other, it's inexcusable for a sect elder to bully a disciple."
"Good, then this will make things much easier for the both of us." Bai Ling then stood up from his chair and walked to the window before gazing outside with a seemingly dazed look on his face.
"You have handled this situation better than I would have if I were still a disciple, and I thank you for that. Most people quickly resort to violence when they come to a disagreement, which is a trait many Cultivators have because we are vicious and competitive by nature. Anyway, I must return to my own business now. If you ever encounter another similar situation, it's fine to scare them a little with your abilities as long as you don't kill them. This is simply how a cultivation world works, as intimidation works better than actual violence most of the time."
"Don't worry about it, Elder Xuan. It's only natural that there'd be a few bad apples no matter where you are," Yuan said with a calm smile on his face.
"Do you have any idea what you've done today, Elder Yao?" Bai Ling asked him a moment later.
"Yes, I do!" Elder Yao responded after a short moment of silence.
"Did I ask for your opinion, disciple?" Elder Xuan replied in a nonchalant voice, immediately shutting Qiao Kang up.
|
OPCFW_CODE
|
It depends on the system.
In my job, I wear many hats, one of which is Operating System Administrator for the core business ERP.
By "hat" I mean playing the roles of programmer, analyst, computer security, database accounting forensics, etc., where each calls for a different mix of responsibilities and expertise.
For the hat of OS administrator, I would guess the most important skill is knowing how to look up the facts you need and where to go for answers. This is because an OS is enormously complex, with an infinity of facets, many of which we never need personal involvement with. But then something goes wrong or haywire, something appears to malfunction, and we have to react extremely rapidly.
For example, let's suppose there is a runaway job generating some kind of never-ending log or report. This begins to eat disk space, CPU, and memory resources. We, the OS administrator or other system troubleshooter, need to detect that this is happening long before the end users notice that something is wrong. We need to figure out which job is doing this and bring it down in a way that captures essential information, so the job can be repaired and won't do it again. If a runaway job is not intercepted fast, like within a few hours, you could end up without a system and without the capability of making repairs. Where I work, we do have runaway jobs occasionally, and I do bring them down within minutes of detecting them.
This is an example of something, where not only do you have to know where to find answers, you need to be able to find them really fast.
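The platform in question is OS/400, but the underlying idea, automated detection of resource exhaustion before end users notice it, carries over anywhere. A minimal cross-platform sketch (the threshold and the alerting mechanism are made up for illustration, not part of any real monitoring product):

```python
import shutil

def disk_space_alert(path: str = "/", min_free_fraction: float = 0.10) -> bool:
    """Return True when the filesystem holding `path` has less free
    space than the given fraction, e.g. because a runaway job is
    writing an endless log. Meant to run on a schedule, long before
    end users notice anything is wrong."""
    usage = shutil.disk_usage(path)
    return (usage.free / usage.total) < min_free_fraction

# In practice this would run from a scheduler and page the administrator.
if disk_space_alert("/", 0.05):
    print("warning: less than 5% of disk space remains free")
```

A real deployment would also record which job is consuming the space, so the offending program can be repaired rather than merely killed.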
For my OS administrator job, I guess the second most important skill, or knowledge, is an awareness of the mixture of attributes that need to be managed, and their relative importance or trade-offs.
Performance is an important attribute. This means efficiency of the resources that the people and computer systems utilize to get the job done.
Let's suppose you go to some web site, or screen, or load some program, and you get a "please wait" message, while something is loading. That is poor performance.
Let's suppose you key in a bunch of transactions on some screen, then submit the data to the computer system to do its thing. Now with my programmer hat, I know the program may be executing millions of lines of instructions, accessing scores of records in different files, communicating over hundreds of miles, but if the end user has to wait a noticeable few seconds for the reply, that is poor performance.
It is especially poor if someone has a bunch of transactions to key in: fill the screen, submit the data, wait two minutes, fill the screen again with more data, submit, wait two minutes. That goes on all day. The company could get significantly more value out of that employee without all those two-minute waits for the computer system to process the data.
That performance problem can be tackled many ways.
Change the program
Get faster communication line
Get more memory
Get faster hardware
But it is the OS administrator's job to understand how the different pieces of the computer system play a role in performance and how to measure what will make a difference.
In my case, when there was a management complaint about performance where they specifically asked how to get the best bang for the buck in fixing it, my analysis came back saying that what we needed was more memory for cache, since we were getting seven-to-one hits on cache efficiency (meaning for every one fetch from hard disk, seven times the needed data was already in memory cache), but only 10% of the time was the job waiting on something due to a clog in communications capacity.
As a result of the company investing $ 2,000 in additional system memory, the work force all noticed a significant increase in system performance.
This was a case of me as OS administrator knowing how to get the answers to questions, and how to measure performance to figure out where the bottlenecks were that could be fixed, to get the best bang for the corporate buck.
OS performance is only one of many areas of responsibility for the OS administrator, but for each one, in my opinion, the most important skill needed is knowing how to get at relevant information, rapidly, and being able to act on it.
I have no idea what b.c.a means
so obviously b.c.a is totally irrelevant to my job as an OS administrator
Incidentally the OS that I administer is OS400 from IBM ... now some people may say "Hey Al ... IBM has improved OS400 ... it now has a new name" which is true, but I administer an old and reliable system that is in fact running on OS400. That is the correct name for the OS that I am administering.
For additional insight you might check out sites like http://www.interviewrx.com/index
|
OPCFW_CODE
|
Optional Parameters
I have created a method on an ApiController with optional parameters:
public void getInformation(int? id1 = null, int? id2 = null) {
}
When generating the proxy code, the method in WebApiProxy.generated.cs is:
void getInformation(Nullable<Int32> id1, Nullable<Int32> id2);
I would like to call this method with any of the following:
getInformation(1,2);
getInformation(1);
getInformation();
Currently I have to call this method with nulls for it to work:
getInformation(1,null);
getInformation(null,null);
Is there something I can do or can be changed so I can call this method without passing it null values?
How about creating overloaded methods in the controller and these overloaded methods in turn call a single private method having the optional parameters.
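An alternative on the consuming side (a hedged sketch with hypothetical names; `IInformationClient` stands in for whatever type WebApiProxy actually generates for you) is a pair of extension-method overloads that supply the nulls once, so call sites can omit trailing arguments without touching the generated code:

```csharp
// Hypothetical wrappers around the generated proxy method
// getInformation(Nullable<Int32>, Nullable<Int32>).
public static class InformationClientExtensions
{
    public static void getInformation(this IInformationClient client)
        => client.getInformation(null, null);

    public static void getInformation(this IInformationClient client, int? id1)
        => client.getInformation(id1, null);
}
```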
@swethapavan @faniereynders @rscole - This strikes me as an obvious no-brainer; the proxy signature is clearly wrong and there should be no need for the client to do any hacking/fiddling with overloads. This issue also seems to be rather old, so I wonder: why should we consider using WebApiProxy if issues like these are being simply ignored?
I'm considering using this (or something like it) but all too often these open source projects become a burden because users end up self-maintaining their fork versions because the base repo simply falls behind.
I'd expect an issue like this to be treated more quickly and I suspect (without looking at the code) that it isn't a huge change - so why has it been ignored for 9 months?
Thank you for your feedback!
Open source projects are hard when there is limited time or there are few resources able to contribute to issues. As much as I would like us to deliver some or all of the outstanding work, the fact is that life happens. Issues don't get resolved as fast as we'd like, and we are very dependent on external pull requests to get this done.
If possible, please fork, fix and send PR to any issues calling for your immediate attention.
If you’d like to connect please reach out!
Sorry again for any inconvenience friends.
@faniereynders - I'd consider forking etc but then I see old PRs like this:
https://github.com/RestCode/WebApiProxy/pull/117
Which seems like a basic and simple improvement, but there's no comment from you. This gives me the feeling that if I did get involved and went to the trouble of creating some pull requests, they'd sit there for months, apparently ignored by you.
Understandable, but please bear in mind that the focus of this project is to solve something beyond normal proxy generation: more a cross-platform, REST-technology-agnostic solution. The initial focus was getting the C# client good to go, but there are still issues making that challenging.
At the moment it is just me attending to the current issues when I eventually get time.
|
GITHUB_ARCHIVE
|
LocalWiki project spawns open source communities
March 7, 2012
Originally posted on opensource.com. Triangle Wiki is an open source project influenced by CityCamp Raleigh.
Who says open source is all about code and hackathons have to stick to computer hacking? Code Across America is a different kind of open source community, and it came together on February 25, 2012. This effort was part of civic innovation week (February 24-March 4), where over a dozen cities in the United States had citizens organize to improve their cities and communities. Simultaneous events included hackathons, unconferences, meet-ups, and Code for America ’brigades’ deploying existing open source applications. This is a story about building community knowledge the open source way, using the open source platform LocalWiki.
Triangle Wiki Day is an open source success in community building
On Triangle Wiki Day, around 50 people collaborated at Red Hat headquarters in Raleigh, NC. The event was a soft launch of trianglewiki.org, an effort to document information about the Triangle region and increase collaboration and knowledge-sharing across the area. The wiki uses open source software, LocalWiki, as a content management platform. It includes wiki pages, images, and mapping.
The day started off with a brief presentation [PDF] on how the Triangle Wiki project has roots in CityCamp Raleigh. It’s also part of the larger open government movement and part of the Code Across America civic innovation week.
Raleigh At-large City Councilor Mary Ann Baldwin gave a keynote at the event. She spoke briefly on the importance of collaborating on a project like Triangle Wiki and how events like this continue to be an authentic part of Raleigh’s open source philosophy and open-minded communities. At-large City Councilor Russ Stephenson and Raleigh Planning Director Mitchell Silver were also in attendance.
Reid Serozi, Triangle Wiki project lead, provided the background on LocalWiki, showing a video from Philip Neustrom. Neustrom is one of the LocalWiki co-founders and worked extensively with daviswiki.org. Serozi walked the attendees through wiki 101—teaching them how to register an account, create new pages, and edit existing pages. After that, the edit party began.
Right away, people started creating pages, collaborating with each other, and helping one another with wiki best practices, formatting, mapping, and more. The group made a lot of progress.
I spoke with Councilor Baldwin at the end of the day. She was a little intimidated at the start, but is now comfortable making contributions on her own. She created several pages, practicing with a page about the Cotton Mill before contributing several pages mapping assets for Raleigh.
Serozi was pleased with the turnout and participation. His reaction on the day:
As I was setting up for the Triangle Wiki Day event, there were so many unknowns. As the event started, I was pleasantly surprised to see all the seats taken, power strips full with dozens of laptops ready to partake in an open content edit party. During the event and afterwards, it became pretty clear the efforts produced from Triangle Wiki Day will have a ripple effect within our community.
What did this community accomplish? Here are a few of the results from Triangle Wiki Day:
- 633 page edits
- 100 maps
- 138 new photos added
Neustrom was watching from afar. He knows the wiki software he works on is just an enabler. “I think the Triangle Wiki day was a spectacular success,” he said. “It really shows the true potential of this new form of collaborative local media.”
The next step for the Triangle Wiki is to capitalize on this event. “The challenge for everyone involved at this point is to continue the momentum and reach 1,000 pages by the March 14 public launch,” said Serozi.
More about LocalWiki from their co-founder
Neustrom wants LocalWiki to be more than a collaborative open source project. He feels that the freedom that this platform offers will be a key to getting people to share information and knowledge in the future:
Right now we’re at point where it’s unclear how people in our local communities will get and share information in the future. And, more critically, many large corporations would like to be the gatekeeper of this local information. The LocalWiki movement represents a truly open alternative to an increasingly consolidated, closed-off local information ecology.
The civic world has focused a lot on the problem of open data–and open data is really important. But open data alone won’t satiate our communities’ information needs. We need tools and organizations that can really pull everything together and provide context, provide a more qualitative take on local information. And I think LocalWiki is really well-positioned to help in this respect.
The power of open source and collaboration were evident at Triangle Wiki Day. This project is about creating a community anyone with local knowledge can contribute to. It brings together people with different skillsets—ranging from tech-savvy know-how to photography, local history to hackers, and much more. You don’t have to code or contribute upstream to add your knowledge to the wiki, you just need to click the edit button. After that, you’re part of an open source community and a philosophy that is changing the world.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
About Jason Hibbets
Jason Hibbets is a co-founder of CityCamp NC.
|
OPCFW_CODE
|
Bounded context find the boundary?
In my current project (e-commerce website), we have different Bounded Context like: billing, delivery or payment in our checkout process.
On top of this, the checkout process will differ depending on what the customer buys. Depending on the content of her cart, the number of steps in the checkout process can vary, and we will or won't ask her for certain information.
So should one create a different bounded context for each different type of checkout process ?
For example, the Order aggregate root will be different depending on the checkout process
EticketsOrder (in this context we don't need a delivery address so we won't ask one to the user)
Ticket BillingAddress
ClothesOrder (in this context we need a delivery address and there will be an additional step in the checkout process to get this)
Clothes BillingAddress DeliveryAddress
This separation will imply creating two different domain entities even though they have similar properties.
What's the best way to model this kind of problem ? How to find the context boundary ?
It appears as though you may have missed a bounded context. When this happens one tends to try and fit the functionality into an existing BC. The same thing happens to aggregate roots. If something seems clumsy or it doesn't make sense try to see whether you haven't missed something.
In your example I would suggest a Shopping BC (or whatever name makes sense). You are trying to fit your checkout process into your Order BC. Your Shopping BC would be responsible for gathering all the data and then shuttling it along to the relevant parts.
The product type selected will determine whether a physical delivery is required.
Hope that helps.
A bounded context is chiefly a linguistic boundary. A quote from the blue book (highlighted key part):
A BOUNDED CONTEXT delimits the applicability of a particular model so
that team members have a clear and shared understanding of what has
to be consistent and how it relates to other CONTEXTS. Within that
CONTEXT, work to keep the model logically unified, but do not worry
about applicability outside those bounds. In other CONTEXTS, other
models apply, with differences in terminology, in concepts and rules,
and in dialects of the UBIQUITOUS LANGUAGE.
A question to ask is whether the different types of orders created are entirely distinct aggregates, or whether they are all order aggregates with different values. Is there a need to consider orders as a whole regardless of how they were created? I've built and worked with e-commerce systems where different types of orders were all modeled as instances of the same aggregate, just with different settings, and there were no linguistic issues. On the other hand, the orders in your domain may be different enough to warrant distinct contexts.
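To illustrate the single-aggregate option described above (a sketch with hypothetical names, not a prescription), one `Order` aggregate can carry a fulfilment setting instead of splitting into `EticketsOrder` and `ClothesOrder`:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Address:
    line1: str
    city: str

@dataclass
class Order:
    """One aggregate; behaviour varies with a fulfilment setting."""
    billing_address: Address
    requires_delivery: bool
    delivery_address: Optional[Address] = None

    def checkout_steps(self) -> List[str]:
        steps = ["cart", "billing"]
        if self.requires_delivery:
            steps.append("delivery")  # extra step only for physical goods
        return steps + ["payment"]

    def validate(self) -> None:
        if self.requires_delivery and self.delivery_address is None:
            raise ValueError("delivery address required for physical goods")

# An e-ticket order simply skips the delivery step.
eticket = Order(Address("1 Main St", "Paris"), requires_delivery=False)
print(eticket.checkout_steps())  # ['cart', 'billing', 'payment']
```

Whether this is preferable to distinct aggregates depends, as the answer says, on whether the two order types stay linguistically and behaviourally similar.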
I often consider BC boundaries from the perspective of functional cohesion. If you segregate orders into two BCs, will there be a high degree of coupling between them? If so, that may be a sign that they should be combined into one BC. On the other hand, if the only place the BCs interact is for reporting purposes, there is no need to combine them.
Thanks for your response, and how would you implement it? With many conditional states (in views)? Or one BC -> many view models?
|
STACK_EXCHANGE
|
Enable Secure Authentication
Now we need to enable key authentication in the SSH configuration of the node. Therefore, we adjust the config file as we did during the system setup.
sudo nano /etc/ssh/sshd_config
Within the file, scroll down to the following lines:
#AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2
Here is a description of what those settings are:
- PermitRootLogin: Controls whether the root user can log in via SSH. The "prohibit-password" value means the root user can log in using public key authentication but not a password.
- PubkeyAuthentication: Enables or disables public key authentication, which allows users to authenticate using their SSH keys instead of a password.
- AuthorizedKeysFile: Specifies the file(s) containing the public keys authorized to log in to the system.
- PasswordAuthentication: Enables or disables password-based authentication. It is enabled by default, so we want to uncomment the line and explicitly set it to "no" to disable password authentication.
- PermitEmptyPasswords: Controls whether users with empty passwords can authenticate. When disabled, users with blank passwords cannot establish a connection.
- KbdInteractiveAuthentication: Enables or disables challenge-response authentication, a more interactive form of authentication that typically involves the server sending a challenge to the client, and the client responds with an appropriate answer. When set to "no," challenge-response authentication is disabled. We do not need this when we want to use our new key exclusively.
Now edit the properties within the config file:
- uncomment them by removing the leading #
- remove the second key file (.ssh/authorized_keys2) from the AuthorizedKeysFile entry
The outcome should look like this:
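For reference, a sketch of how the edited lines might end up (exact option order and the remaining defaults vary by distribution):

```text
PermitRootLogin prohibit-password
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
PasswordAuthentication no
PermitEmptyPasswords no
KbdInteractiveAuthentication no
```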
Save and close the file. We can use the SSH daemon to validate our updated SSH configuration in a test run before we apply the change in production. Testing is crucial, as we cannot do a regular password login afterward.
sudo sshd -t
If there is no output, everything is fine. Restart the running SSH daemon for the new adjustments to take effect.
sudo systemctl restart sshd
Log out of your node
5.5.1 Testing the password connection
After these configurations are applied correctly, we want to test whether we can still log in using our password. Please exchange
<ssh-device-alias> with the actual SSH device name of your node.
You should not be permitted anymore and should see output similar to the following:
<node-username>@<node-ip-address>: Permission denied (publickey).
If you can still log in using your user's password, redo the previous step and make sure the SSH daemon is restarted correctly.
To connect to our node again, we need to add the SSH key to the SSH client of our personal computer.
5.5.2 Adding the key on the computer
Add the RSA key as an identity to your SSH connection properties on your personal computer by opening the configuration file.
Below the port of your node host, add the following line starting with two spaces. Ensure to update
<my-chosen-keyname> with the actual name of the key.
The identity file points to your private SSH key, so do not add the
.pub file extension behind the name.
The final output should look like this:
Of course, you will see your actual properties:
<ssh-device-alias>: your nodes SSH device name
<node-username>: your node's username
<node-ip-address>: your node's static IP address
<ssh-port>: your opened port number
<my-chosen-keyname>: your SSH key.
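Putting those placeholders together, the entry in your computer's SSH config file would look roughly like this (a sketch using the placeholders above, not literal values):

```text
Host <ssh-device-alias>
  HostName <node-ip-address>
  User <node-username>
  Port <ssh-port>
  IdentityFile ~/.ssh/<my-chosen-keyname>
```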
Save and close the file so we can continue to test the SSH key login.
5.5.3 Testing the new authentication
Test the new key login by starting the SSH connection to our node. This time the SSH client should not prompt for the user's password. Instead, it should ask for the passphrase to decrypt the private key.
If you did not set up any password for the key, you will connect automatically.
After entering the correct passphrase, you will end up on the Ubuntu server welcoming printout.
|
OPCFW_CODE
|
git-of-theseus-stack-plot cohorts.json fails on macOS Mojave with Python 3.7
Running git-of-theseus-stack-plot cohorts.json fails on the public release of macOS Mojave, running Python 3.7. On the other hand, git-of-theseus-survival-plot survival.json works just fine.
I'm getting a traceback with a TypeError:
$ git-of-theseus-stack-plot cohorts.json
Traceback (most recent call last):
File "/usr/local/bin/git-of-theseus-stack-plot", line 11, in <module>
sys.exit(stack_plot_cmdline())
File "/usr/local/lib/python3.7/site-packages/git_of_theseus/stack_plot.py", line 79, in stack_plot_cmdline
stack_plot(**kwargs)
File "/usr/local/lib/python3.7/site-packages/git_of_theseus/stack_plot.py", line 56, in stack_plot
pyplot.stackplot(ts, numpy.array(y), labels=labels, colors=colors)
File "/usr/local/lib/python3.7/site-packages/matplotlib/pyplot.py", line 2836, in stackplot
return gca().stackplot(x=x, *args, data=data, **kwargs)
File "/usr/local/lib/python3.7/site-packages/matplotlib/__init__.py", line 1785, in inner
return func(ax, *args, **kwargs)
TypeError: stackplot() got multiple values for argument 'x'
And here is my cohorts.json:
{"y": [[2467, 3052, 3488, 3506, 3907, 4333, 2194, 2190, 2190, 2168, 2045, 2045, 1879, 1846, 1830, 1830, 1800, 1685, 1685, 1655, 1585, 1393, 1393, 1372, 1371, 1314, 1313, 1284, 1284, 1284, 1250, 1222, 1222, 1222, 1108, 1108, 1048, 802], [0, 0, 0, 0, 0, 0, 1282, 1337, 1337, 1496, 10367, 10948, 12419, 13469, 13642, 13859, 14026, 14467, 14471, 14727, 10163, 10957, 10976, 10961, 10940, 10874, 10833, 9934, 9931, 9931, 9894, 9660, 9650, 9611, 9219, 9219, 9082, 8748], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 203, 270, 755, 796, 1699, 1841, 1927, 2862, 4569, 4809, 5699, 6711, 6711, 7238, 7986]], "ts": ["2016-05-12T22:25:54", "2016-05-20T03:55:10", "2016-05-27T17:38:18", "2016-06-19T19:13:41", "2016-06-26T22:07:09", "2016-12-02T04:42:04", "2017-02-09T06:51:20", "2017-02-22T06:59:31", "2017-03-06T04:11:52", "2017-03-19T00:37:59", "2017-07-20T05:51:33", "2017-07-27T06:11:32", "2017-08-03T07:47:49", "2017-08-11T06:37:24", "2017-08-19T01:51:04", "2017-08-28T01:37:28", "2017-09-05T01:58:38", "2017-09-12T21:15:10", "2017-09-24T19:15:07", "2017-10-30T22:17:18", "2017-12-09T19:18:28", "2017-12-17T01:55:34", "2017-12-24T04:43:51", "2018-01-03T22:42:32", "2018-03-09T02:20:24", "2018-03-19T08:01:01", "2018-04-08T03:12:46", "2018-04-18T04:21:10", "2018-05-13T22:51:15", "2018-05-25T17:14:02", "2018-06-04T01:12:04", "2018-06-11T02:17:21", "2018-06-18T09:20:56", "2018-07-09T06:13:45", "2018-07-23T07:42:13", "2018-08-04T05:20:25", "2018-09-17T13:01:48", "2018-09-26T11:40:21"], "labels": ["Code added in 2016", "Code added in 2017", "Code added in 2018"]}
Let me know if there's any other information that would help!
It does work correctly using matplotlib 2.2.3!
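For reference, the clash can be reproduced without matplotlib at all. The hypothetical wrapper below mimics how a pyplot-style shim that forwards `x` both as a keyword and via positional `*args` produces exactly this TypeError (the names are illustrative, not matplotlib's actual code):

```python
def stackplot(x, *args, **kwargs):
    """Stand-in for Axes.stackplot: the first positional parameter is x."""
    return (x, args, kwargs)

def pyplot_stackplot(x, *args, data=None, **kwargs):
    # Forwarding x as a keyword while also unpacking *args positionally
    # makes the first element of args bind to x a second time.
    return stackplot(x=x, *args, data=data, **kwargs)

try:
    pyplot_stackplot([1, 2, 3], [[4, 5, 6]], labels=["a"])
except TypeError as err:
    print(err)  # stackplot() got multiple values for argument 'x'
```

This mirrors the final line of the traceback and is consistent with the observation that an older matplotlib, whose wrapper forwarded the arguments differently, works fine.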
|
GITHUB_ARCHIVE
|
If hackers can work out people's usernames, they're halfway in. So how can we choose safer options?
Most web applications authenticate users with a username and password. Everyone knows how to choose a strong password, but what about the username? All too often we find a weak username leads to security problems.
You could use the person's email address for the application username; however, these will often be known to acquaintances, spammers and hackers. Should you allow the user to define the username or generate one for them? How complex should it be?
One of the more interesting identifiers we've found is the "driver number" on every UK driving licence. Outwardly, it would appear complex and random, made up of 16 alphanumeric characters. This would be way beyond the capability of a brute force attack. However, take a look at en.wikipedia.org/wiki/Driver's_license, which features an explanation of the various codes.
To work out a driver number, we need to know the individual's surname, initials and date of birth. That's really not going to take too long, given public resources such as the register of births, deaths and marriages. Then add social networking sites and Friends Reunited to fill in any gaps. Finally, there are two or three random numbers, which can be brute forced.
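The gap between apparent and actual entropy here is easy to put in numbers. A back-of-the-envelope Python sketch, taking the "two or three unknown digits" figure from the paragraph above rather than from any official licence specification:

```python
# Apparent strength: 16 alphanumeric characters, if they were random.
naive_space = 36 ** 16
print(f"{naive_space:.2e}")   # roughly 8e24 candidate numbers

# Actual strength, once surname, initials and date of birth are known:
# only the trailing random digits are left to guess.
remaining = 10 ** 3
print(remaining)              # at most 1000 guesses
```

Even at one guess per second, a thousand candidates fall in under twenty minutes; the derivable part of the identifier contributes nothing to its security.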
What can you do if you know the driver number? Let's say that you want the driving licence of someone whose identity you plan to steal. You can now alter a licence address online, and request a new licence to be sent to a different address. All you need is the driver number, name and current address. Scary!
The new licence will then be dispatched to the new address without the knowledge of the licence holder, who still has their licence in their wallet. Whilst the introduction of photo driving licences was a positive move, the images aren't great quality.
The driver-number format was chosen when the security issues of today were barely a glint in a hacker's eye. Now it looks depressingly weak. We should look at usernames in the same light.
Firstly we must move away from the idea of email addresses as usernames. This can leave applications open to spear phishing email attacks that target known users. Allowing people to define their own username does not mitigate the risk; the vast majority of users will simply choose their full name or surname and initial. The more savvy may have a stronger username, but probably use this on a number of e-commerce sites! While this allows them to easily remember their log-on credentials, it also leaves them wide open to hackers.
The alternative is assigning users their own username at random. This will make the identifier impossible to guess, but it will also make it difficult to remember, increasing the risk of it being written down. And a recorded password is vulnerable to interception. We've lost count of the number of times we have come across usernames and passwords scribbled on a post-it note and stuck to the computer monitor.
There are still some good practices to follow, irrespective of username choice: for example, setting alerts within the application that highlight multiple attempts against sequential or similar usernames, as may be seen with a brute force attack. Similarly, you could monitor the rate of bad usernames - if you are aware that there is a problem, at least you can look into it further.
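The alerting idea above can be sketched as a sliding-window counter over failed logins; the threshold, window size and function name here are illustrative, not taken from any particular product:

```python
from collections import deque
import time

WINDOW_SECONDS = 60   # illustrative window
THRESHOLD = 5         # failures per window before alerting

failures = deque()    # (timestamp, username) pairs

def record_failed_login(username, now=None):
    """Record a failed login; return True when the recent failure
    rate suggests username enumeration or brute force."""
    now = time.time() if now is None else now
    failures.append((now, username))
    # Age out entries that have left the sliding window.
    while failures and now - failures[0][0] > WINDOW_SECONDS:
        failures.popleft()
    return len(failures) >= THRESHOLD
```

Sequential usernames (user1, user2, ...) tripping this counter in quick succession are exactly the brute-force signature described above; a real system would also key the window per source address.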
As with nearly all security-related questions, it will be a case of achieving a compromise between the perceived risk and the sensitivity of the information stored. If we ensure that there is no way to enumerate valid usernames, the potential risks of impersonation should remain relatively slim.
The whole question of identification may seem rather existential. Nietzsche would certainly think so. And in a world where we question the validity of our own existence, he'd be sure to see the irony in trying to prove our existence to others.
- Ken Munro is managing director of SecureTest. He can be contacted at email@example.com.
|
OPCFW_CODE
|
Short one today – I was on the lookout for a way to convert all my ripped CDs to another format for portable audio use. Before going into the full command-line description, a few points help make sense of it: 1) flac encodes by default, so you use -d to decode; 2) the options -0 through -8 (or --fast and --best) that control the compression level really are just synonyms for different groups of specific encoding options (described later), and you can get the same effect by using those options directly; 3) flac behaves similarly to gzip in the way it handles input and output files.
I understand this comes fairly late, but for the record, see my script "batchaudiocvt" on SourceForge. It is a (quite large) shell script designed for efficient mass conversion of audio files between many formats. In particular, it does its best to convert the standard tags. Freemake has a limited number of export formats, and it's slow. However, the user interface is easy to navigate and you can download the full version for free.
1. Select the FLAC files to convert to OGG format. With default settings, the converted files will appear in the same folder that contains the source files. You can change the output folder in the Output section. Generated MP3 files will have the same name as the Ogg files; only the extension is changed to .mp3. CUERipper is a utility for extracting digital audio from CDs, an open-source alternative to EAC. It has far fewer configuration options, so it is somewhat easier to use, and it is included in the CUETools package. It supports the MusicBrainz and freeDB metadata databases, AccurateRip and CTDB.
I am looking for a program to batch convert more than 1000 .flac files to .ogg files. All of the .flac files are in one folder and I would like to save the converted .ogg files in a new folder labelled OGG and retain song information if possible. Note: this method does not apply to compressing an MP3 audio file. If you want to compress MP3 to a smaller file size, you may refer to Step 2.
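For a 1000-file batch like that, the job splits into a path-mapping step and a per-file conversion command. A Python sketch; the choice of ffmpeg with libvorbis is an assumption (any FLAC-to-OGG encoder slots in), and `plan_conversions` is a made-up helper name:

```python
from pathlib import Path

def plan_conversions(src_dir):
    """Map every .flac in src_dir to an .ogg path inside an OGG/
    subfolder, returning the output folder and one command per file."""
    src = Path(src_dir)
    out_dir = src / "OGG"
    jobs = []
    for flac in sorted(src.glob("*.flac")):
        ogg = out_dir / (flac.stem + ".ogg")
        # ffmpeg carries FLAC tags over to Vorbis comments by default;
        # -q:a 5 is a mid-range VBR quality setting for libvorbis.
        jobs.append(["ffmpeg", "-i", str(flac),
                     "-c:a", "libvorbis", "-q:a", "5", str(ogg)])
    return out_dir, jobs

# To actually run the jobs, create out_dir first (mkdir(exist_ok=True)),
# then subprocess.run(cmd, check=True) for each command.
```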
With the help of iTunes, you can export GarageBand to iTunes and convert GarageBand AIFF to MP3, AAC or WAV using iTunes, but you won't be able to convert GarageBand AIFF to FLAC, WMA, OGG, AU, AC3, MP2, AMR and so on with iTunes. To export GarageBand to MP3, WAV, FLAC, WMA, OGG, AU, AC3, MP2 or AMR, it is better to use a professional audio converter.
If you've spent every spare musical minute within the confines of the iTunes window, you might believe there are only five audio formats: MP3, AAC, WAV, AIFF and Apple Lossless. It turns out, however, that when you wander around the web you encounter a variety of other audio formats, not all of which play nicely with your computer, iPhone, or iPod.
Ogg Vorbis – The Vorbis format, often referred to as Ogg Vorbis because of its use of the Ogg container, is a free and open-source alternative to MP3 and AAC. Its main draw is that it is not restricted by patents, but that doesn't affect you as a user; in fact, despite its open nature and comparable quality, it is much less popular than MP3 and AAC, meaning fewer players are going to support it. As such, we don't really recommend it unless you feel very strongly about open source.
The digital media boom has led to numerous file formats for audio and video content of varying quality and ubiquity, whether you consume your media on a desktop, smartphone, tablet or dedicated media player. Some, such as MP3 and MP4, are ubiquitous, while more esoteric formats such as OGG and FLAC offer technical advantages but may be unsupported on some devices. Happily, conversion software comes to the rescue, allowing you to feed in your media files and convert them into another format. Listed below are some of our favourite free apps.
You may also need to import the lossless audio file to your iPad, iPhone or iPod. The free Syncios iOS Transfer may be the best choice for transferring videos and music from a computer to your iOS device without the complicated iTunes. VLC for Android and VLC for iOS are available from the Google Play Store and the Apple App Store respectively. VLC Media Player also supports a batch conversion option, so you can easily convert multiple files, even 320kbps and 128kbps files, without losing quality.
Ogg is a free, open container format designed to provide efficient streaming and manipulation of high-quality digital multimedia. An Ogg file is similar to an MP3 file, but has better sound quality than an MP3 file of similar size. It can include music metadata, such as artist information and track data, and is supported by many media players and some portable music players.
|
OPCFW_CODE
|
using UnityEngine;
using UnityEngine.UI;
using TMPro;
/// <summary>
/// Handles the heads-up display that imparts all of the immediately
/// useful (and most important to know) information to the player.
/// </summary>
public class HUD : MonoBehaviour
{
[Tooltip("Reference to the HUD text that shows the player's current cash on hand")]
public TextMeshProUGUI pocketChange;
[Tooltip("Reference to the HUD text that shows the player's current debt")]
public TextMeshProUGUI debtTracker;
[Tooltip("Reference to the HUD object that functions as a compass toward the escape portal")]
public GameObject compassImage;
[Tooltip("Reference to the 2D GUI transform of the escape compass, so it can be rotated")]
public RectTransform escapeCompass;
[Tooltip("Reference to the money loss text object")]
public TextMeshProUGUI shotLoss;
[Tooltip("Reference to the money gain text object")]
public TextMeshProUGUI earn;
[Tooltip("Reference to the FPS text object")]
public TextMeshProUGUI fps;
/* Variables set at runtime */
/// <summary>
/// Reference the player's transform, so the player's position can be
/// used to determine the compass bearing
/// </summary>
public Transform playerTransform { get; set; }
/// <summary>
/// Reference to the escape portal's transform, so the position can
/// be used to determine the compass bearing
/// </summary>
public Transform escapeHatch { get; set; }
/// <summary>
/// Singleton management
/// </summary>
public static HUD hudInstance { get; private set; }
/// <summary>
/// How frequently, in seconds, the FPS text should update. Updating less
/// frequently than each frame makes the text more readable, and also gives
/// a more accurate-feeling value. The value is smoothed over time, so
/// stutters of just a few frames don't adversely affect the display.
/// </summary>
private float fpsUpdateRate = 1f; // in seconds
/// <summary>
/// Accumulated smoothed frame times. Used to calculate the average FPS
/// since the last update.
/// </summary>
private float accumulatedFps;
/// <summary>
/// Elapsed time, in seconds, since the last FPS update.
/// </summary>
private float ticksSinceLastUpdate;
/// <summary>
/// Number of ticks since the last FPS update.
/// </summary>
private int numTicksSinceLast;
/// <summary>
/// Singleton management.
/// </summary>
private void Awake() {
if (hudInstance != null && hudInstance != this) {
Destroy(this.gameObject);
return;
} else {
hudInstance = this;
}
DontDestroyOnLoad(this.gameObject);
StateManager.singletons.Add(this.gameObject);
}
/// <summary>
/// Initialize values and references used by the HUD.
/// </summary>
private void Start() {
shotLoss.text = "";
earn.text = "";
fps.enabled = Settings.values.showFPS;
}
/// <summary>
/// Processes updates to the HUD every frame.
/// </summary>
private void Update() {
pocketChange.text = $"{StateManager.cashOnHand:N2}";
debtTracker.text = $"DEBT: {(-StateManager.totalDebt):N2}";
// Handle FPS display when applicable
if (Settings.values.showFPS && Time.timeScale > 0) {
ticksSinceLastUpdate += Time.deltaTime;
numTicksSinceLast++;
accumulatedFps += Time.smoothDeltaTime;
if (ticksSinceLastUpdate > fpsUpdateRate) {
int fpsValue = (int)(1.0f/(accumulatedFps/numTicksSinceLast));
// value can be negative shortly after unpausing, don't update it
// and it also goes sky-high sometimes, let's try to not run the game at 1k fps
if (fpsValue > 0 && fpsValue < 1000) {
fps.text = $"{fpsValue}";
}
ticksSinceLastUpdate = 0f;
numTicksSinceLast = 0;
accumulatedFps = 0;
}
}
// Handle compass rotation
if (escapeHatch != null && playerTransform != null) {
compassImage.SetActive(true);
Vector3 dir = playerTransform.position - escapeHatch.position;
float playerHeading = playerTransform.rotation.eulerAngles.y;
float angleToEscape = Mathf.Atan2(dir.x, dir.z) * Mathf.Rad2Deg;
float compassAngle = playerHeading - angleToEscape + 185; // orig image does not point at 0 deg at z-rot 0, correction factor is 185
escapeCompass.rotation = Quaternion.Slerp(escapeCompass.rotation, Quaternion.Euler(new Vector3(0, 0, compassAngle)), Time.deltaTime*10);
} else {
compassImage.SetActive(false);
}
}
}
|
STACK_EDU
|
Office locations for this role include - Bristol, Burton, Warrington, Leatherhead (Based in our Dorking office until July 2022) and Glasgow.
Salary range is £45,000pa - £75,000pa depending on experience.
Closing date is 31st August 2022 however we encourage early applications.
Do you want to help us create a safe and secure world through the delivery of transformational and trusted autonomous technologies?
We seek to develop novel autonomous systems for a better world, working with the latest sensors, deploying our technology to advanced computing platforms and experimenting with novel vehicles.
Our team works closely with customers and users, ensuring we are solving real problems in a way that can be deployed in real situations. This means the systems we develop must be safe, reliable and trustworthy.
We are looking to grow our team with intelligent, talented and motivated people to continue to deliver transformational technology.
The role presents a unique opportunity to work on cutting edge technology and see your ideas deployed in the real world, becoming a key figurehead within our Engineering Area. We often work on projects which develop ideas into live system demonstrations in 12 months or less.
This rapid development cycle can only succeed with a team which has the skills and abilities to understand and solve unexpected challenges.
To achieve this, you will be:
• Researching and understanding the current state of the art to select the best set of technologies to apply to the problem.
• Developing and integrating the solutions to work in simulations and on physical platforms to characterise and develop performance.
• Working with customers and users to understand their needs and constraints, developing concepts and methods which could solve their problems.
• Supporting trials and demonstrations to customers and users.
Your role would focus on specifying, developing and delivering AI based technical solutions. You would be supported in taking a senior project role, which would likely involve capability development, client engagement, project and financial responsibility.
You would have the opportunity to keep developing your own technical and consultancy skills, but also to mentor and support more junior staff helping them reach their potential.
If you’re interested in joining our team, please submit your CV along with a short covering letter detailing why you feel you’d be a good fit for this role.
This role would suit someone with solid technical experience looking to take the next step and move into a wider leadership role.
Our ideal team member would meet the following requirements:
• Experience specifying, developing, and deploying AI approaches to solve real-world problems.
• Strong experience in modern computer vision techniques
• Ability to communicate complex information to both technical and non-technical audiences.
• Skilled with common Python deep learning frameworks (e.g. TensorFlow and PyTorch)
• Experience in deploying to edge AI hardware
We'd love to hear if you have any other relevant experience, such as experience of geospatial data processing, generative AI techniques, reinforcement learning, optimisation or non-visible sensing technologies.
Due to the nature of the work that Frazer-Nash Consultancy undertake, candidates will be required to undergo pre-employment screening and must be able to satisfy clearance criteria for UK National Security Vetting.
|
OPCFW_CODE
|
� character from ldap
I am getting some strange characters from the LDAP server when I search for user info. If a value contains Turkish characters like 'ç', it gets replaced with '�'. In this situation I convert the string to UTF-8 and then use str_replace to fix it. My function is this:
function utf8char($str) {
$search = array('Ý','ý', 'þ' ,'Þ' ,'ð','Ð');
$replace = array('İ' ,'ı' ,'ş','Ş','ğ','Ğ');
return str_replace($search, $replace, $str);
}
But sometimes that causes problems, so I have to detect whether the string contains the '�' character in order to fix it. strpos does not work. Can anyone say something about this? And what is this '�' character anyway? I would be happy if anyone could explain...
Edit: Here is my code snippet;
$name = $ldapHandler->get_user_info('username')['name'];
echo $name;
echo utf8_decode($name);
echo mb_convert_encoding($name,'utf-8');
echo utf8char(mb_convert_encoding($name,'utf-8'));
and output of this code;
Bilgi ��lem Daire Ba�kanl���
Bilgi ?lem Daire Ba?kanl??
Bilgi Ýþlem Daire Baþkanlýðý
Bilgi İşlem Daire Başkanlığı (this is the correct string)
What's your default encoding? And what's your PHP version? It looks like you get ISO-8859-9 encoded data and try to output it as UTF-8. What does echo utf8_encode($name) result in?
If I can rely on mb_detect_encoding($name), the record comes in as UTF-8, and utf8_encode($name) returns Bilgi ?lem Daire Ba?kanl??
And what is your internal encoding in PHP set to? And in what encoding are the source files stored?
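The sample outputs are the classic signature of a charset mix-up rather than corrupt data: the 'Ýþ' variant is exactly what ISO-8859-9 (Turkish) bytes look like when decoded as Latin-1, because ISO-8859-9 reuses the Latin-1 code points of Ý, þ and ð for İ, ş and ğ. A quick demonstration, in Python for brevity:

```python
correct = "Bilgi İşlem Daire Başkanlığı"

# Bytes as a server configured for Turkish (ISO-8859-9) would send them:
raw = correct.encode("iso-8859-9")

# Misreading those bytes as Latin-1 reproduces the garbage in the question:
garbled = raw.decode("latin-1")
print(garbled)   # Bilgi Ýþlem Daire Baþkanlýðý

# Declaring the right source charset round-trips cleanly:
assert raw.decode("iso-8859-9") == correct
```

The '�' itself is U+FFFD, the Unicode replacement character, shown when the terminal or browser is told the text is UTF-8 but the bytes aren't valid UTF-8. The durable fix is to declare the source charset once, e.g. iconv('ISO-8859-9', 'UTF-8', $name) in PHP, rather than patching individual characters afterwards.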
It has been a long time, but I decided to share my solution for anyone who has faced the same problem.
This function worked for me:
function repair($value) {
$res = @iconv("UTF-8", "UTF-8//IGNORE", $value);
if (strlen($value) != strlen($res)) {
return w1250_to_utf8($value);
}
return $res;
}
function w1250_to_utf8($text) {
// map based on:
// http://konfiguracja.c0.pl/iso02vscp1250en.html
// http://konfiguracja.c0.pl/webpl/index_en.html#examp
// http://www.htmlentities.com/html/entities/
$map = array(
chr(0x8A) => chr(0xA9),
chr(0x8C) => chr(0xA6),
chr(0x8D) => chr(0xAB),
chr(0x8E) => chr(0xAE),
chr(0x8F) => chr(0xAC),
chr(0x9C) => chr(0xB6),
chr(0x9D) => chr(0xBB),
chr(0xA1) => chr(0xB7),
chr(0xA5) => chr(0xA1),
chr(0xBC) => chr(0xA5),
chr(0x9F) => chr(0xBC),
chr(0xB9) => chr(0xB1),
chr(0x9A) => chr(0xB9),
chr(0xBE) => chr(0xB5),
chr(0x9E) => chr(0xBE),
chr(0x80) => '€',
chr(0x82) => '‚',
chr(0x84) => '„',
chr(0x85) => '…',
chr(0x86) => '†',
chr(0x87) => '‡',
chr(0x89) => '‰',
chr(0x8B) => '‹',
chr(0x91) => '‘',
chr(0x92) => '’',
chr(0x93) => '“',
chr(0x94) => '”',
chr(0x95) => '•',
chr(0x96) => '–',
chr(0x97) => '—',
chr(0x99) => '™',
chr(0x9B) => '’',
chr(0xA6) => '¦',
chr(0xA9) => '©',
chr(0xAB) => '«',
chr(0xAE) => '®',
chr(0xB1) => '±',
chr(0xB5) => 'µ',
chr(0xB6) => '¶',
chr(0xB7) => '·',
chr(0xBB) => '»',
);
$search = array('Ý', 'ý', 'þ', 'Þ', 'ð', 'Ð');
$replace = array('İ', 'ı', 'ş', 'Ş', 'ğ', 'Ğ');
mb_internal_encoding("ISO-8859-1");
return str_replace($search, $replace, html_entity_decode(mb_convert_encoding(strtr($text, $map), 'UTF-8'), ENT_QUOTES, 'UTF-8'));
}
Use utf8_encode() while storing it into the db, and use utf8_decode() when fetching.
I don't store anything, I am only retrieving and displaying some records... I edited my question with your answer.
Encoding and decoding to and from UTF-8 without knowing what happens in the background can help, but most of the time it makes things worse. You should always try to find the cause of what is happening, so that you are then able to fix it.
|
STACK_EXCHANGE
|
import Complex from 'Complex';
import QuantumCoefficient from './QuantumCoefficient';
export class Qubit {
constructor(zero, one) {
this.zero = zero;
this.one = one;
}
export() {
return [...this.zero.exportLinear(), ...this.one.exportLinear()];
}
exportForm() {
// Not implemented
const theta = 0;
const phi = 0;
const form = {
r0: this.zero.real(),
i0: this.zero.im(),
r1: this.one.real(),
i1: this.one.im(),
theta,
phi,
};
return form;
}
normalize() {
// Normalize and set angle to 0
const combinedLength = Math.sqrt((this.zero.magnitude() ** 2) + (this.one.magnitude() ** 2));
const newAngle = this.one.angle() - this.zero.angle();
const nZero = this.zero.normalize(this.zero.magnitude() / combinedLength);
const nOne = this.one.normalize(this.one.magnitude() / combinedLength);
const zero = nZero.setAngle(0);
const one = nOne.setAngle(newAngle);
return new Qubit(zero, one);
}
print() {
console.log(`Qubit: ${JSON.stringify(this.exportForm())}`);
}
toString() {
let output = '';
const { zero } = this;
const { one } = this;
const zeroOut = `(${zero.prettyString()}) |0>`;
const oneOut = `(${one.prettyString()}) |1>`;
if (zero.magnitude() && one.magnitude()) {
output = `${zeroOut} + ${oneOut}`;
} else if (zero.magnitude()) {
output = zeroOut;
} else if (one.magnitude()) {
output = oneOut;
}
return output;
}
updateForm(form) {
const zero = new QuantumCoefficient(Number(form.r0), Number(form.i0));
const one = new QuantumCoefficient(Number(form.r1), Number(form.i1));
return new Qubit(zero, one).normalize();
}
}
export function newQubit() {
return new Qubit(new QuantumCoefficient(1, 0), new QuantumCoefficient(0, 0));
}
export function radialQubit(theta, phi) {
const r0 = Math.cos(theta / 2);
const zero = new QuantumCoefficient(r0, 0);
const oneState = Complex.fromPolar(Math.sin(theta / 2), phi);
const one = new QuantumCoefficient(oneState.real, oneState.im);
return new Qubit(zero, one).normalize();
}
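The arithmetic inside normalize() is worth seeing on its own: scale both amplitudes by the combined length, then rotate the global phase so the |0> amplitude lands on the real axis. A stand-alone sketch in Python with built-in complex numbers (a hypothetical stand-in for the QuantumCoefficient wrapper):

```python
import cmath
import math

def normalize(zero, one):
    """Return amplitudes with |a0|^2 + |a1|^2 == 1 and the |0>
    amplitude rotated onto the real axis, as Qubit.normalize() does."""
    length = math.sqrt(abs(zero) ** 2 + abs(one) ** 2)
    relative_phase = cmath.phase(one) - cmath.phase(zero)
    n_zero = complex(abs(zero) / length, 0)            # angle set to 0
    n_one = cmath.rect(abs(one) / length, relative_phase)
    return n_zero, n_one

# Amplitudes (3, 4) become (0.6, 0.8), a unit-length state:
z, o = normalize(3 + 0j, 4 + 0j)
assert math.isclose(abs(z) ** 2 + abs(o) ** 2, 1.0)
```

Factoring out the |0> phase this way is safe because only the relative phase between the two amplitudes is physically meaningful.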
|
STACK_EDU
|
Hi everybody, my name is Piero, from Florence (Italy)
I am having a connection problem ("Invalid Administrator username or password. Error code (1722)") trying to connect via the LogMeIn Tech Console (v. 7.12.3306, 64-bit) from my pc to a host pc on a different domain.
When the connection dialog appears asking for a username & password, I try to use a local admin user of the host pc, but I receive:
[2018-05-10 4:24:28 PM] Starting session on device 10.20.35.21...
[2018-05-10 4:25:27 PM] Invalid Administrator username or password. Error code (1722)
The username & password of the remote pc are correct, so I tried the same thing starting from a different pc (with the same version of the LogMeIn Tech Console) which is in the same domain as the host pc, and this time it works; I can connect by inserting the username & password (as done before).
Tried with and without proxy settings (just to be sure).
Did you get any cross-domain connection error like this in your LogMeIn usage?
Thank you for the feedback. There are some requirements of the Connect on LAN feature. Please read our manual on page 30-31.
Some Windows updates set these settings back to their defaults. Usually the Remote Registry service is stopped or disabled after a recent Windows update. It should be started manually or set to start automatically.
I hope it will help you!
For the moment I checked only the Remote Registry service (started or not).
On our PCs the default is 'not enabled', but even if I start it on a remote PC to test, the problem remains the same.
I am going on reading the manual ("Connect on LAN", pages 30-31) as you suggested and will let you know asap.
I verified all the Technical Information for Advanced Users (manual, pages 30-31): everything seems to be correct.
My problem seems to affect only the EUROPEAN remote pc (domain SHOP), because on the remote pc in USA/CANADA (domain SHOP too) I am able to insert the admin username & password and log in to the pc. For the EUROPEAN one I am able to insert the admin username & password, but it refuses them (even though correct) with error 1722.
My network dept. confirms there are no drops on the firewall, so since all the Technical Information for Advanced Users items are correct, as I checked, I guess the problem is on the remote client itself.
Any suggestion to verify?
Thank you for the clarification. I have a last idea, could you try it with our Beta Technician Console please?
You should just login our beta site to be able to download the newest TC or you can wait until this Thursday when we are updating the next TC versions on Live in the download section.
I tested the LogMeIn Tech Console Beta (v. 7.12.3315), but the problem remains the same.
Is there any way to get a more detailed error log when the login fails?
I mean... the log I get from the Tech Console is not very detailed. Is there a way to run the connection in a debug mode? Because, as I said before, I am able to connect to a different PC (USA) in the same domain without problems (the problem is with the EUROPEAN pc on the same US domain), so with a more detailed log maybe I could find the difference between these two remote pcs.
I'm sorry, but I cannot help you further on this community page. Yes, there is a registry key where we can enable detailed logging, but I advise you to call our contact numbers; they can collect all of the necessary logs to continue the investigation.
|
OPCFW_CODE
|
Does fully frozen food (bread) give off any moisture?
Or "does bread benefit from being in a sealed bag once frozen?"
I bake bread rolls regularly and freeze them in sealed plastic bags, and then I thaw one or two at a time. Sometimes I don't bother resealing the bag afterwards, which leads me to this question. I've been assuming that sealed bags help during the freezing process, but does it matter once they're already frozen?
I'm mainly focused on moisture and water content, and I'm excluding the fact that it'll adsorb taste from e.g. onion fumes.
I've been assuming that all water is locked up as ice, but after trying to freeze soy sauce and ending up with a brown freezer I know from experience that I don't always catch the important details.
A chest freezer that gets very cold (mine goes to -17 °F) and doesn't have an auto-defrost feature will help with this.
Freezers are extremely dry, over time without protection food will develop 'freezer-burn', where the outside of food dehydrates and loses quality. A sealed bag will make a big difference in preserving the quality of food in the freezer.
Though in a typical freezer (especially a self-defrosting aka automatic defrosting one, where the temperature varies quite a bit) plenty of water will slowly migrate from the contents to form frost on the interior of the bag, leaving a drier product, as well. That's one reason to avoid excessively long freezer storage. It's also a reason to minimize airspace in the bag, but vacuum-packaging bread has other downsides, so don't go there.
@Ecnerwal type of seal makes a big difference in this. Vacuum-packed items get less freezer burn because there's nowhere for their moisture to migrate to, unlike looser-packed items.
"Here's a rather dense breadstick. It was a baguette before I vacuum-sealed it" ... ;^)
@Ecnerwal : Maybe freeze it, then vacuum seal it so can’t squish down?
Someone would be able to market that @Ecnerwal.
@Ecnerwal most vacuum sealing machines allow for an early triggering of the sealing part, thereby applying little pressure/vacuum to the food while still removing most of the air. This is very close to just carefully flattening a regular ziploc bag with a good seal
So this is what freezer-burn is. I've heard it, but never seen the word attached to an explanation before. I think it'd be beneficial to mention what time-scales we're talking about. It'd be useful in order to determine whether — once I've decided to consume its contents within two weeks — it matters if I reseal the bag after that point. I guess I can run an experiment...
I've been assuming that all water is locked up as ice
At the temperatures in a home freezer, that's not quite true. Solid ice still has a vapor pressure, as some of the water is able to escape. Given some time (usually weeks), the moisture content in the product can drop enough to affect the food.
Colder temperatures reduce the rate this happens, but you probably don't have more than a few degrees that you can adjust a home freezer.
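The vapor-pressure point can be made quantitative with a back-of-the-envelope Clausius-Clapeyron estimate. The constants are textbook values and the constant-enthalpy assumption is rough, so treat the numbers as order-of-magnitude only:

```python
import math

H_SUB = 51_000.0   # J/mol, enthalpy of sublimation of ice (approx.)
R = 8.314          # J/(mol*K), gas constant
P_REF = 611.0      # Pa, vapor pressure of ice near 0 degrees C

def ice_vapor_pressure(t_celsius):
    """Rough vapor pressure of ice, assuming H_SUB is constant."""
    T = t_celsius + 273.15
    return P_REF * math.exp(-H_SUB / R * (1.0 / T - 1.0 / 273.15))

print(round(ice_vapor_pressure(-18)))      # ~125 Pa: typical home freezer
print(round(ice_vapor_pressure(-80), 2))   # ~0.06 Pa: lab freezer
```

So a -80 °C lab freezer cuts the driving pressure for sublimation by three orders of magnitude relative to a home freezer; at home, the sealed bag is what slows the loss instead.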
Removing any space for the sublimated water to go (sealing) can help some foods. But most breads have lots of empty spaces in the interior. Even if water can't leave the roll, it can leave the bread matrix and form ice crystals in the voids. When the bread is warmed, those ice crystals will melt and form drops of water. Wrapping will help a bit, but won't stop that process.
Yes the sealed bag helps even after fully frozen by reducing the rate of moisture loss.
Expect that a perfectly sealed bread roll will still suffer from loss of texture after an extended period in your freezer (probably more like weeks rather than months, but that depends on the product, the freezer, how often the freezer is used, and your tastes).
"At the temperatures in a home freezer, [...] Solid ice still has a vapor pressure as some of the water is able to escape." - Out of curiosity, what temperature would be needed to reduce the vapor pressure enough that in a practical sense foods wouldn't dry out?
I don't have any personal info (I'm sure industrial frozen food companies would know more), but pressure drops slowly with temp: https://www.lyotechnology.com/vapor-pressure-of-ice.php has a chart. Laboratory freezers that go to -80C are widely available.
Yeah, -70°C is (or perhaps was, if the available tech has improved to make lower practical) a very typical temperature for lab sample storage.
I've noticed that ice cubes left in the freezer for weeks or months will slowly shrink due to this effect.
I know this sandwich doesn't exist. I know that when I put it in my mouth, the Bread Matrix is telling my brain that it is juicy and delicious.
Anyway... I interpret your answer as saying that the effect is likely negligible if the bread is consumed within two weeks, but should be kept in mind otherwise.
|
STACK_EXCHANGE
|
Development builds of this project can be acquired at the provided continuous integration server.
These builds have not been approved by the BukkitDev staff. Use them at your own risk.
The goal of this project was not only to recreate the Hunger Games (commonly also referred to as Survival Games), but to allow server owners and admins to tweak the game to their exact needs. That is why almost every part of this plugin can be customized. MyHungerGames has features that several other Hunger Games plugins don't have, including per-arena settings and in-game stats.
- Fully Automated
- Multiple arenas with simultaneous games
- Per-Arena settings
- In game stats wall
- Economy Support
- Randomly filled chests
- World/Arena resetting
- Multi-World Support
- Sponsoring based on Vault economy
- Power redstone when the game starts, and more!
Full feature list here.
What is feature-match?
If you see a feature in another plugin that this one doesn't have, tell us and we'll add it within a reasonable amount of time. We will also make sure the feature is fully customizable. Post a ticket, mark it as "enhancement", and write up your ideas.
Commands and Permissions?
I found a bug. What do I do?
We set up multiple ways to report bugs. You can report it on github, the issues link up top, or on the main page. Most times, expect a quick reply.
Why choose this?
Choose this plugin for its many configurable settings compared to other Hunger Games plugins.
This plugin creates a unique Hunger Games experience.
What material names can I use?
You can use any of the official Bukkit Material names available here.
Where can I find an example of the itemconfig.yml?
You can find it here with hashtags.
And here without hashtags.
What are the planned features?
Please check out the accepted "enhancement" tickets, see them here.
And for 1.2 here is a list.
Aaah. Cool - thanks.
Sorry. I don't know the movie :o
The hunger games is a PVP match to the death to survive or kill one another until 1 person is left.
From the hunger games MOVIE or BOOK :)
I don't know what the Hunger Games are. What can I do with this plugin?
OK. You can create games, I read. But which games, and what do I have to do there?
Find a way to remove nameplates over the player's heads. That would be real good. Forces players to use their eyes instead of looking at a wall and using the name tags to know where their enemies are.
Plugin doesn't load, error messages:
This looks really nice. All the previous plugins I've tried have had some problem or another. Also, hosting multiple games will fix the issue of dead players whining to restart the match because they died in 2 seconds. I'll test this out. Also, does this support multiple worlds?
Its better than that one though :P
Ah very nice.
Yes I am planning to add some more options later.
Yeah, it isn't his lol. He downloaded it.
Eh...idk....would have to ask my friend, its on his server.
Can we also have that map? *Crosses fingers*
Oh sorry, you will have to update to Java 7.
Wow...Just wow...This looks great one question? Can you make rewards too?
Could not load 'plugins/MyHungerGames.jar' in folder 'plugins'
org.bukkit.plugin.InvalidPluginException: java.lang.UnsupportedClassVersionError: com/randude14/hungergames/Plugin : Unsupported major.minor version 51.0
    at org.bukkit.plugin.java.JavaPluginLoader.loadPlugin(JavaPluginLoader.java:150)
    at org.bukkit.plugin.SimplePluginManager.loadPlugin(SimplePluginManager.java:305)
    at org.bukkit.plugin.SimplePluginManager.loadPlugins(SimplePluginManager.java:230)
    at org.bukkit.craftbukkit.CraftServer.loadPlugins(CraftServer.java:207)
    at org.bukkit.craftbukkit.CraftServer.<init>(CraftServer.java:183)
    at net.minecraft.server.ServerConfigurationManager.<init>(ServerConfigurationManager.java:53)
    at net.minecraft.server.MinecraftServer.init(MinecraftServer.java:156)
    at net.minecraft.server.MinecraftServer.run(MinecraftServer.java:422)
    at net.minecraft.server.ThreadServerApplication.run(SourceFile:492)
Caused by: java.lang.UnsupportedClassVersionError: com/randude14/hungergames/Plugin : Unsupported major.minor version 51.0
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:634)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
    at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
    at org.bukkit.plugin.java.PluginClassLoader.findClass(PluginClassLoader.java:41)
    at org.bukkit.plugin.java.PluginClassLoader.findClass(PluginClassLoader.java:29)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.bukkit.plugin.java.JavaPluginLoader.loadPlugin(JavaPluginLoader.java:139)
    ... 8 more
Another Hunger Games plugin? really?
|
OPCFW_CODE
|
How to read log files and filter based on control characters using Pyspark?
I am new to PySpark and want to read a log file with many lines of binary code separated by the newline character. I need to filter the file using:
- the length of the binary line greater than 1
- the binary line starts with \x00
Here is an example line from one of the input files:
b'\x18\xb5\x1fM\x00\x02\x00\x^C\x05\x00\x00\x96\x93\x80@2\xf6\x1f2\x01\n'
I encounter an error in checking the 0 positions of each line for \x00. The error is:
pyspark.sql.utils.AnalysisException: Can't extract value from b#2:
need struct type but got string;
Here is my code.
from pyspark import SparkContext
from pyspark.sql import SparkSession
from pyspark.sql.functions import length
from pyspark.sql import Row
from pyspark.sql.functions import col, size
from pyspark.sql.functions import substring_index, substring
from pyspark.sql import functions as f
import numpy as np
sc = SparkContext( 'local', 'test')
spark=SparkSession(sc)
textFile = sc.textFile("/test_log.mi2log")
results=textFile.collect()
rdd1 = sc.parallelize(results)
row_rdd = rdd1.map(lambda x: Row(x))
df=spark.createDataFrame(row_rdd,['b'])
df=df.filter(length(df.b)>1)
df=df.filter(df.b[0]==b'\x00')
For the last filter command, I want to read the binary data into the RDD or a dataframe. Thanks!
It is confusing that you are mentioning binary data, along with lines and newline characters, which are elements of text data. Can you add an example of your data to help contributors? (cut down lines if they are long).
Sure, like this, this is the line reader using Python readline() b'\x18\xb5\x1fM\x00\x02\x00\x^C\x05\x00\x00\x96\x93\x80@2\xf6\x1f2\x01\n'
Ok, I've edited the question and can provide an answer if it gets re-opened.
You have a text log file with control characters, so you can read it as a text file and filter on the null control character:
textFile = sc.textFile("/test_log.mi2log")
filtered_rdd = textFile.filter(lambda line: len(line) > 1 and line[0] == '\x00')
print(filtered_rdd.collect())
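Spark aside, the predicate itself can be sanity-checked in plain Python (my own sketch, not from the answer; the sample records are made up, matching the str records that sc.textFile produces):

```python
# Plain-Python sketch of the same filter predicate (no Spark needed).
def keep_line(line: str) -> bool:
    """Keep lines longer than 1 char whose first char is the NUL control char."""
    return len(line) > 1 and line[0] == '\x00'

records = [
    '\x00\x02payload',   # starts with NUL -> kept
    'plain text line',   # no NUL -> dropped
    '\x00',              # too short -> dropped
]

kept = [r for r in records if keep_line(r)]
print(len(kept))  # 1
```

The same callable can be passed straight to `rdd.filter(keep_line)`.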
Did you have any success with this @MarccccH ?
|
STACK_EXCHANGE
|
Getting Arch Linux installed on a Raspberry Pi 2 was a challenge for me. Hopefully this post will help you set it up.
Arch Linux is a very minimal Linux distribution. It installs only the base packages needed to get running; everything else must be installed as it is needed, so you may find yourself running the package installer (pacman -S) often. It was also nice that the network interface was set up with DHCP by default: all I had to do after the install was plug in an ethernet cable.
In the process of a successful attempt to get Rocket.Chat and MongoDB running on the Raspberry Pi 2 from CanaKit I found that the only way I could get the correct version of MongoDB 3.2 that works with Rocket.Chat was to install Arch Linux. I outlined how I installed Rocket.Chat in another post called Installing Rocket.Chat on Raspberry Pi 2.
The major issue is that as of NOOBS 1.5 there was no support for installing Arch Linux on the Raspberry Pi 2 using the NOOBS installer. NOOBS does display the option to install Arch Linux, but it would then tell me that it could not find the right version of Arch Linux for the Raspberry Pi 2. This required me to write the image to the SD card and boot from there.
All the directions I could find indicated that the installation of the Arch Linux boot files could be done from a computer. Unfortunately, that computer had to be running Linux, not macOS. I have a Mac and a Windows computer.
So I started up my handy Oracle VirtualBox running Ubuntu. But then, with help from Mayur Rokade, I learned that VirtualBox and Ubuntu do not support the SD card port on the Macintosh. He suggested connecting a cell phone as a USB device.
I could not get this to work, but then I found I had a USB-to-SD-card adapter from an Eye-Fi card.
But Ubuntu still would not see the device, so a post from mayurrokade.com was helpful. I ended up having to "eject" the USB device from Finder and then start Ubuntu in VirtualBox. Ubuntu did not see the device until the Mac released it.
Now following the official directions at archlinuxarm.org I continued the setup.
Replace sdX in the following instructions with the device name for the SD card as it appears on your computer. To see what your device is named, run a disk-listing command (for example, lsblk) in the Ubuntu terminal.
This will display the names of the drives that are attached. On my computer it was: sdb
Start fdisk to partition the SD card:
fdisk /dev/sdX
At the fdisk prompt, delete old partitions and create a new one. This will delete all data on the SD card.
Type o. This will clear out any partitions on the drive.
Type p to list partitions. There should be no partitions left.
Type n, then p for primary, 1 for the first partition on the drive, press ENTER to accept the default first sector, then type +100M for the last sector.
Type t, then c to set the first partition to type W95 FAT32 (LBA).
Type n, then p for primary, 2 for the second partition on the drive, and then press ENTER twice to accept the default first and last sector.
Write the partition table and exit by typing w.
Create and mount the FAT filesystem. These commands will create folders called boot and root in whatever directory you are in; I recommend working in /home/<username>. One partition is for booting and has to be FAT. The root partition, which is bigger, needs to be ext4.
mkfs.vfat /dev/sdX1
mkdir boot
mount /dev/sdX1 boot
Create and mount the ext4 filesystem:
mkfs.ext4 /dev/sdX2
mkdir root
mount /dev/sdX2 root
Download and extract the root filesystem (as root, not via sudo):
You will need to install the bsdtar package (on Ubuntu: sudo apt-get install bsdtar; on an Arch host it would be pacman -S bsdtar).
This will expand the file and put Arch Linux in the root folder which is on the SD card.
bsdtar -xpf ArchLinuxARM-rpi-2-latest.tar.gz -C root
sync
Move boot files to the first partition called boot:
mv root/boot/* boot
Unmount the two partitions:
umount boot root
Insert the SD card into the Raspberry Pi, connect ethernet, and apply 5V power.
Use the serial console or SSH to the IP address given to the board by your router.
Login as the default user alarm with the password alarm.
The default root password is root.
Follow this blog and volumeint twitter to find out about the next posts on how to get a free SSL certificate for your chat server and future post on functional programming.
|
OPCFW_CODE
|
import { ObjectId } from 'mongodb';
import { fromPlain, merge } from '../../src/transformer/utils';
import { EntityRelationship } from '../relationship/entity.relationship';
describe('fromPlain', () => {
it('should transform a plain object to class', () => {
const id = new ObjectId();
const id2 = new ObjectId();
const id3 = new ObjectId();
const plain = {
_id: id.toHexString(),
property: 'bar',
parent: id2,
parentAsReference: id2,
children: [id2, id3],
childrenAsReference: [id2, id3]
};
const entity = fromPlain(EntityRelationship, plain);
expect(entity).toBeInstanceOf(EntityRelationship);
expect(entity._id).toBeInstanceOf(ObjectId);
expect(entity._id?.equals(id)).toBe(true);
expect(entity).toHaveProperty('property', plain.property);
expect(entity.parent?.equals(id2)).toBe(true);
expect(entity.parentAsReference?.equals(id2)).toBe(true);
expect(entity.children).toHaveLength(2);
if (entity.children === undefined) {
throw new Error('Children are empty !');
}
expect(entity.children[0]).toBeInstanceOf(ObjectId);
expect(entity.children[0].equals(id2)).toBe(true);
expect(entity.children[1]).toBeInstanceOf(ObjectId);
expect(entity.children[1].equals(id3)).toBe(true);
expect(entity.childrenAsReference).toHaveLength(2);
if (entity.childrenAsReference === undefined) {
throw new Error('Children are empty !');
}
expect(entity.childrenAsReference[0]).toBeInstanceOf(ObjectId);
expect(entity.childrenAsReference[0].equals(id2)).toBe(true);
expect(entity.childrenAsReference[1]).toBeInstanceOf(ObjectId);
expect(entity.childrenAsReference[1].equals(id3)).toBe(true);
});
});
describe('merge', () => {
it('should merge an object into a class', () => {
const id = new ObjectId();
const id2 = new ObjectId();
const id3 = new ObjectId();
const entity1 = new EntityRelationship();
entity1._id = id;
entity1.property = 'test';
entity1.children = [id2, id3];
entity1.__shouldBeExcluded = 'shouldbeexcluded';
const entity2 = new EntityRelationship();
entity2.property = 'bad';
entity2.__shouldBeExcluded = 'shouldbeexcluded';
merge(entity2, entity1, ['__']);
expect(entity1._id.equals(id)).toBe(true);
expect(entity1).toHaveProperty('property', 'test');
expect(entity1.children).toHaveLength(2);
if (entity1.children === undefined) {
throw new Error('Children are empty !');
}
expect(entity1.children[0].equals(id2)).toBe(true);
expect(entity1.children[1].equals(id3)).toBe(true);
expect(entity2).toHaveProperty('_id');
expect(entity2._id.equals(id)).toBe(true);
expect(entity2).toHaveProperty('property', entity1.property);
expect(entity2.children).toHaveLength(entity1.children.length);
if (entity2.children === undefined) {
throw new Error('Children are empty !');
}
expect(entity2.children[0].equals(id2)).toBe(true);
expect(entity2.children[1].equals(id3)).toBe(true);
expect(entity2).toHaveProperty('__shouldBeExcluded', undefined);
});
});
|
STACK_EDU
|
A handful of dice can make a decent normal random number generator, good enough for classroom demonstrations. I wrote about this a while ago.
My original post included Mathematica code for calculating how close to normal the distribution of the sum of the dice is. Here I’d like to redo the code in Python to show how to do the same calculations using SymPy. [Update: I’ll also give a solution that does not use SymPy and that scales much better.]
If you roll five dice and add up the spots, the probability of getting a sum of k is the coefficient of x^k in the expansion of
(x + x^2 + x^3 + x^4 + x^5 + x^6)^5 / 6^5.
Here’s code to find the probabilities by expanding the polynomial and taking coefficients.
from sympy import Symbol

sides = 6
dice = 5
rolls = range(dice*sides + 1)

# Tell SymPy that we want to use x as a symbol, not a number
x = Symbol('x')

# p(x) = (x + x^2 + ... + x^m)^n
# where m = number of sides per die
# and n = number of dice
p = sum(x**i for i in range(1, sides + 1))**dice

# Extract the coefficients of p(x) and divide by sides**dice
pmf = [sides**(-dice) * p.expand().coeff(x, i) for i in rolls]
If you’d like to compare the CDF of the dice sum to a normal CDF you could add this.
from numpy import array, sqrt
from scipy.stats import norm

cdf = array(pmf).cumsum()

# Normal CDF for comparison
mean = 0.5*(sides + 1)*dice
variance = dice*(sides**2 - 1)/12.0
temp = [norm.cdf(i, mean, sqrt(variance)) for i in rolls]
norm_cdf = array(temp)
diff = abs(cdf - norm_cdf)

# Print the maximum error and where it occurs
print(diff.max(), diff.argmax())
Question: Now suppose you want a better approximation to a normal distribution. Would it be better to increase the number of dice or the number of sides per dice? For example, would you be better off with 10 six-sided dice or 5 twelve-sided dice? Think about it before reading the solution.
Update: The SymPy code does not scale well. When I tried the code with 50 six-sided dice, it ran out of memory. Based on Andre's comment, I rewrote the code using polypow. SymPy offers much more symbolic calculation functionality than NumPy, but in this case NumPy contains all we need. It is much faster and it doesn't run out of memory.
from numpy.polynomial.polynomial import polypow
from numpy import ones

sides = 6
dice = 100

# Create an array of polynomial coefficients for
# x + x^2 + ... + x^sides
p = ones(sides + 1)
p[0] = 0

# Extract the coefficients of p(x)**dice and divide by sides**dice
pmf = sides**(-dice) * polypow(p, dice)
cdf = pmf.cumsum()
That solution works for up to 398 dice. What's up with that? With 399 dice, the largest polynomial coefficient overflows. If we divide by the number of sides before raising the polynomial to the power dice, the code becomes a little simpler and scales further.
p = ones(sides + 1)
p[0] = 0
p /= sides
pmf = polypow(p, dice)
cdf = pmf.cumsum()
I tried this last approach on 10,000 dice with no problem.
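As a quick sanity check (my addition, not part of the original post), the pmf from polypow should sum to 1 and have mean dice*(sides + 1)/2, which is 17.5 for five six-sided dice:

```python
from numpy.polynomial.polynomial import polypow
from numpy import ones, arange

sides, dice = 6, 5

p = ones(sides + 1)
p[0] = 0        # a die never shows zero spots
p /= sides      # normalize each die's distribution first
pmf = polypow(p, dice)

total = pmf.sum()                      # should be ~1.0
mean = (arange(len(pmf)) * pmf).sum()  # should be ~17.5
```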
|
OPCFW_CODE
|
ADC: Need to convert 10V-32V(from battery 0-32V) range to 0-5V
I need to convert the 10V-32V range from a battery (full range 0V-32V) to a range of 0-5V before applying it to the ADC.
This is basically to get more resolution in my required range (10V-32V), since most battery applications do not require sensing below a certain voltage (e.g. Li-ion batteries).
To achieve what you are asking you will need a negative power rail; -5V would be enough. There is probably another way, but I need to know your ADC (or µC) part number so I can read its datasheet.
To do this properly requires precision components to establish the 10V offset. This is much harder and more expensive than a simple resistor divider. This is why all the answers are telling you to divide down and choose a more precise converter if necessary. A converter with 2 more bits of precision will be cheaper (incrementally) than the 0.1% voltage reference you need for analog-offsetting the voltage.
A voltage divider is needed if you have a real world signal that covers
a wider range than your ADC. Suppose you have a signal from a
transducer that goes from 0-20 Volts, and an ADC that works from 0-5
Volts. You need to divide the signal by 4 to get it into the range of
your ADC. You can do this by placing two resistors in series like this:
(A)----////---(B)---////----(C)
The wiggly parts are two resistors, Rab and Rbc. Connect point A to your
real world signal. Connect point (C) to ground. Connect point (B) to you
ADC input. At point (B), the voltage will be:
Vb = Va * (Rbc) / (Rab + Rbc).
If Va is 20V, and Rbc is 10K Ohms, and Rab is 30 K Ohms, then Vb will
be:
Vb = 20V * 10 / (10+30) = 5V
Thus you have converted a signal that is out of the range of your ADC to
a signal that is in range.
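To make the arithmetic concrete, here is a small sketch of my own that evaluates the divider formula with the resistor names from the diagram above (the resistor values for the 32 V case are hypothetical, chosen to give an exact 32:5 ratio):

```python
def divider_out(v_a, r_ab, r_bc):
    """Voltage at point B of the divider: Vb = Va * Rbc / (Rab + Rbc)."""
    return v_a * r_bc / (r_ab + r_bc)

# The example above: 20 V in, Rab = 30k, Rbc = 10k -> 5 V out
vb_example = divider_out(20, 30e3, 10e3)
print(vb_example)  # 5.0

# Hypothetical values for the asker's case: 32 V full scale down to 5 V
print(divider_out(32, 27e3, 5e3))  # 5.0
```

In practice you would also round to standard (E24/E96) resistor values and leave a little headroom below the ADC's full-scale voltage.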
Use a simple voltage divider (don't forget a capacitor). Just ignore the unused counts at the bottom. With a 10-bit or 12-bit A/D this would produce voltage measurements adequate for most battery management procedures.
This is what I would do, based on what you've written.
If you are planning to use the voltage for estimating battery capacity, be aware that it's hard to estimate the capacity from the voltage alone with accuracy better than, say, 10%.
Sorry, no one was able to understand my problem.
Actually the input is 0V to 32V, and I am giving it to the ADC of the controller (MPC5604). Before giving it to the controller I need to convert the levels to the range 0V to 5V, but in this process I need to treat an input of 10V as 0V and an input of 32V as 5V (rejecting anything below 10V). I need a circuit for this.
@user17715 - Nick has explained exactly the right way to do this, maybe you have not understood his answer.
@user17715 I think you are worried about loss of resolution. But your ADC is 10-bit! You are fussing over a difference of only 10 mV per step. If you need better resolution than 32 mV, you are better off moving to a 12-bit ADC than building some sort of precision offset voltage divider.
|
STACK_EXCHANGE
|
It has been more than 6 years since Internet Explorer last received an upgrade from Microsoft. Is testing on IE still relevant as per the end-user point of view?
I once had a client that used an older version of Internet Explorer on the terminal stations in their physical stores. We asked for an extra yearly fee (100k dollars) to test the application against this older version, hoping to motivate them to upgrade, but they were willing to pay.
Older IE is still widely used in corporate networks since it has more integration with windows security and networking mechanisms as well as with ms office suite.
Big corporations often have large lag in new technologies adoption since they would need to test numerous corporate applications for compatibility with new browsers.
So I wouldn't exclude from testing at least for corporate applications.
IE is (unfortunately) still around. It is not unusual for large government companies to have a deal with their hardware/software vendor that specifically mentions Microsoft products, including IE. They often require all their software to be compatible.
Government organizations, by inertia, will continue using IE until the contract runs out and they all get Linux machines with Firefox pre-installed.
As shown in the other answers, it really depends on your user base.
If you are targeting developers and high-end users, you probably won't have many (or any) using IE.
If you are targeting corporate or government environments, many may still be using IE (or obsolete versions of Chrome). Any update of the browser used by default on user's computers may break a number of proprietary applications which may not have been updated in years either, so they tend to stick with whatever works, because testing, upgrading, and all the dependencies that go with that is a lot of time (and money).
Anything in between (the public at large), you'll get a small percentage of IE users. The exact figure depends a lot on the target user base (country, age, revenue, tech-savviness...), so your best option is to measure it using your own stats/analytics. Once you have a figure, it's a business decision: is the revenue generated by that small portion of users larger than the cost of continuing to support IE and test on that platform?
Note that testing is really the tip of the iceberg here: the whole development process is affected by IE compatibility. In some environments maintaining support for IE can either be very costly, or very restrictive, or both (though IE11 is a lot better than some previous versions of IE were). So make sure the whole chain is involved. Unless you have business-specific reasons to support IE (as detailed above), you'll probably have the whole development team support you if you mention dropping support for IE!
Until a few weeks ago I would have said "maybe", because people who are still using Windows 7 and don't want to install a non-Microsoft browser (for whatever reason) were stuck with IE11, and couldn't install Edge. They probably make up the bulk of the 0.4% IE users mentioned in another answer.
But recently Microsoft has created a new version of Edge that can be installed on Windows 7, and is actively telling IE users to switch, so the number of people who are stuck with IE will probably decrease even further.
Microsoft bids farewell to Internet Explorer.
As per the latest information from Microsoft, support for its ageing browser Internet Explorer will end in 2021 (news here).
But many clients still require Internet Explorer, because they are still using older versions for reasons such as security integration with Microsoft tools.
I believe the answer is to be found in this URL: XBAP Support in IE Edge:
As you can see, there are formats, supported by Internet Explorer, not being supported by Edge (Internet Explorer's replacement).
As long as there are such formats (I know about XBAP; I have no knowledge about other formats), Internet Explorer must still be tested (unless, of course, you are legally sure that such formats will never be used).
Depends on your audience, but if you need stability, YES.
Real life example: last year (2019) I made a web application to be used by professors. I was later informed that they were unknowingly bypassing a validation and causing a bug (the validation was on the front-end, my mistake, granted). And wouldn't you know it: not only were they using IE, but IE 11 didn't have this issue. I had to go down the history of IE and eventually found that only IE6 (2001) or older would trigger the bug. I made it compatible and the issue was gone.
As a Tester, you can surely deal with testing on IE 11. The decision is, however, not only up to you. Therefore, I'd try to find out what matters to other stakeholders:
- marketing team might have certain opinions about what browser customers use
- if the system is for internal users, the company might have some policies about what browsers their employees use (this would most likely be common knowledge for you and the development team from day one)
- you can perhaps access logs and collect statistics about the User-Agent header, and therefore decide based on data which browsers you'll use for testing
- you can find global statistics like this one: https://www.w3counter.com/globalstats.php
Don't guess, ask other people who matter, collect data. Then see for yourself if testing on IE 11 makes sense in 2020 in your particular context.
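The log-statistics idea above can be sketched in a few lines (a hypothetical example of my own; the classification rules and sample User-Agent strings are not from the answer):

```python
# Sketch: tally browser families from an access log's User-Agent strings.
from collections import Counter

def browser_family(user_agent: str) -> str:
    """Very rough User-Agent classification; real logs need sturdier rules."""
    ua = user_agent.lower()
    if "trident" in ua or "msie" in ua:  # IE 11 identifies via Trident
        return "IE"
    if "edg" in ua:                      # Edge UA also contains "chrome"
        return "Edge"
    if "chrome" in ua:
        return "Chrome"
    if "firefox" in ua:
        return "Firefox"
    return "Other"

log_user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Trident/7.0; rv:11.0) like Gecko",
    "Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 Chrome/80.0 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64; rv:74.0) Gecko/20100101 Firefox/74.0",
]

counts = Counter(browser_family(ua) for ua in log_user_agents)
print(counts["IE"])  # 1
```

With real logs you would feed in the User-Agent column and let the resulting percentages drive the browser-matrix decision.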
As already said, this really depends on your user base.
In our case the requirement was to run on both the desktop version and the mobile/tablet version. Since our customers used our application on all of these devices, we had to evaluate which browsers/mobile browsers they were using (e.g. Opera, Internet Explorer, Safari, etc.).
So in our case we also made thoughts about considering Internet Explorer within our testing scope or not. Following points were important for us to make the decision whether we should consider IE within our testing scope or not:
Our customer -> More desktop or mobile related?
- We asked our customers whether they are using the products via the desktop and/or mobile version. This was very interesting for us because we also detected that customers with Apple products were more willing to pay for our products than Android users
Which browsers are mostly used for the desktop/mobile versions?
- Since we were also responsible for testing the versions in different countries, we used Statcounter to find out which browser was most used in, e.g., the German market.
As you can see, IE usage in the German market is very low, just 4.4%. This led to the decision to concentrate more on Chrome, Firefox and Safari, and less on Opera and IE.
In the end: since our customers weren't developers (or special users from the government using Internet Explorer), we focused testing on the most used browsers (Chrome, Firefox, Safari), and when a customer using Internet Explorer lands on our page we just show a pop-up: "You are using IE, this platform doesn't work with IE, please use e.g. xx browser".
|
OPCFW_CODE
|
In Introduction To Role-Based Security In SQL Server Reporting Services we introduced role-based security in SQL Server Reporting Services. In this article, we will discuss what you need to know about security to invoke the web service API.
There are two issues to address: authentication and authorization. First, you need to pass the proper credentials to the report server to avoid authentication errors (HTTP 401 access denied messages). Secondly, you need to have role-based security properly configured to avoid authorization exceptions from the API like the following:
The permissions granted to user 'REPORTING\scott' are insufficient for performing this operation.
Let’s take a look at common scenarios for getting your software to work.
Credentials
By default, every call to Reporting Services must be an authenticated call. To provide credentials to authenticate, you must use the Credentials property of the web service proxy class. Take the following example, which tries to retrieve a list of delivery extensions from the server:
ReportingService rs = new ReportingService();
rs.Credentials = System.Net.CredentialCache.DefaultCredentials;
Extension[] extensions = rs.ListExtensions(ExtensionTypeEnum.Delivery);
The DefaultCredentials property represents the credentials for the current security context of the process. For a client-side application, like a Windows Form application or a console mode program, the credentials will be the credentials of the user executing the program. If you are receiving the “permissions are insufficient” exception with this code in a client application, the user is authenticated but not in a reporting services role with authorization to complete the task. See the later section on role-based security.
If you are seeing an access denied error message from a client application then the report server cannot authenticate the client. Perhaps the report server is not in the same domain as the user’s machine. In this scenario there are at least two options available.
First, you could create a ‘shadow account’ on the reporting server by duplicating the user’s domain login and password on the report server. Creating a shadow account can be hard to maintain, particularly if a password change policy is in effect for the domain, because the passwords must remain synchronized. Alternatively, you can pass credentials for an account that exists on the report server using the NetworkCredential class:
ReportingService rs = new ReportingService();
rs.Credentials = new NetworkCredential(username, password, domain);
Extension[] extensions = rs.ListExtensions(ExtensionTypeEnum.Delivery);
Obviously, you need to take extreme care in where and how you store the username and password on a client machine. Hard coding the values into the program as we do above is inflexible and vulnerable to a disassembler. Storing the values in a configuration file in plaintext is even more vulnerable. My suggestion is to use Microsoft's data protection API (DPAPI) to keep the values encrypted.
Credentials and ASP.NET
Determining what DefaultCredentials represents in an ASP.NET environment is more difficult. In a default installation, with no impersonation in place, DefaultCredentials will be the credentials for the ASP.NET process. The ASP.NET process runs as the ASPNET account (in IIS 5.0) or the NETWORK SERVICE account (in IIS 6.0). At this point we need to break the scenario down into local reporting server versus remote reporting server environments.
If the web application is on the same server as the Reporting Services web service, the call will authenticate using DefaultCredentials, but you are probably seeing the “permissions are insufficient” exception. One solution to this problem is adding the ASPNET or NETWORK SERVICE account into a role in Reporting Services, but take care before making this decision. If you were to place the ASPNET account into the System Administrators role, for example, anyone with access to your web application is now a Reporting Services administrator.
Alternatively, you can use impersonation in ASP.NET. You can enable impersonation for the application, for a subdirectory of pages, or for individual pages, using the identity element in web.config (see more resources for additional information). When using impersonation, it is important to deny access to anonymous users, as shown below:
<system.web>
   <authentication mode="Windows"/>
   <identity impersonate="true"/>
   <authorization>
      <deny users="?"/>
      <allow users="*"/>
   </authorization>
</system.web>
With impersonation, the ASP.NET page will execute with the security context of the client, and web service calls to the same machine will also be made with the security context of the client (using DefaultCredentials). You will still need to configure role-based security for each user to give each user authorization to perform actions.
When a web application makes web service calls to a remote report server, there are additional complications. If you are using impersonation, there is a one-hop limit with NTLM authentication. The client’s credentials make one hop from the client machine to the web server, and ASP.NET can use these credentials to impersonate the client on the same machine only. For ASP.NET to use the credentials on another remote machine would require the credentials to make a second hop, which does not happen - the call will go to the remote machine with the credentials of the ASP.NET process instead. Since the ASP.NET process runs under a local machine account by default, the remote server will not authenticate the credentials and the call will fail with an access denied message.
There are a number of solutions in this scenario.
First, you can look at enabling Kerberos delegation, which is beyond the scope of this article, but you can read about delegation in the article How To Configure an ASP.NET Application for a Delegation Scenario. Using delegation would allow you to have the client’s credentials make the additional hop to reach the report server, which in turn gives you more granular control over authorizations by placing users and groups into roles instead of process accounts.
Another option is to run the ASP.NET process under an account with permissions to the report server. For example, the ASP.NET process could run under a domain account. For details, see the article: ASP.NET Process Identity. A similar strategy is to synchronize the ASPNET / NETWORK SERVICE accounts on both the application server and the reporting server by matching their passwords and configuring ASP.NET to use the password. With these options you’ll need to add the ASP.NET process account to a role on the report server. Unfortunately, anyone with access to your web application will now be in this role, so your application becomes responsible for more granular authorization checks.
Finally, you can pass credentials from ASP.NET to the reporting server using the NetworkCredential class shown earlier in the article. All the same caveats apply in regards to storing a username and password in code or in an XML config file.
Once you’ve avoided all of the access denied messages, it’s time to configure role-based security.
Role-based Security Settings for Web Services

Regardless of which set of authenticated credentials reach the report server, you’ll need to set up role-based security. Remember from the last article that, by default, only local administrators on the reporting machine are in a role: the System Administrators role.
In order to determine what roles to assign you’ll need to know what tasks the user will perform with the web service API. Unfortunately, the API documentation does not list what permissions are required for each method to complete successfully. By experimenting with the report manager UI, the web service API, and using educated guesswork, you can find the minimum amount of permissions needed.
As an example, the ListExtensions method shown earlier in code only requires the caller to be in the System User role – the least privileged role available. A system user has permissions to view the report server properties and view shared schedules, but not view any reports or folders.
Callers who are using the ListChildren method will need to be in at least a Browser role for the item they are trying to view. Callers using the CreateRole method need to be in the System Administrators role. All the above assumes the default setup of Reporting Services, as an admin can modify the tasks permitted for each role. For an introduction on adding users and groups into roles, see our previous article.
In this article, we outlined some common scenarios for your web service API calls to be authenticated and authorized in Reporting Services. The number of possible environments and configurations is quite large, but hopefully you can extrapolate information from this article to match your specific setting.
More SQL Server Reporting Services articles
How To Create a DPAPI Library
Building Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication
How To Configure an ASP.NET Application for a Delegation Scenario
ASP.NET Process Identity
Troubleshooting Kerberos Delegation
Please vote for this option here: VOTE HERE

- Enable the ability to assign button combinations
- Enable assigning multiple buttons to the same hotkey

I would like more keyboard/controller/mouse hotkey options. For instance, there is currently no hotkey to:

- go to the next system
- go to the next alphabet letter
- go to the bottom of the list
- go to the top of the list
- open options
- edit a file
- mark a favorite
- disable the mouse
- integrate mouse and controller (make them act as one)
- record a video
- take a snapshot

Yet we already have shortcut keys to:

- show status
- show sources
- show developers
- show publishers
- set star rating

And there are odd shortcut keys: volume up = "+" but volume down = "n", and page down = "next" (I did check this and it works correctly).

Also, I'd like more ways to quickly edit the location of image directories. This could be handled by XML, a cfg file, a built-in program, or simply a bulk edit of the image root. For example, I have my Nintendo 64 images in d:\images\Nintendo 64, in all the sorted categories (front, back, clear logo, etc.). The default location is d:\frontends\launchbox\images\Nintendo 64\Images, so I'd like a bulk-edit feature where I put in d:\images\Nintendo 64 and everything moves there.
To the point... I decided that I would install Windows 95 using DOSBox for my games that won't run on Windows 10 or plain DOSBox, but how do I set up my shortcuts to go, in order: icon / DOSBox / Windows 95 / game?
Hi, I'm a long-time LaunchBox user. I'm just writing here to ask if someone else is having this "issue". When I use LaunchBox on my desktop PC, I would like to be able to choose whether or not to run a game fullscreen. Here is the matter: it would help if, other than Configure, there were another entry point linkable to the game. For example, for DOSBox games I generally use my own .bat files, which is good for pathing purposes; besides GAME.bat, which loads GAME.CONF, I would like another one, GAME_FS.bat ("FS" for fullscreen), which loads GAME_FS.CONF. But I only see two fixed entry points: Game (CTRL+P) and Configure (CTRL+G). Let's take for granted that "Game" runs it fullscreen (it would be simpler that way, since BigBox may only launch the first entry point); it would then be nice to have another entry point for launching a "Game Windowed" version. (It could also be handled in BigBox with a popup, but anyway that's not the major point.) I don't know if all of this makes sense to you; I hope so. Just in case it is already doable and I missed it, please show me the way. P.S.: Thank you for your app, Jason; it really shows how much you care for it and how much you care for your customers.
So here's the SafeMoon Wallet app. It's quite popular now, so let's explore why. Here is the wallet I just created with this 12-word phrase, and here I can see my tokens: I can get SafeMoon, SafeMoon V2, Smart Chain, BNB. Basically I think this app is just good if you're into SafeMoon; it's the best wallet app to have for that. There are just two chains, Ethereum and Binance. For example, if I want to swap, I can just do it here: I enter the amount and then swap it to SafeMoon. Here I can see the slippage tolerance, standard speed, and transaction time limit, and I can set those up too. But if I want to swap from Ethereum, say, and select a token, I can't select niche new tokens on Ethereum. So it's not like PancakeSwap or Uniswap on mobile; don't expect this app to be like that. That's not the main feature. If you want rare Ethereum tokens, you just need to go to Uniswap and get them there.

I think the main feature of this app, why it exists and why it's popular, is that it simplifies the buying process for SafeMoon. Before this app, to get SafeMoon you needed to go through something like three exchanges, exchanging one token to another, before you could get SafeMoon. Now you can just get this SafeMoon Wallet, deposit something (you can receive Ethereum; here is my wallet address), and then get some SafeMoon, or send it, without going through three other exchanges as it was a year or so ago. So this is a nice place, if you believe in the SafeMoon token and its development and future, to get SafeMoon and store it.

Then you have settings. You can switch wallets (there is my wallet), and you can reveal your passphrase and your private key. Why do that? So you can import your SafeMoon wallet somewhere else. You can add a new wallet from here, so you can have multiple ones. Under security you can enable Touch ID, reveal your passphrase or private key, and change your password. At the bottom you can change your default currency, and that's basically it. To receive Ethereum you just go here again, and you can send tokens to this address. You can also buy SafeMoon with Ethereum, or use a bank transfer (not working yet), or use MoonPay or wire checkout to buy. You can also calculate reflections: there are my reflections, and you can add a new reflection token. So it seems you can actually add tokens here; what you need is just to add a contract address. Sorry guys, you just copy that from etherscan.io, and then you can import, I think, any other token based on Ethereum or Binance; you will see the name, symbol, and decimals, and then just tap "add token". But you need to be pretty careful with this, because one mistake in your token contract address and that's it. Hope this overview is helpful. Thank you for watching!
[SCIM] sKim installation problem (kpackage-yast-shell)
liucougar at gmail.com
Sat Oct 2 11:29:21 UTC 2004
On Sat, 02 Oct 2004 08:48:51 +0200, sunwukung <sunwukung at tvcablenet.be> wrote:
> You tell me :
> > Oops, the file I refered to in my last email is wrong, you should use
> > this one instead:
> > http://prdownloads.sourceforge.net/scim/skim-1.0.0-1suse.i586.rpm?download
> As I told you before, I tried to install this rpm package. Its size is
> 333 kb on the download site. If you remember, I told you that if I try to
> install it with kpackage (right click - kpackage), it doesn't display. If
> I do it with yast (left click on the rpm package and select it in the
> upper right corner of the yast display) it tells me that it doesn't
> exist on this cdrom. It tells me that it doesn't exist on this cdrom (the
> number 1) because it comes from the rpm files selected in konqueror.
> So that's normal. So instead of selecting it to install, I tried
> selecting it with the right click - "update if a new version exists" and
> even "update unconditionally". Nothing for either of these menu items.
> And I also tried with the classical rpm shell command; it doesn't work.
I do think http://prdownloads.sourceforge.net/scim/skim-1.0.0-1suse.i586.rpm?download
is the right rpm for SuSE users.
AFAIK, others can install skim with skim-1.0.0-1suse.i586.rpm under
SuSE successfully; please confirm that with other people who use skim,
if you can find any.
(The developer in charge of the SuSE rpm is on vacation now, so I am
afraid the latest version of skim will be available after 15th, Oct)
> This was why you send me the other skim-1.0.0 i386.rpm file. Then I
> thought it wasn't a bad file.
That file is for fedora, not for SuSE
> > Then you are just one step away from the final success ;)
> I hope.
The reason why skim can not find its plugin is that the wrong rpm is
installed, I think.
> > replace the scim command with skim (in the file which invokes the scim)
> Wow, I am sorry, but could you please give me an idea of the name of this
> file? I am a newbie and I may try to search, but I want to know a little
> bit where to look.
I am not very sure about that. I guess it should be in a file named
xim under your X11 dir.
> P.S. ; I read your name liucougar. Are you a chinese people or an
> european one or something else ?
:) I do not think anyone other than CJK would develop such kind of
app, don't you think so?
"People's characters are strengthened through struggle against
difficulties; they are weakened by comfort."
- Old Chinese adage
The Xcode/Mavericks "keychain-access" bug has been fixed and available in build 1248.
Thanks everyone for posting your information about this bug.
Any resolution on this error -9000: "The bundle [Bundle ID, ex: com.myapp.mobile] at bundle path 'Payload/[My App Name].app' is not signed using an Apple submission certificate." at SoftwareAssets/SoftwareAsset (MZItmspSoftwareAssetPackage) when using Application Loader to upload an app?
If the app is compiled on a non-Mavericks machine, the validation tool fails and Corona emits a warning:
warning: Unable to run the lipo command: /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/lipo: can't figure out the architecture type of: [APP PATH/APP]
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/lipo: can't figure out the architecture type of: [APP PATH/APP]
warning: Application failed codesign verification. The signature was invalid, contains disallowed entitlements, or it was not signed with an iPhone Distribution Certificate. (-19011)
codesign_wrapper-4.1: using Apple CA for profile evaluation
AssertMacros: (__builtin_constant_p(fat.magic) ? ((__uint32_t)((((__uint32_t)(fat.magic) & 0xff000000) >> 24) | (((__uint32_t)(fat.magic) & 0x00ff0000) >> 8) | (((__uint32_t)(fat.magic) & 0x0000ff00) << 8) | (((__uint32_t)(fat.magic) & 0x000000ff) << 24))) : _OSSwapInt32(fat.magic)) == 0xcafebabe, file: codesign.c, line: 249
AssertMacros: code_signatures, file: codesign_wrapper.c, line: 945
And if the same code, using the same certificates, is built on a Mavericks machine, Corona SDK reports no errors; it even offers to upload the app to the App Store, but then bails with "metadata not found" and "file not found" errors. Manually trying to upload to the App Store results in the error -9000 mentioned above.
The certificates are all freshly created and are fine, i.e. they have not expired or anything like that.
Would the fix be available to starter users or just those with a subscription?
As it stands now, building is totally unusable (public build 1202), and the only way to use it is to buy a subscription. Is that right?
Testking 70-696 Questions are updated and all 70-696 answers are verified by experts. Once you have completely prepared with our 70-696 exam prep kits you will be ready for the real 70-696 exam without a problem. We have Latest Microsoft 70-696 dumps study guide. PASSED 70-696 First attempt! Here What I Did.
Q21. HOTSPOT – (Topic 5)
You have a deployment of Microsoft System Center 2012 R2 Configuration Manager. All client computers have the Configuration Manager client installed.
You deploy two applications named App1 and App2. App1 is a large application that is used infrequently.
Some users receive notifications indicating low disk space on their client computer.
You need to uninstall App1 from any client computer that has both App1 and App2 installed and that has less than 1GB of free disk space.
What should you do? To answer, select the appropriate options in the answer area.
Q22. – (Topic 4)
You need to meet the application requirements of App3. To which server should you install App3?
Q23. HOTSPOT – (Topic 4)
You need to prepare ComputerD and ComputerE to meet the technical requirement. Which action should you perform on each computer? In the table below, identify the action
to perform on each computer. Make only one selection in each column.
Q24. – (Topic 5)
You work for a company named Contoso, Ltd.
The network contains one Active Directory domain named contoso.com.
Users have client computers and devices that run the following operating systems:
⢠Windows 8.1 Enterprise
⢠Windows 7 Enterprise
⢠Windows Phone 8.1
⢠Windows RT 8.1
Contoso uses Microsoft System Center 2012 R2 Configuration Manager and Windows Intune.
The Windows Intune connector is not implemented. Contoso has an internal Microsoft SharePoint 2013 portal.
Contoso develops a line-of-business application named App1. App1 has builds for all of its client platforms.
You need to recommend a solution for distributing App1 to all of the client computers and devices.
What is the best recommendation? More than one answer choice may achieve the goal. Select the BEST answer.
A. Use Configuration Manager for distributing App1 to Windows Enterprise clients. Use Windows Intune for distributing App1 to Windows RT, Windows Phone, and iOS clients.
B. Use Configuration Manager for distributing App1 to Windows clients. Use Windows Intune for distributing App1 to iOS clients.
C. Copy the App1 builds to the SharePoint 2013 portal and grant the users read access to the portal. To the Contoso users, send an email message that contains a URL from which to download App1 from the portal.
D. Use Configuration Manager for distributing App1 to Windows Enterprise clients. Publish App1 to the Windows Store, the Windows Phone App+Game store, and the App Store.
Q25. – (Topic 4)
You need to provide a solution to meet the technical requirements of Baseline1.
What is the best approach to achieve the goal? More than one answer choice may achieve the goal. Select the BEST answer.
A. Create a new collection from the baseline deployment.
B. Modify the configuration item properties.
C. Create a collection that has a limiting collection.
D. Modify the collection properties.
Explanation: References: https://technet.microsoft.com/en-gb/library/gg712331.aspx
Q26. DRAG DROP – (Topic 2)
You need to recommend a solution for monitoring the application usage of the Seattle office users.
What should you include in the recommendation? To answer, drag the appropriate servers to the correct actions in the answer area. Each server may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
Q27. – (Topic 5)
You deploy the Endpoint Protection client in Microsoft System Center 2012 R2 Configuration Manager to all client computers. You plan to deploy an antimalware policy to the users in the research department.
You need to ensure that the research department users can exclude some processes from the Endpoint Protection client manually. Which settings should you modify in the antimalware policy?
A. real-time protection
B. default actions
C. threat overrides
Q28. – (Topic 5)
You have a deployment of Microsoft System Center 2012 R2 Configuration Manager.
You need to ensure that you can view the client site assignment status messages of all clients by using Configuration Manager, even if the management point is unavailable.
Which site system role should the clients use?
A. an out of band service point
B. a reporting services point
C. a System Health Validator point
D. a fallback status point
Q29. – (Topic 3)
You need to design a solution to deploy App3. What should you do?
A. Publish App3 as a RemoteApp program.
B. Assign App3 to users by using a Group Policy object (GPO)
C. Publish App3 as an App-V package.
D. Install App3 locally on the client computers.
Q30. HOTSPOT – (Topic 5)
You have two applications named App1 and App2. App1 is a 32-bit application and App2 is a 64-bit application.
You sequence the applications as shown in the following table.
You have four computers configured as shown in the following table.
You need to identify the computers on which you can run the virtualized applications.
Which computers should you identify for each application? To answer, select which computers can run each application in the answer area. Computers may be able to run one of the applications, both of the applications, or none of the applications.
[svlug] Interview questions?
akkana at shallowsky.com
Thu Jan 19 17:32:00 PST 2006
lordSauron writes (re keyboard shortcuts):
> They really don't, but it's the people that don't know and refuse to
> learn that should be kicked out.
Well, certainly they'd better know basics like ^C (as Bill K. says,
that's like knowing which one is the brake pedal). But keyboard
shortcuts in general? As a major fan of keyboard shortcuts myself,
it's fairly clear to me that a lot of people (even established
developers) don't use them much at all, and rely on the mouse. It's
a matter of taste, and someone not knowing ctrl-alt-backspace or
ctrl-alt-keypad+ wouldn't necessarily make me think they didn't know
Linux. (I only use ctrl-alt-keypad+ myself because it's sometimes
handy when using a laptop to connect to projectors.)
However, you could test a candidate's comfort with man pages
using uncommon problems like this, e.g.: Someone asks you what
ctrl-alt-keypad+ does. Where would you look to find out?
What if you couldn't use the web? You can find out whether they use
apropos and man intelligently, or, alternately, whether they know
where the man pages are located and know how to use grep. Either
answer demonstrates knowledge, and it might be interesting to see
which approach they take.
Some other useful QA questions:
I like the kill/ps questions that have already been mentioned.
You could add some additional questions to test their understanding
of what they're doing, such as: What's the difference between ctrl-C
and ctrl-backslash, and how would you do the equivalent using the
kill command? (If they don't know the signal names or numbers
offhand, that's okay if they know where to look them up.)
How would you stop a process temporarily, then continue it?
A program seems hung and you want to kill it: what do you do?
What if your first approach doesn't kill it: what next?
What if you don't want to kill it, but instead want to find
out where it's hung: is there anything you can do?
How would you get a stack trace from a program that's crashing?
What other useful things might you do while you were there?
How do you record the output of a program, in case you want to
refer to it later?
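For what it's worth, the mechanics behind several of those questions can be sketched in a few lines of Python (a POSIX-only illustration, not something you'd ask a candidate to write): SIGSTOP/SIGCONT are the stop/continue pair, and os.kill is the programmatic equivalent of the kill command.

```python
import os
import signal
import subprocess
import time

# Spawn a long-running child process to practice on.
proc = subprocess.Popen(["sleep", "60"])
time.sleep(0.1)  # give the child a moment to finish exec'ing

# Stop the process temporarily (like Ctrl-Z at a shell), then continue it.
os.kill(proc.pid, signal.SIGSTOP)
os.kill(proc.pid, signal.SIGCONT)

# Ctrl-C delivers SIGINT; ctrl-backslash delivers SIGQUIT (which also dumps
# core).  `kill -INT <pid>` and `kill -QUIT <pid>` are the command-line
# equivalents.
os.kill(proc.pid, signal.SIGINT)
ret = proc.wait()  # a negative return code means "killed by that signal"
print(ret)
```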
Bonus questions (not strictly needed for QA, but does test mastery
of Linux) might test how comfortable they are with pipes and
redirection on the command line. Maybe a sample problem that
involves using grep | sort | uniq or grep | sed. But I've known
lots of competent QA people who might not be able to answer that,
so I wouldn't refuse to hire someone based on lack of shell knowledge.
OnePlus 6 (enchilada) not charging while on Windows 11 (will provide any kind of log details for debugging purposes)
Prerequisites
[x] Have you read the readme?
[x] Do you (want to) have your device in the supported list?
[x] Does the device have a Snapdragon 845 SOC?
Description
Charging only works when I boot into the OP firmware slot; it does not charge at all while on Windows 11.
IIRC OnePlus is very picky when it comes to chargers, it only allows certain chargers to work. Just try different chargers and see if you have any luck in getting it to charge in W11.
Tried over 10 chargers with different cables too; it still won't charge, and the only way is booting back to OxygenOS.
I am able to charge while in windows by plugging the phone to one of the usb ports on my pc. It charges quite slowly though
the 5v4a Nokia Lumia 950XL charger works for me with OP6. This one specifically https://www.ebay.ca/itm/184167639955
I am also having this issue: it will pull current just fine in TWRP, the stock OS, and recovery mode, but when I load into Windows it draws 0 current. I tried hooking up my power bank (C-to-C) and the phone started charging the power bank! I'm not sure if this is a new issue, but it definitely needs to be looked into. I confirmed all this with two different USB volt/amp/watt testers, with A-to-C and C-to-C, with a Quick Charge 3.0 brick and the charger that comes in the box, heck, even my PC's dedicated charge port.
Could you try the images I sent in Telegram group just now? I'm not sure if it will work.
Charging with a PD charger is supported now https://github.com/edk2-porting/edk2-sdm845/commit/f641c801b0d033e3a5a831671d1153d1f49cad1e
Retrieve lost emails from the IncrediMail app with Flash Player in 2021
Good day
I am trying to retrieve lost emails. This problem is full of old technology: IncrediMail shut its doors in 2020, Adobe Flash Player is not supported in 2021, and the computer's OS is 32-bit Windows XP.
I uninstalled Adobe Flash player because it displayed an immovable icon on the desktop all of the time.
The problem is that the Incredimail app uses flash player to run. The result is that I cannot open the application and export the emails.
Is there a way to retrieve the emails in the inbox/sent/archived folders?
Thank you for your help and time.
Adobe removed the Flash Player download link from their website in 2021. Even if you have a spare copy of the installer on your computer, it still won't work, as it's an "online installer" that retrieves a copy of the latest version from Adobe's website, which Adobe also deleted in 2021; so the installers Adobe provided for free download in 2020 won't work in 2021. However, I managed to get an offline installer that works for Windows, Mac, Linux, ActiveX, NPAPI and PPAPI. No internet connection is needed, and it's version <IP_ADDRESS>5, the latest version. https://gitlab.com/desbest/flash-player
Also in 2021 the web browsers Edge, Chrome and Firefox removed support for Flash and all NPAPI plugins. So if you want a web browser that supports flash, use Basilisk http://basilisk-browser.org
"Incredimail shut it's doors in 2020" @John
@debest The link of the offline installer allowed me to open the Incredimail application and to export the mails. Thank you!!
You can take the help of this tiny little utility, RecoveryTools
IncrediMail Migrator, to retrieve lost emails from the IncrediMail application. As IncrediMail support has already shut down, you have limited choices.
If you have IncrediMail 2.5, you can use eM Client. It can convert the files.
https://www.emclient.com/
My IncrediMail was version 2.0, so I had to convert the emails manually, but with great support from eM Client.
To install IM, run install_flash_player.exe and install_flash_player_ax.exe. The Flash files needed to operate IM are included.
For Win XP, delete or rename the files in C:\WINDOWS\system32\Macromed\Flash (Flash32_32_0_0_445.ocx and FlashUtil32_32_0_0_445_ActiveX.exe).
Once the older .ocx and .exe files are deleted or renamed, you can open IM with the clock and notifier.
For Win 7 or later, replace and rename the files in C:\Windows\SysWOW64\Macromed\Flash\ (Flash32_32_0_0_446.ocx and FlashUtil32_32_0_0_446_ActiveX.exe).
For Win 10, run reinst_flash_w10.cmd.
The set of all finite subsets of $\mathbb{N}$ is similar to the set of all countably infinite subsets of $\mathbb{N}$ whose complement is finite.
Is my proof of this proposition correct?
Proof:
Let $X$ be the set of all finite subsets of $\mathbb{N}$.
Let $Y$ be the set of all countably infinite subsets of $\mathbb{N}$ such that their complement is finite.
$\implies Y:=\{G \in P(\mathbb{N}): |\mathbb{N}\backslash G |<\aleph_{0}\}$.
Let $A_{K}:=\mathbb{N}\backslash K,\forall K \in P(\mathbb{N})$ such that $|K|<\aleph_{0}$.
Let $f:X \rightarrow Y$ be a function such that $f(K):=A_K$.
Let $S,T \in X$ such that $f(S)=f(T)$.
$\implies A_S=A_T \implies \mathbb{N}\backslash S=\mathbb{N}\backslash T \implies S=T$.
$$\begin{align}\implies f \text{ is one-one.}\tag{1}\end{align}$$
Let $W \in Y$(arbitrary).
Now, $\mathbb{N}\backslash W$, by definition of $Y$, is finite $\implies \mathbb{N}\backslash W \in X$.
$\implies f(\mathbb{N}\backslash W)=A_{(\mathbb{N}\backslash W)}=\mathbb{N} \backslash(\mathbb{N}\backslash W)=W$.
$$\begin{align}\implies f \text{ is onto.}\tag{2}\end{align}$$
$(1),(2)\implies f$ is bijective.
Thus, $X \sim Y$.
Observation:
I was thinking about the uncountability of $P(\mathbb{N})$. We know that if we partition it into $A$ and $B$, where $A$ contains all finite subsets of $\mathbb{N}$
and $B$ all the countably infinite ones, then $A\sim \mathbb{N}$, and so it is the other set which gives $P(\mathbb{N})$ its actual distinction over $\mathbb{N}$, making it an uncountable set of the smallest uncountable cardinality ($2^{\aleph_{0}}=\mathfrak{c}$, provided that the Continuum Hypothesis is true).
By this proposition establishing a similarity between $X$ and $Y$, I feel it is the countably infinite subsets of $\mathbb{N}$ whose complement is also countably infinite that give $P(\mathbb{N})$ its uncountable nature.
Your proof is correct so far, but not really necessary: on any base set $X$ the complement operator is bijective, as it is self-inverse.
But $Y$ is by definition the image of $X$ under the complement operator. Thus trivially there exists a bijection between these two sets.
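Written out, the argument is one line of symbol-pushing:

```latex
c : P(\mathbb{N}) \to P(\mathbb{N}), \qquad c(A) := \mathbb{N} \setminus A,
\qquad c(c(A)) = \mathbb{N} \setminus (\mathbb{N} \setminus A) = A.
```

Since $c$ is its own inverse it is a bijection, and $Y = c[X]$ by definition, so the restriction $c|_X : X \to Y$ is the desired bijection.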
The real question is: what do you mean by "the sets are similar"? What you've proven is that these sets are of equal cardinality. Usually similarity in set theory requires you to have totally ordered sets and a bijection that preserves order.
Here, similarity is taken to mean that the two sets have the same cardinality.
This document provides an overview of how Klipper implements robot motion (its kinematics). It is intended for developers interested in improving Klipper, and for hobbyists who want a deeper understanding of the mechanics of their machines.
velocity(time) = start_velocity + accel*time
Consider the following two moves in the XY plane:
end_velocity^2 = start_velocity^2 + 2*accel*move_distance
Specifically, the code calculates what the velocity of each move would be if it were limited to this virtual "acceleration to deceleration" rate (half the normal acceleration rate by default). In the above picture the dashed gray lines represent this virtual acceleration rate for the first move. If a move can not reach its full cruising speed using this virtual acceleration rate then its top speed is reduced to the maximum speed it could obtain at this virtual acceleration rate. For most moves the limit will be at or above the move's existing limits and no change in behavior is induced. For short zigzag moves, however, this limit reduces the top speed. Note that it does not change the actual acceleration within the move - the move continues to use the normal acceleration scheme up to its adjusted top-speed.
Once the look-ahead process completes, the print head movement for the given move is fully known (time, start position, end position, velocity at each point) and it is possible to generate the step times for the move. This process is done within "kinematic classes" in the Klipper code. Outside of these kinematic classes, everything is tracked in millimeters, seconds, and in cartesian coordinate space. It's the task of the kinematic classes to convert from this generic coordinate system to the hardware specifics of the particular printer.
Klipper uses an iterative solver to generate the step times for each stepper. The code contains the formulas to calculate the ideal cartesian coordinates of the head at each moment in time, and it has the kinematic formulas to calculate the ideal stepper positions based on those cartesian coordinates. With these formulas, Klipper can determine the ideal time that the stepper should be at each step position. The given steps are then scheduled at these calculated times.
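As an illustration only (not Klipper's actual C implementation), an iterative solver of this kind can be sketched in Python: given the ideal distance-vs-time function for a constant-acceleration move (the formula appears just below), bisect on time to find when the head crosses each step position.

```python
def move_position(t, start_velocity, accel):
    """Ideal distance traveled at time t under constant acceleration."""
    return (start_velocity + 0.5 * accel * t) * t

def find_step_time(step_dist, move_time, start_velocity, accel, tol=1e-9):
    """Bisect on time until the ideal position matches the step distance."""
    lo, hi = 0.0, move_time
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if move_position(mid, start_velocity, accel) < step_dist:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Example: starting from rest at 1000 mm/s^2, the head crosses the
# 0.5mm mark at about 31.6ms into the move.
t = find_step_time(0.5, 1.0, 0.0, 1000.0)
```

Klipper's real solver is more refined than plain bisection, but the idea is the same: invert "position as a function of time" numerically, once per step.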
The key formula to determine how far a move should travel under constant acceleration is:
move_distance = (start_velocity + .5 * accel * move_time) * move_time
move_distance = cruise_velocity * move_time
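Taken together, the constant-acceleration formulas above can be written directly in Python. This is a sketch with hypothetical function names, not Klipper's actual code:

```python
import math

def velocity_at_time(start_velocity, accel, time):
    # velocity(time) = start_velocity + accel*time
    return start_velocity + accel * time

def accel_move_distance(start_velocity, accel, move_time):
    # move_distance = (start_velocity + .5 * accel * move_time) * move_time
    return (start_velocity + 0.5 * accel * move_time) * move_time

def end_velocity(start_velocity, accel, move_distance):
    # end_velocity^2 = start_velocity^2 + 2*accel*move_distance
    return math.sqrt(start_velocity**2 + 2.0 * accel * move_distance)
```

For example, accelerating from rest at 2 mm/s^2 for 3 seconds travels 9mm, and both velocity formulas agree that the move ends at 6 mm/s.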
cartesian_x_position = start_x + move_distance * total_x_movement / total_movement
cartesian_y_position = start_y + move_distance * total_y_movement / total_movement
cartesian_z_position = start_z + move_distance * total_z_movement / total_movement
Generating steps for cartesian printers is the simplest case. The movement on each axis is directly related to the movement in cartesian space.
stepper_x_position = cartesian_x_position
stepper_y_position = cartesian_y_position
stepper_z_position = cartesian_z_position
Generating steps on a CoreXY machine is only a little more complex than basic cartesian robots. The key formulas are:
stepper_a_position = cartesian_x_position + cartesian_y_position
stepper_b_position = cartesian_x_position - cartesian_y_position
stepper_z_position = cartesian_z_position
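A minimal Python sketch of this conversion (hypothetical function name, illustration only):

```python
def corexy_stepper_positions(x, y, z):
    # On CoreXY, moving motors A and B in the same direction moves the
    # head in X; moving them in opposite directions moves it in Y.
    return (x + y, x - y, z)
```

Note that a pure X move turns the A and B steppers by the same amount, while a pure Y move turns them by equal and opposite amounts.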
Step generation on a delta robot is based on Pythagoras's theorem:
stepper_position = (sqrt(arm_length^2 - (cartesian_x_position - tower_x_position)^2 - (cartesian_y_position - tower_y_position)^2) + cartesian_z_position)
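The same formula as a Python sketch (hypothetical names; Klipper's real implementation lives in its delta kinematic class):

```python
import math

def delta_stepper_position(x, y, z, tower_x, tower_y, arm_length):
    # Carriage height on the tower such that the fixed-length arm reaches
    # the nozzle at (x, y, z): Pythagoras in the horizontal plane.
    dx = x - tower_x
    dy = y - tower_y
    return math.sqrt(arm_length**2 - dx * dx - dy * dy) + z
```

When the nozzle sits directly under a tower the horizontal terms vanish, and the carriage must be exactly arm_length above the nozzle.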
Stepper motor acceleration limits
With delta kinematics it is possible for a move that is accelerating in cartesian space to require an acceleration on a particular stepper motor greater than the move's acceleration. This can occur when a stepper arm is more horizontal than vertical and the line of movement passes near that stepper's tower. Although these moves could require a stepper motor acceleration greater than the printer's maximum configured move acceleration, the effective mass moved by that stepper would be smaller. Thus the higher stepper acceleration does not result in significantly higher stepper torque and it is therefore considered harmless.
However, to avoid extreme cases, Klipper enforces a maximum ceiling on stepper acceleration of three times the printer's configured maximum move acceleration. (Similarly, the maximum velocity of the stepper is limited to three times the maximum move velocity.) In order to enforce this limit, moves at the extreme edge of the build envelope (where a stepper arm may be nearly horizontal) will have a lower maximum acceleration and velocity.
Klipper implements extruder motion in its own kinematic class. Since the timing and speed of each print head movement is fully known for each move, it's possible to calculate the step times for the extruder independently from the step time calculations of the print head movement.
Basic extruder movement is simple to calculate. The step time generation uses the same formulas that cartesian robots use:
stepper_position = requested_e_position
Experimentation has shown that it's possible to improve the modeling of the extruder beyond the basic extruder formula. In the ideal case, as an extrusion move progresses, the same volume of filament should be deposited at each point along the move and there should be no volume extruded after the move. Unfortunately, it's common to find that the basic extrusion formulas cause too little filament to exit the extruder at the start of extrusion moves and for excess filament to extrude after extrusion ends. This is often referred to as "ooze".
The "pressure advance" system attempts to account for this by using a different model for the extruder. Instead of naively believing that each mm^3 of filament fed into the extruder will result in that amount of mm^3 immediately exiting the extruder, it uses a model based on pressure. Pressure increases when filament is pushed into the extruder (as in Hooke's law) and the pressure necessary to extrude is dominated by the flow rate through the nozzle orifice (as in Poiseuille's law). The key idea is that the relationship between filament, pressure, and flow rate can be modeled using a linear coefficient:
pa_position = nominal_position + pressure_advance_coefficient * nominal_velocity
See the pressure advance document for information on how to find this pressure advance coefficient.
The basic pressure advance formula can cause the extruder motor to make sudden velocity changes. Klipper implements "smoothing" of the extruder movement to avoid this.
The above graph shows an example of two extrusion moves with a non-zero cornering velocity between them. Note that the pressure advance system causes additional filament to be pushed into the extruder during acceleration. The higher the desired filament flow rate, the more filament must be pushed in during acceleration to account for pressure. During head deceleration the extra filament is retracted (the extruder will have a negative velocity).
The "smoothing" is implemented using a weighted average of the extruder position over a small time period (as specified by the pressure_advance_smooth_time config parameter). This averaging can span multiple g-code moves. Note how the extruder motor will start moving prior to the nominal start of the first extrusion move and will continue to move after the nominal end of the last extrusion move.
smooth_pa_position(t) = ( definitive_integral(pa_position(x) * (smooth_time/2 - abs(t - x)) * dx, from=t-smooth_time/2, to=t+smooth_time/2) / (smooth_time/2)^2 )
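The formula above is a triangular-weighted average of the pressure-advance position. As a numeric illustration of the formula (not how Klipper actually computes it), it can be approximated with a midpoint-rule integration:

```python
def smooth_pa_position(pa_position, t, smooth_time, samples=1000):
    # Approximate the weighted-average integral above: the triangular
    # weight (smooth_time/2 - abs(t - x)) integrates to (smooth_time/2)^2,
    # so dividing by that normalizes the average.
    half = smooth_time / 2.0
    dx = smooth_time / samples
    total = 0.0
    x = t - half + dx / 2.0
    for _ in range(samples):
        total += pa_position(x) * (half - abs(t - x)) * dx
        x += dx
    return total / half**2
```

A constant input passes through unchanged, and a linearly changing position is unchanged at its midpoint, which is what makes this a smoothing filter rather than a lag.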
March 2022: Congratulations to Felicity Hsu for passing her Preliminary Exams!
January 2022: Rachel is named to the List of Teachers Ranked as Excellent by their Students
November 2021: Postdoc Surabhi Sonam joins the lab. Welcome!
September 2021: Congratulations to Anish Bose for passing his Preliminary Exams!
August 2021: Smith-Bolton lab tech Angel Martinez is awarded Outstanding Poster Presentation at the Illinois Summer Research Symposium for his work in the Nowak lab.
August 2021: Rachel is named Assistant Director of the School of MCB for Diversity, Equity, and Inclusion
May 2021: Le’Mark Russell is awarded the Abbvie/Black Business Network summer research fellowship
January 2021: Rachel Smith-Bolton is named to the List of Teachers Ranked as Excellent by Their Students
January 2021: Yuan Tian‘s work on chromatin remodeling and the role of SWI/SNF in regeneration is published here .
December 2020: Snigdha Mathure joins the Smith-Bolton lab! Welcome!
June 2020: Yuan Tian defends her thesis! Fantastic seminar. Congratulations Yuan!
May 2020: Nicholas Griffith graduates with Highest Distinction, with an Undergraduate Research Achievement Award, and is awarded the Roderick MacLeod Award for Academic Excellence
May 2020: Le’Mark Russell is awarded a CDB summer research fellowship.
April 2020: Rachel Smith-Bolton and Don Fox’s review on Drosophila as a model system for regeneration is published here
January 2020: Rachel Smith-Bolton is named to the List of Teachers Ranked as Excellent by Their Students
December 2019: Anish Bose and Felicity Hsu join the Smith-Bolton lab. Welcome!
December 2019: Congratulations to Syeda Nayab Abidi for being awarded the Outstanding Graduate Student Award in the Department of Cell and Developmental Biology.
December 2019: Ewelina Nowak and Nicole Sazonov graduate with Distinction
July 2019: Nayab Abidi defended her thesis! Yay!
May 2019: Congratulations to Nicholas Griffith for being named 2019 Jenner Family Summer Research Fellow
May 2019: Congratulations to Matthew Contreras for graduating with High Distinction, being awarded a Cell and Developmental Biology Undergraduate Research Achievement Award, and receiving NIH funding to work as a technician in the lab for the next year.
May 2019: Congratulations to Syeda Nayab Fatima Abidi for being awarded the Cell and Developmental Biology Outstanding Teaching Assistant Award
August 2018: Rachel Smith-Bolton is promoted to Associate Professor with tenure, and named UIUC’s I.C. Gunsalus Scholar
June 2018: The lab has received a diversity supplement from NIH NIGMS to support undergraduate research.
April 2018: Congratulations to undergraduate Aria Darbandi for receiving an Undergraduate Research Achievement Award.
April 2018: Former undergraduate Amanda Sul will be attending the PhD program at Scripps. Congratulations Amanda!
March 2018: Nayab Abidi’s paper on cell fate changes caused by the commonly used Dll-GAL4 line is published in Scientific Reports here
December 2017: Undergraduate Aria Darbandi graduates with Distinction.
August 2017: Amanda Brock is chosen to give a Hilde Mangold talk at the annual meeting of the Society for Developmental Biology.
July 2017: Sumbul Khan’s paper on the transcriptional profile of regenerating tissue and a positive feedback loop that regulates regeneration signaling is published in PLoS Genetics here
May 2017: Amanda Brock’s paper on the genetic screen and the role of Cap-n-collar/Nrf2 in regulating regeneration is published in Genetics online here.
Undergraduate Benjamin Wang is awarded the 2017 Undergraduate Research Achievement Award
April 2017: Congratulations to Keaton Schuster for being awarded the Oyetunji A. Toogun Memorial Award for excellence in Cell and Developmental Biology research
June 2016: Sumbul Jawed Khan’s and Keaton Schuster’s review on Regeneration in Crustaceans and Insects is published here
May 2016 Amanda Sul graduates with High Distinction – congratulations!
April 2016: Sumbul Jawed Khan’s and Syeda Nayab Fatima Abidi’s joint paper on sorting and isolating blastema cells is published online here
April 2016 Undergraduate Amanda Sul is awarded the 2016 Undergraduate Research Achievement Award.
April 2016 Graduate Student Syeda Nayab Fatima Abidi is awarded “Best Poster” at the Midwest Regenerative Medicine Conference.
March 2016 Undergraduate Benjamin Wang is awarded a Summer Undergraduate Research fellowship.
October 2015 Lab manager Andrea Skinner’s paper is published online: http://dev.biologists.org/content/142/20/3500
October 2015: Amanda Brock has been selected to give a talk at the Midwest Drosophila Conference.
June 2015: Graduate student Keaton Schuster’s paper is published online: DOI: http://dx.doi.org/10.1016/j.devcel.2015.04.017
June 2015: Yuan Tian has been awarded a travel grant to the SDB meeting in August.
May 2015: Congratulations to Mabel Seto, who has been awarded an Undergraduate Research Achievement Award for her Senior Thesis and will be graduating with High Distinction. Mabel will be attending graduate school at Vanderbilt University in the fall.
Also, congratulations to Amanda Sul for being awarded a Summer Undergraduate Research Fellowship.
April 2015: Keaton Schuster has been selected to give a talk at the Gordon Research Seminar preceding the Gordon Research Conference on Tissue Regeneration.
March 2015: Rachel Smith-Bolton has been awarded an Arnold O. Beckman Research Award.
March 2015: Andrea Skinner gave a selected platform talk at the Annual Drosophila Research Conference in Chicago.
March 2015: Congratulations to Amanda Sul for being awarded a Summer Undergraduate Research Fellowship.
November 2014: Congratulations to Yuan Tian, who was awarded Outstanding Talk at the Midwest Drosophila Conference.
October 2014: Keaton Schuster and Yuan Tian have been selected to give talks at the Midwest Drosophila Conference.
October 2014: Andrea Skinner has been selected to give a talk at the Midwest Society for Developmental Biology Meeting in St. Louis.
June 2014: Rachel Smith-Bolton’s review “Drosophila Imaginal Discs as a Model of Epithelial Wound Repair and Regeneration” is published online for Advances in Wound Care. http://online.liebertpub.com/doi/abs/10.1089/wound.2014.0547
May 2014: Congratulations to Mabel Seto and Amanda Sul for being awarded Summer Undergraduate Research Fellowships for their work in the Smith-Bolton lab.
September 2013: Andrea Skinner’s work prior to joining the Smith-Bolton lab is published. http://www.ncbi.nlm.nih.gov/pubmed/23798316
June 2013: Sumbul Jawed Khan’s PhD work is published. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3683752/
May 2013: Congratulations to Cristina Gratton for being awarded a Summer Undergraduate Research Fellowship for her work in the Smith-Bolton lab.
May 2013: Congratulations to Brenten Popiel and Peter deJongh for graduating from UIUC.
January 2012: Rachel Smith-Bolton is awarded the Roy J. Carver Young Investigator Award.
When Gamification Goes Wrong (And It Often Does)
Gamification – for better or worse, it’s a common subject when talking about almost all products these days, especially learning products. I know we get asked about it quite a bit, but what commonly surprises me is the range of questions we get on the matter. These can range from the high level (“What is your gamification strategy?”) to the extremely detailed (“Do you have a leaderboard in your product?”).
What’s surprising about that range? It’s that questions like the latter imply an already-developed opinion on which gamification features work best for the learning product the buyer is looking for. Let’s look at why that approach may not be the best choice.
Strategy vs Implementation Details
When assessing the gamification capabilities of products, it seems many buyers like to focus on the details, or features, employed by the product. However, common gamification features like leaderboards, badges, and levels are simply implementation details of the larger gamification strategy baked into the product. And what’s most important to the potential success of the product is that strategy. Without the right strategy, the features are largely useless – they’re tools with no real purpose.
Not all products employ the same gamification strategy. At a high level, most strategies can be united under a single purpose – to encourage engagement with and use of the product – but there’s a slew of critical details underneath this initial layer that define the whole strategy. And it’s these details that will make or break a product’s take on gamification.
Collaboration vs Competition
Collaboration and competition are typically the two major themes at the heart of any gamification strategy, but the two are dramatically different. Thus, when assessing products, it’s best to, at least, have an idea on which theme you’d prefer.
Some products want as much competition among their users as possible. This is commonly seen in products that prefer quantity over quality – the more the user performs and/or the more the user interacts with the product, the higher their rank becomes. This is a common theme in video games, for example, where the creators want as many users playing the game as possible. The more you play, the more your talent develops, and the higher you’ll climb in the rankings.
A more common enterprise example might be a product employed by the sales team to rank team members on the amount of sales made that quarter. Again, for a mature enterprise product, quantity is key here – more sales, and thus more revenue, is usually seen as a good thing – so at any time, a salesperson can log in and see how they compare with their peers. Near the bottom? Uh-oh, better work overtime!
Encouraging collaboration, however, usually requires different means. In products where collaboration is key, quality trumps quantity – use the product as much as you like, but be sure what you’re contributing is for the greater good. Such products commonly employ a similar strategy in which the product’s user base provides the input that determines the quality of a single user’s contributions and thus their ‘rank’ on the site. Contrast this with the examples above, where input from the user base wasn’t even a factor, and you’ll start to see the drastic difference between these two themes.
You don’t have to look far for examples where collaboration is of critical importance – Quora, Hacker News, Reddit, and even Facebook. All of these sites employ a similar gamification strategy – users contribute something, and other users ‘upvote’ or ‘like’ those contributions, which awards the user points. In many sites, it’s as simple as that. Often, you won’t even find a ranking of the site’s users based on these points, which may lead you to then ask “what’s the significance of those points then?”. Well, it’s largely relative to the user. Most users simply feel good when others provide positive feedback on their contributions and don’t really care if it’s their first few points or if they’re the top contributor in the entire site. Positive peer feedback is simply justification for the effort you gave to the site. We all like to get likes on our Facebook posts, right?
Quora doesn’t even tell you how you compare to other users. They simply focus on how engaging and effective your contributions are. What you do with that info is up to you!
Now, back to those implementation details
I say all this to hopefully encourage you to reflect on your current gamification opinions and ensure you have the strategy in mind and not so much the details. Why? Well, there are a lot of products out there that have a poor gamification strategy but encompass almost identical implementation details. Leaderboards, levels, badges, certifications – they’re implemented in a ton of products, but only a small handful get them right due to the right strategy. And if your focus is on whether a product has a leaderboard or not, you may miss the fact that the product simply implemented the wrong strategy to begin with. In this case, no tool or feature will right their wrong.
And what of the products that have implemented the wrong strategy? What impact could that have on their users? When a product employs a poor gamification strategy, it often has the opposite of the desired effect: instead of motivating use of the product, it demotivates use. Typically, the competition-based strategies are the easier ones to get wrong, since a) only certain users are motivated by competition and b) they tend to use more features to power their strategy, and more is certainly not always better (points + leaderboards + badges + levels…let’s do it all!). Thus, if the competition angle is either poorly done or simply over-executed with feature after feature, the majority of users – except that small subset motivated by any and all competition – feel alienated instead of inspired.
So in short, focus more on the underlying strategy, not so much on the features. In today’s product landscape, where almost every product touts some ‘gamification’ angle with almost identical feature sets, the devil is most certainly in the details. And being sure to assess those details could be the difference between a product that sends users away with rolled eyes and one that has a lasting impact on your organization’s users.
I’m a web developer, teacher and entrepreneur based in Iceland.
Check more info out below 👇
- Tech Skills
- Other Skills
- Personal Interests
To get in touch please write to me at email@example.com 🙌
Quick intro about me:
I’m a Full Stack Web Developer with over a decade of experience in the industry. I have worked as a freelancer, in NGOs, and in small and medium-sized companies; ran my own web studio in Iceland (Brisa) for 4 years; and co-founded a tourism startup in Vietnam (Conbeo).
My most recent work experiences were re-imagining Tækniskólinn’s Web Development course to be a very efficient learning platform and creating de Linde’s website in collaboration with the Danish studio Limbo.
I seek to join a high-energy team of professionals to create beautiful and useful products with modern web technologies.
At the moment I'm open to talking and discussing possibilities, so please reach out via firstname.lastname@example.org. Hope to hear from you soon :)
- Layout (HTML + modern CSS)
- Solid Linux experience
- Apache / Nginx / TLS
- Vercel / AWS (EC2, S3) / Google Cloud / Digital Ocean / Netlify / Heroku
- Networking knowledge
- Python / Django / Flask
- Node / Express / pm2
- MySQL / Postgres / MongoDB
- Continuous Integration
- Wordpress / REST API / Custom Gutenberg Blocks
- SEO Optimization
- Information Architecture
- Can design things like this site and collaborate with designers quite well
- Code testing
- Continuous Integration
- Creative coding
I like to tackle problems in a systematic and predictable way, without inflating the solution unnecessarily.
Thanks to extensive experience, I’m able to learn new Web Development-related technologies on demand.
I typically raise ideas about the projects and work environment in order to improve them.
I have an easy time stepping into other people’s shoes, which helps with being understanding and solving a wide variety of issues.
I have been a Web Development instructor since 2016 and am co-creator of an entire Web Development course. More about that here.
I co-founded 2 companies. Making ideas become reality is a challenging and exciting concept to me.
2021 and on
2018 - 2021
2014 - 2018
2013 - 2014
Moved to Iceland to work at Locatify, where I developed a mobile app called Goldworm, which allowed kids to create their own interactive books.
I also coded features for their indoor positioning system.
2007 - 2009
Earned my B.Tech. degree in Computer Networks, worked with AIESEC in a range of positions, organized EESL, a student-led series of conferences about Open Source Software.
Art for all (List fyrir alla) has been working with Pedro from when our project was established in 2016. He designed our website, which is constantly in progress; needs additions and changes frequently, and there he is always positive and helpful in all communication and solves all our requests with a smile.
He is extremely good at connecting with new projects and suggestions, where he shows his enthusiasm and great artistic insight and therefore he has become a valuable team member to us from the very beginning and all the way through.
Pedro is a person who transmits calm and reassurance, no matter how complex the task seems to be, executing it with competence and creativity.
He is a great person to ask for suggestions and evaluations. Much of this is due to his interest and knowledge in very diverse areas.
Few times have I met people who recognized their teammates' potential so well, making him always a strategic presence in planning and projects.
I have been Pedro’s supervisor for several months. I found him to be consistently pleasant, tackling all assignments with dedication and a smile.
Besides being a joy to work with, Pedro is a take-charge person who is able to present creative ideas and communicate the benefits. He has successfully developed several research plans for our company that have resulted in new visions.
Though he was an asset to our marketing efforts, Pedro was also extraordinarily helpful in other areas of the company, like our technical department.
I highly recommend Pedro for employment since he is a team player and would make a great asset to any organization.
Pedro is a focused, detail-oriented person with great interpersonal skills. He is always willing to learn and teach.
Working in partnership with him was productive and enriching. His good mood is a remarkable feature.
I have played the guitar since my teenage years and have been looking for a band to join lately. Wanna jam? :)
Also, owing much to the influence of my friend Murilo Polese, I have been dabbling in Ableton Live and learning about synthesizers.
Early in 2020 I bought a Fujifilm X-T20 and have been having a fantastic time capturing some of the beautiful things I see around.
Check my pics on Unsplash!
Drawing & Painting
Admittedly, I’m terrible at drawing. To counter this small issue, I love creating pictures that don’t require this skill, like creating patterns.
I’ve been experimenting with pencil, pen, watercolor and digital creation.
The questions with answers can amaze, but the ones without any make you stare into the void and wonder: why? 🔮
If you’ve just entered the world of coding, or even if you have a year or two of experience under your belt, you may be wondering how to become a better programmer and take your skills to the next level.
Code can be complex and difficult to write and maintain. If you’ve found yourself staring at your computer screen, trying to work out the same bug for hours on end, you may find yourself asking: “Does this get any easier?”
Many expert coders will say “Yes” — with time and more experience, coding does get easier.
Others, however, may say “No,” the problems are still complex — but it does become quicker as you get more familiar with the language and find ways to solve problems with fewer lines of code.
Luckily, there are a few tips to help you improve your coding skills and become a better programmer.
If you’re wondering how to become a better programmer, one of the best things you can do is practice coding.
There are many resources online where you can practice code in online simulations. However, many programmers will say that coding in action is quite different and will be more beneficial for sharpening your skills.
A few of the many benefits of practicing coding include:
One of the best ways to improve as a programmer is to ask those with more experience to review your code.
As a programmer, your team may have minimum requirements for review where one or a few people will have to ensure the code is functional and doesn’t have any bugs.
Go above and beyond any requirements and get additional eyes on your code. Every person you have review your code is a learning opportunity.
Don’t be afraid to ask people with varying levels of programming experience to review your code.
People who have slightly more experience may be able to offer tips and tricks they learned a year or two ago to help you avoid making the same mistakes. Meanwhile, those in engineering manager roles can offer advice that may help you think outside the box and vastly improve your skillset.
Often, there is more than one way to solve a problem.
A mediocre programmer who writes 100 lines of code to solve a problem may sound productive — but an expert may be able to solve the same problem in 10 lines.
Both a long and short code sequence may have the same effect, but an expert programmer will know tips and tricks to get it done in far fewer lines with a different structure.
Shorter code can be just as challenging and complex for an experienced developer as a longer solution. However, it’s all about making it executable and easier for fellow programmers and computers alike to read.
And not only is it more work for others to read and review more lines of code, but it also means more room for bugs — which take more code to fix.
Remember: What’s most important is readability. Look for places where you can condense code into fewer lines, but don’t try to squish four to six lines into one massive line that’s impossible to read and understand when you or others come back to it.
While hands-on experience is essential for coding, that doesn’t mean you can’t learn tips and tricks to solve problems in fewer lines and make your job easier and faster.
A few great free or inexpensive resources from experts to improve your programming skills include:
For example, Codecademy offers free coding classes in 12 different languages; a community section with forums and chatting; a resource section with helpful documents, cheat sheets, blogs, and videos; and much more.
If you’re at all familiar with programming, you’ve likely heard of the DRY principle — Don’t Repeat Yourself.
It may be tempting to copy and paste more generic lines of code and modify them slightly.
However, repeating code can cause a few problems:
Not only is the DRY principle helpful for saving time with bugs and maintenance, but it is cleaner and challenges developers to come up with unique code. The best developers rarely ever repeat code.
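As a tiny illustrative sketch (hypothetical names), compare a repeated version with a DRY one:

```python
# Repeated: the same tax rule lives in two places, so any rate change
# must be made twice - and a missed spot becomes a bug.
def book_price_with_tax(price):
    return price + price * 0.07

def food_price_with_tax(price):
    return price + price * 0.07

# DRY: one function, one place to change.
def price_with_tax(price, rate=0.07):
    return price + price * rate
```

The DRY version also documents intent: the rate is an explicit parameter rather than a magic number scattered through the codebase.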
Functional programming (FP) means using pure functions when writing code.
Pure functions do not mutate state, so they are guaranteed not to have side effects. This makes them easy to test, debug, and reason about in a complex code base.
Some programmers find it difficult to comprehend functional programming. However, many available frameworks can help make it easier, like functional React components with hooks.
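A minimal Python sketch of the difference between a pure and an impure function (hypothetical names):

```python
# Impure: mutates its argument, a hidden side effect callers must know about.
def add_item_impure(cart, item):
    cart.append(item)
    return cart

# Pure: returns a new list and leaves the input untouched, so the result
# depends only on the arguments - easy to test and reason about.
def add_item_pure(cart, item):
    return cart + [item]
```

The pure version can be called any number of times with the same input and always yields the same output, which is exactly what makes it simple to unit-test.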
Compared with imperative programming, declarative programming explicitly states what a result should be.
If you’re trying to change a color, for example:
Declarative programming is more straightforward and tells a computer the result without specific directions on how to get there.
Imperative programming, meanwhile, includes specific instructions to achieve the desired result, rather than just stating what that result is like declarative programming. Imperative programming requires more extensive knowledge of possible starting values, and more testing to achieve the correct result.
Declarative programming is inclined toward immutability, which means fewer bugs to work out.
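To make the contrast concrete, here is a color-filtering task written both ways in Python (an illustrative sketch):

```python
colors = ["red", "green", "blue", "green"]

# Imperative: step-by-step instructions for how to build the result.
highlighted = []
for color in colors:
    if color == "green":
        highlighted.append(color.upper())

# Declarative: state what the result should be and let the language
# handle the iteration.
highlighted_declarative = [c.upper() for c in colors if c == "green"]
```

Both produce the same list, but the declarative comprehension has no intermediate mutable state to get wrong.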
While some may think that the best way to become a better programmer is to learn one language and learn everything there is to know about it, there are a lot of benefits to learning multiple coding languages.
Some of the most popular programming languages include:
The languages you may use to code will often depend on the type of programming you’re doing:
No matter the language you start coding with, learning other languages will help you creatively solve problems and make your skills more applicable across different programming use cases.
If you’re wondering, “How long does it take to get good at coding?” there’s no one solid answer. A lot of how to improve your coding skills and go from beginner to expert depends on how you code.
When starting from scratch, coding boot camps often take 3 to 9 months, many self-study taught programs take 6 to 12 months, and a computer science associate’s or bachelor's program typically takes 2 to 4 years.
However, many experienced coders say that because of the problem-solving nature of coding and constantly evolving frameworks and languages, you never stop learning.
If you continue using the same functions and language, copy code in multiple places, and only have the minimum number of people review your code, it will take a lot longer to make noticeable improvements.
If you look to experts for advice, get as many eyes on your code as possible, and work to create unique code that solves problems in fewer lines, you can consistently improve and continuously implement your new skills as you learn and grow.
Common or mainland tiger snake (Notechis scutatus), black or island tiger snakes (N. ater)
Tiger snakes are found in the temperate areas of southern Australia, including Tasmania, where they are particularly large and venomous. Identification of tiger snakes by the presence of stripes is unreliable, since this varies with the seasons and the maturity of the snake, and there is also an unstriped black species (N. ater). Several other venomous and non-venomous Australian snakes may also be striped. Features of tiger snake envenomation include neurotoxicity (caused by pre-synaptic and post-synaptic neurotoxins), coagulopathy and rhabdomyolysis.
Mainland Tiger Snakes (Notechis scutatus)
Tiger snakes are solidly built, with broad, flattened heads. When disturbed, tiger snakes may flatten their necks in a threat display. They usually strike low to the ground. Average length is approximately 1m. Adults are usually banded, and colour may vary from pale yellow to almost black. Occasionally, the bands may be absent, leading to difficulties with identification. The fangs are usually around 3.5 - 5mm in length. Tiger snakes are ovoviviparous (holding the eggs in their abdomen until they hatch, and giving birth to live young). The average litter size is around 30. Tiger snakes are active on warm summer evenings, and are attracted to farms and outer suburban houses, where they hunt mice and rats, and where they may be trodden on by unwary people barefoot at night. They prefer swampy or marshy ground, and hunt frogs around creeks, rivers or dams.
This snake is distributed along the southeastern coast of Australia, including Victoria, eastern New South Wales, part of South Australia and Tasmania. This includes many of the most populous areas of Australia, and tiger snake bite is currently one of the most common snake bites in Australia, along with brown snake.
The venom is produced in large quantities, the average yield being around 35mg. The record yield was 180mg. Tiger snake venom contains pre-synaptic and post-synaptic neurotoxins, myotoxins and procoagulants. Bites result in paralysis, incoagulable blood and muscle damage, which may lead to renal failure.
Black Tiger Snakes (Notechis ater)
Black or island tiger snakes are quite distinct from mainland tiger snakes. Most black tiger snakes prefer marshy areas, and are active in the day. Island tiger snakes eat mostly mutton birds, and often use their burrows.
Notechis ater are usually black, with paler abdomens. Banding is sometimes seen in Western Australian subspecies, but is uncommon in other subspecies. Size is highly variable. Chappell Island tiger snakes are generally the largest, and may reach 2.4m. Krefft's tiger snake rarely exceeds 0.9m. Young black tiger snakes are born alive. Litters vary from 20 to 30.
Most live on islands off the south coast of Australia and Tasmania, although some have a limited range on the mainland. Black tiger snakes (Notechis ater occidentalis) are found in the southwest corner of Western Australia, and Krefft's tiger snakes (Notechis ater ater) live in a small area of the Flinders Ranges in South Australia. Peninsula tiger snakes (Notechis ater niger) are found on the Yorke and Eyre Peninsulas, Kangaroo Island and neighbouring islands. Notechis ater humphreysi is found on King Island and most of the Bass Strait islands, and the large Notechis ater serventyi is found on Chappell Island and Badger Island.
The Chappell Island tiger snake is the most prolific venom producer of all the black tiger snakes, with an average venom yield of 74mg and a maximum recorded yield of 388mg. Victims of envenomation by this snake should initially be treated with double the usual dose of antivenom, due to the copious amounts of venom produced. Chappell Island tiger snake venom is less toxic than that of the mainland tiger snakes, whereas the venom of the South Australian island tiger snakes is more toxic than the mainland's, with a similar average yield of 34mg. The components of the venom are similar to those of the mainland tiger snakes, causing paralysis, clotting deficits and muscle damage.
|
OPCFW_CODE
|
In this Blog
The Spamhaus Project view of these services
Over the past several years we have encountered several businesses offering email verification services. We have met them in industry social circles, sometimes they get tangled in our lists, and often we're asked by senders and receivers alike: "what does Spamhaus think of verification services?"
The idea of the verification service is to determine whether an email address exists, before it is used for transactional or bulk email. That helps avoid undeliverable messages which can trigger spam-blocking actions by the receiving system. It does nothing to verify the permission of the recipient to accept a subscription, which is the most important step in avoiding spam when acquiring addresses for bulk emailing lists, nor does it ensure that the email address owner is the same as the person making the transaction. That means that transactional mail like receipts, tickets or vouchers could be sent to the wrong person, yet the address won't bounce because it was verified to exist.
SMTP includes commands called "VRFY" and "EXPN" which do exactly what verification services offer. While those two functions are technically different, they both reveal to a third party whether email addresses exist in the server's userbase. Nearly every Postmaster (mail server administrator) on the Internet has turned off VRFY and EXPN due to abuse by spammers trying to harvest addresses, as well as a general security and privacy measure required by most network's operational policies. In fact, since about 1999 or before, all mail servers are installed with those off by default. That should give a clear indication to email verifiers about the opinion of Postmasters of the service they intend to offer. Doing verification against systems that have disabled those functions, whether successful or not, constitutes an attempted breach of the receiver's security policies and may be considered a hostile act by site administrators. Sending high volumes of verification probes without an attempt to actually send an email will often trigger filters or firewalls, thus invalidating the data and impairing future verification accuracy.
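A verification probe typically drives the SMTP dialog only as far as RCPT TO and reads the reply code, never proceeding to DATA. The sketch below is a hypothetical illustration of how such a service might interpret reply codes; real servers vary widely, and many (for exactly the reasons above) deliberately give no reliable signal at all.

```python
# Hypothetical sketch: classify the SMTP reply to a RCPT TO probe.
# The mapping is illustrative, not how any particular verifier works.
def classify_rcpt_reply(code):
    if 200 <= code < 300:
        return "accepted"   # address appears to exist
    if code in (550, 551, 553):
        return "rejected"   # server says no such user
    if 400 <= code < 500:
        return "tempfail"   # greylisting or rate limiting: no information
    return "unknown"
```

Note that a "tempfail" tells the verifier nothing, which is one reason probe data degrades once a receiver starts filtering the probing IPs.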
Listwashing and spamtrap washing services help spammers. They do not help list owners who have done proper opt-in acquisition and list maintenance, nor do they help mail receivers (including both Postmasters and mailbox owners) who rely on spamtrap data to keep spam off their servers and out of their mailboxes. These services might be of marginal help for point-of-sale typo'd addresses (but it won't catch many typo traps, despite tricks the verifiers use to detect those) or in the few edge cases of list owners who have not been as diligent as they should in address acquisition and list maintenance, but many of those cases need to take much stronger list hygiene measures than simply verifying the existence of addresses. (They need to remove bounces, non-responders and bad list segments and imports, and then do a permission pass over the remaining addresses, keeping only those whose owners subscribe to that offer.) Email verification services must operate with a strong policy which prohibits listwashing, trap washing and related spam support services, and which avoids clients who seek those services.
Considerations if attempting to offer a verification service
So, let's say that despite all the problems and objections to email verification as a business model, you decide to offer such a service. What are some things you should consider?
You need to properly identify your company, including your domain and your IP addresses. Your reputation is easier to build and stronger when it is based on a single domain and a single IP address, or a small and contiguous range of IP addresses. Using lots of domains and lots of IP addresses - like spammers do! - and not properly identifying your company in whois, PTR, SPF, DMARC and other network records is the mark of someone sneaking around... like spammers do! Poor reputation of your domains and IP addresses will get you blocked at many receivers, and possibly at Spamhaus.
If a Postmaster doesn't want you snooping around their server, and blocks or rate-limits your domain or IP addresses, respect their choice and don't try to sneak around their blocks. Changing IP addresses or domains is sneaky and evasive. Evading such filters is spammy on the sender/verifier's part, will be seen as intentionally so, and will result in larger, higher, stronger barriers being put in place. You may wish to contact them, ask what's up and whether there's anything you can do to regain their trust, but do that honestly and forthrightly.
Do not offer listwashing and spamtrap removal services. If a customer asks about those, that should raise a big caution flag with you. Why do they need those things if they collected their mailing list properly and honestly in the first place?
Offer a true Confirmed Opt In (COI) service so that list owners not only verify that customer addresses exist, but also verify that the sender has each and every mailbox owner's permission to send it bulk email. This could relate to the "permission pass" method of list repair, as mentioned above and referenced below.
"Let me tell you about my business model..."
Some of the things we've heard from verifiers, and our response:
"Our clients must have their own dedicated setup and need separate domains and IP addresses due to their company policy or privacy reasons."
The context of that quote was the verifier's use of many domains and IPs, both poorly identified - nonsense domains, anonymized whois, generic or no rDNS, no IP address netblock SWiPs, etc. It looked like a typical snowshoe spammer setup.
ESP clients are often well-advised to use dedicated IP addresses and their own domains because they have their own branding with well-known domains and established reputations. Those dedicated domains and IP addresses should, of course, have proper network identification of the responsible party, and be static over time.
The branding and reputation factors are different for a verifier than for an ESP. It is simply not true that each verifier client needs its own separate domain and IP address. The verifier's email probe is essentially invisible as far as branding. No end-user ever sees any part of it. Only a Postmaster will notice the domain/IP in log files. The reputation attached to a probe connection is that of the verifier! Only by building their own good reputation will the verifier's probes have value. If they have low reputation, including unknown reputation, they may be blocked or temp-failed, neither of which helps the verifier deliver the product they sell.
There's an old joke in anti-spam circles that simply goes, "Let me tell you about my business model..." Period. It means that every spammer always has a reason that their business is special and doesn't need to follow good practices. Receivers don't care about your business model! They care about their servers, mailboxes and customers. Same here; it's your business and your reputation that you need to care about. Sure, you need to service your client's needs, but it's simply not true that they each need separate domains and IP addresses, and that does not scale in the reputation world. You need to establish your domain/IP address reputation. You need to explain to your customers that this is how it works, this is to their best advantage, this won't change their branding or perception to their customers, and this is how you do business.
"We are helping customers make sure that their end users are putting in the email addresses correctly so if the user accidentally mistypes their email address, they can correct it in real time. I will give you an example about this with one e-commerce customer. They called us and discussed with us the problem of bots creating fake signups and clogging their websites from multiple IP addresses throughout the world, over 20,000 signups a day. As soon as they put verification in place the fake signups stopped."
That is, indeed, an excellent example of why we do not flat-out say that verification is wrong. By stopping the bot attack, it stops the sender's attempts to deliver unwanted or undeliverable mail. Of course, it still does not confer the address owner's permission when their address does verify. That's where the sender needs COI, especially when acquiring addresses for on-going list email. But we do recognize that verification can have a role in a healthy email system. Real-time point-of-sale verification is another example which helps for single transactional email.
"But we don't send mail at all!"
Balderdash! We understand that you don't complete the SMTP dialog with "\r\n.\r\n" and "250 OK," or maybe you don't even go to DATA. You do use SMTP, and you even use it to circumvent disabled VRFY or EXPN services. You extract information about a Postmaster's users, which may be against their terms of service. You fill up their mail server's ports and logs with your connections. No, you don't put spam messages in end-user or spamtrap boxes (although bad actors doing verification for bad lists may result in increased user-received spam), but it's not honest to say you don't mail.
References and further reading:
https://tools.ietf.org/html/rfc2505 - Anti-Spam Recommendations for SMTP MTAs, February 1999
http://www.spamresource.com/2007/01/whatever-happened-to-vrfy.html - Whatever happened to VRFY?, January 2007
https://www.spamhaus.org/resource-hub/deliverability/confirmed-opt-in-a-rose-by-any-name - Confirmed Opt In: A Rose by Any Name, August 2008
https://www.spamhaus.org/news/article/734?article=734 - Subscription Bombing: COI, CAPTCHA, and the Next Generation of Mail Bombs, September 2016
Help and recommended content
Permission pass: what, how and when to use
Discover how to resolve IP and domain blocklisting issues caused by single-opt-in email lists with a Permission Pass strategy. Learn the intricacies of conducting a Permission Pass, ensuring compliance with COI standards and spam regulations.
Are you ready for the email authentication revolution?
Matthew Vernhout, NetCore Cloud's VP, Deliverability (ENSA), explains how new email authentication changes spearheaded by Yahoo and Gmail will impact your email strategy and what you can do to take proactive measures.
What is an email sunset policy and why do you need one?
A sunset policy is an essential component of any successful email program. Find out why a sunset policy is essential and what you need to do to effectively implement one.
|
OPCFW_CODE
|
Why does my google account not work?
I rooted my LG G3 awhile back and everything was fine. One day, I couldn't access my gmail or play store using the installed apps due to a connection error. I factory reset my phone and whenever I try to add my gmail account, I get an error that says "Can't establish a reliable connection to the server". How do I go about fixing this?
Note: I'm with Verizon on 4.4.2
Is there supposed to be anything other than the local host entry? I saw "..*. android.clients.google.com mtalk.google.com" in it. The asterisks represent numbers that I wasn't comfortable posting.
Related: Synchronisation of contacts and calendar suddenly stopped working
check date/time
If they are "too far off", it might cause problems with certificates – which then either appear "not yet valid" (with your date too far in the past), or "no longer valid" (with your date too far in the future). Most Google components such as Playstore, Gmail, Sync, use secure communications, and thus rely on certificates.
check the /etc/hosts file
Especially with Custom ROMs or on rooted devices in general, the file might have been altered. Some "ROM cooks" add the IP for Google's servers here "to speed things up" (avoiding "lookup overhead") – with the side effect being things break when those servers are "relocated".
So in that file, there might be multiple entries you need to comment out or remove (check first whether that's the case – if not, you can save yourself the trouble). Leave only the local host entry as is. You'll need to remount the system partition read/write to modify this file, which requires root access. This can be done either via adb shell from your computer, or with a terminal app directly on the device:
$ cat /system/etc/hosts # first check if the work below needs to be done. If so:
$ su
# mount -o remount,rw /system
# cd /system/etc
# cp hosts hosts.bak
# echo "127.0.0.1 localhost" > hosts
Now see if it works again. If not, you can always restore your original hosts file (note we copied it to hosts.bak). When done, don't forget to remount /system read-only again – either by a reboot, or with mount -o remount,ro /system.
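For illustration only (not part of the shell session above), the same cleanup could be scripted rather than edited by hand. This hypothetical sketch comments out any uncommented line that pins a Google hostname, while leaving the localhost entry untouched:

```python
# Hypothetical helper: given the text of a hosts file, comment out entries
# that hardcode Google hostnames, keeping everything else (incl. localhost).
def clean_hosts(text, suspect_domains=("google.com",)):
    out = []
    for line in text.splitlines():
        stripped = line.strip()
        if (stripped and not stripped.startswith("#")
                and any(d in stripped for d in suspect_domains)):
            out.append("# " + line)   # neutralize the pinned entry
        else:
            out.append(line)          # keep comments, blanks, localhost
    return "\n".join(out) + "\n"
```

Commenting out rather than deleting makes it easy to restore the lines later if needed.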
What does the date/time have to do with not being able to connect to a Google account? If it really does have something to do with it, you should probably explain. Editing the hosts file requires root; the original question does not mention that the device is rooted, so you should not assume that it is. And if they are rooted, what "entry (or 2)" do they need to have commented out, and why would those entries be there if they should be commented out?
He did a factory reset so the hostfile would be wiped anyways.
-for date and time see user186247 answer
-the lg phone was originally rooted and even if you factory reset, the phone still remains rooted.
-stop being snarky. if you aren't satisfied with my answers downvote me or provide a better one
The date and time is fine
It's your hosts file then. Those two entries that you have listed – put a # in front of each of them, like this:
# android.clients.google.com
# mtalk.google.com
Then try again. You may need to reboot your phone, but in most cases it should be OK.
Date/time might be an issue for certificate check (when too far off). @MatthewRead since when does a factory-reset modify /system? /etc is nothing but a link to /system/etc (usually). So only "root actions" change it. // Seth: To avoid "snarkyness", please note answers should provide some background on recommended actions ;) You might be right with the hosts file, though – I once had a comparable case.
@Seth, you are correct about the rooting, sorry, I missed that when i first read the question, and since that is the case, the hosts file could likely be the problem.
I had this issue and it was because my phones had year randomly changed to 1990, check your date and time settings.
|
STACK_EXCHANGE
|
Workshop as part of the Annual Meeting of the DGfS (German Linguistics Society)
in Saarbrücken, Germany, March 8-10, 2017
Register in Linguistic Theory: Modeling Functional Variation
Call for Papers
This workshop deals with the variationist modelling of register variation. The term register is used here to describe a variety of a language that is associated with particular functional or situational features, thus describing intra-speaker variation. Beginning with Labov’s (1966) seminal study, a large body of work on social, geographic, or historical variation exists, but register remains an understudied dimension of variation. Yet, its study is necessary to complement the notion of the (invariant) linguistic competence of an idealized speaker/hearer, as speakers clearly vary their behavior qualitatively and quantitatively in different circumstances.
This workshop adopts a variationist approach (Labov 1966) to the study of register. Variation exists on each linguistic level (phonology, morphology, syntax, lexicon, etc.). The essential idea of this method is that an abstract variable (V) can be expressed by different concrete variants (a, b, c, d, ….), e.g. one phoneme can be expressed by different allophones. Based on this methodology, registers can be identified statistically in a bottom-up manner: Their properties are reflected in the intercorrelation matrix with regard to a previously defined set of functionally relevant linguistic variables (Biber 1995).
We welcome contributions that build on qualitative and quantitative analysis of empirical data (corpora, elicitation, experiments, etc.) and that address at least one of the following questions: (a) How do individuals vary their linguistic behavior in different functional settings as speakers, and what kind of variation do they expect as hearers? (b) Which factors influence which aspects of this variation? (c) What do people know, implicitly and explicitly, about how to behave linguistically in a given situation? (d) How is register knowledge acquired? (e) How can register variation be modeled in linguistic theory? (f) How does register variation lead to language change? (g) Are there general principles underlying register variation across languages?
Benedikt Szmrecsanyi, Department of Linguistics, KU Leuven
Anke Lüdeling, HU Berlin
Aria Adli, University of Cologne
Authors should submit one-page abstracts (including references) in a 12-point font (e.g. Times New Roman) to register-dgfs2017 at uni-koeln.de. References should be formatted according to the APA guidelines. Talks will be given 30- or 60-minute slots including discussion, depending on the program. Please specify your preferred length in your submission. The workshop language is English for both abstracts and talks. According to DGfS regulations, speakers may present a paper in only one workshop.
- Submission of abstracts: 31.08.2016
- Notification of acceptance: 10.09.2016
- Workshop: 07-10.03.2017
|
OPCFW_CODE
|
November 07, 2004
I could have told you, Michael: This world was never meant for one so beautiful as you.
Michael Moore has finally broken his post-election silence with a post on his website. Apparently his followers have more than one thing in common with lemmings; he seems to suspect that they may be on the brink of committing mass suicide.
Ok, it sucks. Really sucks. But before you go and cash it all in, let's, in the words of Monty Python, “always look on the bright side of life!” There IS some good news from Tuesday's election.
He then lists 17 rather forlorn reasons for his followers to continue living. Powerline has enlisted the blogosphere's help in countering these arguments. I’ve read Powerline religiously throughout the election season, so I feel duty-bound to answer the call.
I'm a bit ambivalent about encouraging Moore's followers to go ahead and, "in the words of Monty Python," shuffle off their mortal coil, run down the curtain and join the bleedin' choir invisible. Oh, the Moorians are annoying, it's true, but it's my opinion that Bush couldn't have won the election without the Dems' warm embrace of Moore at their convention and Tom Daschle's literal embrace of Moore may have lost him just enough votes to end the career of everyone's least favorite hatchet-faced obstructionist.
So it is with great regret that I say: Michael, thanks to you and your followers for helping the GOP to victory. But your work here is done, so to counter your "17 reasons not to slit your wrists", I offer you:
- Be honest: Is life worth living under a Bushitler regime? Think of it: The Chimp’s smirking mug leering at you every day for four years… every day… and remember, dying only hurts for a minute. I’m just saying. (Every day! Even on NPR, you’ll hear his voice when they do the news! And on Morning Edition! Even on All Things Considered!)
- No more earnings to be taxed to fund fictitious wars fought by fictitious presidents.
- It'll definitively answer the question: "Bush lied; Who died?"
- It will show solidarity with the poor downtrodden Palestinians, whose highest goal in life has always been suicide in the service of defeating the Zionist war machine.
- If suicide is accomplished with firearms, it will serve to reinforce the thesis of Bowling for Columbine: Those craven Americans sure are obsessed with guns and killing.
- Fuel for several years' worth of moonbat conspiracy theories; doesn't it seem a bit too convenient that Bushitler's sworn enemies should all start killing themselves at once?
- Sudden demand for body bags would further increase petroleum prices, reinforcing the absurdity of Bushitler's war for cheap oil.
- Won't be around to feel the pain when Castro and Arafat depart this earthly plane. (Hurry, Arafat's in the departure queue!)
- It's the only way you can crash the Pearly Gates and get that hostile ambush interview with God.
- You can found a new PAC to support the mass suicide: PassOn.org.
- Must die soon, or Dan Rather won't be able to cover your death.
- Help George Soros make back some of the money he lost backing Kerry: give him a heads-up before you do it and he can game the dead pools.
- If you work quickly, you'll have time to prepare a place in Hell for the soon-to-arrive hordes of heroic Fallujah "Minutemen."
- You'll be right at home in Hell. It's a blue state.
- Shoo-in to have special montage created to honor you at next year's Cannes, to the tune of a melancholy rendition of "We Shall Overcome."
- You'll still be able to vote, especially in Chicago.
- Two words: President Giuliani.
(A new blog, IgnoreMoore, has countered Moore’s list with a point-by-point Fisking, in the unthinkable event that anyone finds the above list insufficiently persuasive.)
These are excellent reasons Michael Moore and his MoveOn Morons should take the plunge. I can't see a single objection they could possibly raise. But it leaves open an important question: how should Michael Moore do the deed?
With that thick hide, it's not that easy. Here are a few suggestions:
- Approach a mooring mast in a lightning storm. Make sure excitable reporters are around.
- Get Steve McQueen to spray you with CO2 and ship you to the North Pole.
- Implode into a singularity.
- Tighten that baseball cap on your cranium by one more notch. That should do it.
- Go to Africa, wear a gray jumpsuit and wave to poachers.
- Roll onto your head, so it smushes down into your roly-poly torso, suffocating you.
- Dress up as the local Republican headquarters.
- Call Dick Cheney's daughter a lesbian, but to her face.
- Visit the hog farm on slaughter day.
Posted by: Korla Pundit at November 8, 2004 10:24 PM
Dang, Korla! I just saw "Team America," and I must say they thought up a fitting demise for ol' Mikey. But you've really hit on some great stuff here. Especially the Steve McQueen method. I was afraid I was the only one who heard the theme from "The Blob" every time I saw the Moore Man-Mountain.
Posted by: EtherPundit at November 9, 2004 12:16 AM
- Go to Fallujah and sign up as a Freedom Fighter.
Posted by: Korla Pundit at November 15, 2004 01:04 PM
|
OPCFW_CODE
|
Scandinavian Software Park gathers software engineers at some of Scandinavia’s market leading SaaS companies that develop cutting edge products for a wide variety of industries across the world. The Park is founded by Swedish Monterro, a leading growth investor in B2B software.
We are looking for talented and passionately committed agile developers for Trapets – a European expert company providing cutting-edge services and systems for securities trading, anti-money-laundering surveillance and compliance – to work in our beautiful office in HANOI, VIETNAM. At Trapets, it is important to have a team spirit where we share knowledge, support each other and are driven by development both on the personal level and within the team and company.
Trapets is expanding strongly both nationally and globally and in the role you are given the opportunity to contribute to the company's continued development. Trapets is characterized by commitment, creativity and team spirit where you will be an important piece of the puzzle in the development of our products and its profile.
Do you want to contribute with us to the benefit of society and work to combat financial crime?
As a Frontend/.Net Web Developer, you will:
We offer an exciting developer role where you have great opportunities to influence your own work, role and future. At Trapets, we work agile and you will be part of one of our development teams and work in close collaboration with our business experts.
We offer you to be part of an exciting development journey with complex products that span several platforms / technologies and from advanced Web solutions to Backend services.
At a minimum, you should have a bachelor’s degree in Software Development or equivalent.
Have solid understanding of the full software development life cycle.
Have at least two (2) years' experience and a deep passion for technology and programming, in this case frontend development and the Microsoft stack.
We also believe that you:
Are experienced in .NET and C#
Are familiar with MVC and SQL
Have experience with Angular and/or React
Have excellent analytical problem-solving skills.
Are an unpretentious, communicative person who likes working in a team but is also independent and energetic.
Have strong verbal and written communication skills in English.
Other information
You will be:
Stepping on developing software as services for Regulatory Finance business in a leading company.
Working agile and being part of one of our development teams across countries and also working in close collaboration with our business experts.
Long-term developing your career path on the edge of Microsoft as well as modern UI technologies.
Living on our Scandinavian culture and office while working in Agile environment that has strong team spirit, openness, unceasing creativity and innovation.
What will you get:
Annual review and 13th month salary.
Flexible working hours: Monday to Friday, NO OVERTIME.
100% official salary during the probation period.
Premium healthcare and accident insurance: best healthcare plan covering employees and their children.
Wellness package to help employees stay healthy and well.
Exciting company outing/events and team building activities.
On-site and training opportunities in the Nordics.
Modern working environment.
Competitive salary and benefits
On-site opportunities in the Nordics
Work location
- 16th floor, Deaha Business Center, 360 Kim Mã, Ba Đình, Hà Nội; 19th floor, Peakview building, 36 Hoàng Cầu, Đống Đa, Hà Nội
Note: All posted information is the property of Scandinavian Software Park. We are only trying to bring you the fastest and most accurate information. If you find any inaccurate content, you can notify us via the contact window at the bottom right of the screen.
|
OPCFW_CODE
|
After that, I was flailing around a bit for a project idea; I had a bunch of different ideas floating around, like a soil moisture monitoring app (for checking up on my plants, left in the care of my husband back on the East Coast while I was at Hackbright in San Francisco), but wasn't super excited about any one idea. I told Christian that I wanted to do something with an Arduino or Raspberry Pi because I'd never worked with electronics at all before, and one of the things that had put me off computer science originally in high school is that I thought you had to be one of those people who could build computers from scratch.
That isn't true these days, but it seemed like it would be a very useful skill to pick up for projects with real-world physical usefulness (like this Raspberry Pi-powered cat feeder! I should build this for my parents, saddled with my cat that I can sadly never have at my own home due to my husband being allergic to cats). I am a very pragmatic person and while I very much enjoy pretty things (Pinterest, design/fashion blogs, etc.), I like cooking because I like eating, I like knitting/sewing for the creations that you end up with at the end, etc.--so for me, it's less about the process than about the end result being useful in some way.
I also told Christian that when it came to programming, the bit that I liked best was breaking down a problem to figure out the solution, and he intuited that once I had a proof of concept that it would work, I wasn't much interested in implementing the extra stuff, so I wasn't going to be too interested in something that was very frontend-heavy and required a lot of wrangling with HTML/CSS and such.
Given all that, he gave me a paper published in 2009 by computer scientists at Rice University called uWave: Accelerometer-based personalized gesture recognition and its applications. It was a bit tough to read through at first, but I got to the section on practical applications on the same day that a Hackbright classmate gave a talk on 2-factor authentication, and a light bulb went off: what the paper was talking about, 3-D motion-based passwords, would save me from the pain of 2-factor authentication!
We have to use 2-factor authentication at work and I've gotten stuck a number of times because I left my phone behind or forgot the device that provides that 2nd factor. A motion-based password would be something that's pretty easy for you to remember, but pretty hard for someone else to replicate exactly even if they had watched you do it, because their hand movements will vary enough that the authentication system should reject them.
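The uWave paper matches a new accelerometer trace against stored gesture templates using dynamic time warping (DTW), which tolerates exactly this kind of variation in speed while still rejecting traces with a different shape. Here is a minimal one-dimensional sketch of the DTW distance (real traces are three-axis and quantized first, so treat this as an illustration rather than the paper's implementation):

```python
# Minimal 1-D dynamic time warping distance: the lower the score,
# the more similar two traces are after optimally aligning them in time.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of best alignment of a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # skip a sample in a
                                 D[i][j - 1],      # skip a sample in b
                                 D[i - 1][j - 1])  # match both
    return D[n][m]
```

An authentication system built on this would accept a trace when its DTW distance to the stored template falls below a per-user threshold.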
So in summary, I wanted a project that had:
- a clear real-world use
- more backend than frontend
Anyway, I'll be writing a series of tutorials on how I did the project in the hopes that it will be helpful for future Hackbright students or other beginners to electronics projects.
|
OPCFW_CODE
|
We have recently published a paper in the Journal of Transport and Health where we modelled the impact on CO2 emissions of an increased uptake of active travel for the home-to-school commute. The paper is freely available to anyone under Gold Open Access, with a CC-BY Attribution license.
One of the challenges in this paper, building upon Singleton (2014), was being able to model individual routes from home to school for all ~7.5 million school children in England. In addition to origin and destination locations, we also know what modes of travel are typically used to get to school, thanks to the School Census (also known as the National Pupil Database). While modelling a small number of routes is relatively straightforward to perform within a GIS, the challenge was to complete the routing for all 7.5 million records in the data set.
To calculate the route, we used a combination of two different pieces of software – Routino and pgRouting. Routino allows us to use OpenStreetMap data to derive a road-based route from given start and end points, using a number of different profiles for car, walking, cycling or bus. The profile used is important, as it allows the software to take into account one-way streets (i.e. not applicable to walking, but applicable to driving), footpaths (i.e. applicable to walking only), cycle lanes, bus lanes, etc. The screenshot below shows an example route, calculated by Routino.
For railway, tram or tube travel, this was implemented using pgRouting from both Ordnance Survey and edited OSM data. The different networks were read into the PostgreSQL database, and routes calculated using the Shortest Path Dijkstra algorithm. This returned a distance for the route, which was stored alongside the original data.
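For readers unfamiliar with it, the Shortest Path Dijkstra algorithm that pgRouting provides works roughly as follows; the Python sketch below is purely illustrative (a toy network with made-up names and weights, not our actual code or data).

```python
import heapq

# Toy Dijkstra shortest-path sketch. pgRouting runs this over network
# tables inside PostgreSQL; here the graph is just an adjacency dict
# mapping node -> list of (neighbour, edge_length) pairs.
def dijkstra(graph, source, target):
    queue = [(0.0, source)]          # (distance so far, node)
    best = {source: 0.0}             # best known distance per node
    while queue:
        dist, node = heapq.heappop(queue)
        if node == target:
            return dist
        if dist > best.get(node, float("inf")):
            continue                 # stale queue entry, skip
        for neighbour, weight in graph.get(node, []):
            nd = dist + weight
            if nd < best.get(neighbour, float("inf")):
                best[neighbour] = nd
                heapq.heappush(queue, (nd, neighbour))
    return float("inf")              # target unreachable

# Illustrative network: two possible routes from home (A) to school (D)
network = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("C", 1.0), ("D", 5.0)],
    "C": [("D", 1.0)],
}
assert dijkstra(network, "A", "D") == 3.0  # A -> B -> C -> D
```

As in the paper's workflow, what we actually store from each run is the resulting route distance, which then feeds into the emissions model.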
Routino and pgRouting were called using R, which also managed the large amounts of data, subsequently calculated the CO2 emissions model, and created graphical outputs (see below).
To run the routing for each pupil for four years' worth of data (we had data from 2007/8 to 2010/11, although we only used data from academic year 2010/11 in the paper) took about 14 days on my 27″ iMac. We considered using a cloud solution to shorten the run times, but given we were using sensitive data this was deemed too problematic (see the related blog post from Alex on this). This work highlights that it is possible to perform some types of big data analysis on a standard desktop computer, which lets us analyse sensitive data without relying on cloud or remote processing services, which are often not compatible with the restrictions placed on such data.
*As you would expect, the postcode unit is sensitive data and we had to apply to the Department for Education to use it. Any postcodes or locations used in this blog post are examples – e.g. L69 7ZQ is the postcode for my office!
Singleton, A. 2014. “A GIS Approach to Modelling CO2 Emissions Associated with the Pupil-School Commute.” International Journal of Geographical Information Science 28 (2): 256–73. doi:10.1080/13658816.2013.832765.
Cross-posted from http://geographicdatascience.com/r/2014/11/20/Home-School-Routes/
|
OPCFW_CODE
|
Silverlight is a cross-browser, cross-platform plug-in for delivering the next generation of Microsoft .NET–based media experiences and rich interactive applications for the Web.
Go here for more Silverlight related interview questions
1.) What features are missing from Silverlight presentation markup that will be supported in WPF (Windows Presentation Foundation)?
Some high-end Windows-specific features of WPF, such as real 3D, hardware-based video acceleration, and full document support, will not be supported in Silverlight. This is by design in order to serve Silverlight’s cross-browser, cross-platform reach scenario, which demands a lightweight plug-in. That being said, Silverlight will offer a uniform runtime that can render identical experiences across browsers on both Mac OS and Windows.
2.) Will Silverlight-based applications and content run on any Web Server? What are the benefits to running it on servers running Windows?
Silverlight works with any web server just like HTML. Video and audio content can also be progressively downloaded and played back from any Web server platform. Benefits of Windows server-based distribution of Silverlight applications include Windows Media Services with Fast Stream (instant playback) and Fast Reconnect technologies, lower distribution costs (streaming users only download what they watch), and the ability to tap into the full Windows Server ecosystem of platform components and partner solutions. Those benefits will be enhanced in the future version of Windows Server (code name “Longhorn”) and with Internet Information Services 7 (IIS).
3.) Is Silverlight supported on various locales?
Silverlight installs on localized versions of Windows and Macintosh computers. At this time, the installation is available in an international English format. Final releases will render international text (using double-byte characters) and support the full 64K Unicode character set. Silverlight uses a simple input mechanism that treats all languages in the same way.
4.) What are the different ways to display text with Silverlight?
Silverlight supports displaying static preformatted text composed of glyph elements, as well as dynamic text that uses TextBlock. With glyphs, one needs to position the characters individually, while TextBlock supports simple layout.
5.) What kinds of fonts are supported with Silverlight?
Beyond standard and western fonts, Silverlight also supports East Asian characters, double-byte characters, and can work with any East Asian font or Middle Eastern font by using the glyphs element and a supporting TrueType font file that supports the requested glyph.
6.) What is Microsoft® Silverlight Streaming by Windows LiveTM?
Microsoft® Silverlight™ Streaming by Windows Live™ offers a free cloud-based hosting and streaming solution for quickly delivering high-quality, high-scale, cross-platform, cross-browser, media-enabled RIAs.
7.) How much does Silverlight Streaming cost?
While the product is in Beta, hosting is free of charge: up to 4 GB of data, with streaming free up to 700 kilobit/s. At the conclusion of the Beta program, the developer can choose to enable Microsoft-sponsored advertising in the application for continued free use of the service, or subscribe to a pay-for-use service that is free of advertisements.
8.) What video encoding formats are supported?
The designer or developer is free to use any encoding format for their video supported by the Windows Media Video codec. This includes Variable Bit Rate (VBR) encoding for DVD-quality video and the use of the VC-1 codec for high-definition content. However, for HD content, be aware that the maximum output rate from the service is 700 kilobit/s, which means the client will not receive real-time delivery of HD video.
9.) Do you support digital rights management to protect my videos?
In the future, Silverlight Streaming will provide support for DRM-encoded video as an optional paid turnkey offering.
10.) What applications will Microsoft provide to make hosting easy?
Microsoft is building a simple uploading tool and working to add publishing support directly to Silverlight™ Streaming via Expression™ Media Encoder, a feature of Expression™ Media. In addition, third-party companies are adding support to their own applications for Silverlight™ Streaming.
11.) How is my content secured from unauthorized access?
You will have to be signed into the Silverlight™ Streaming service to manage your account and your Silverlight applications. Your Silverlight™ Streaming ID and secret key, associated with your Windows Live ID, will authenticate you as the unique and legitimate owner of the applications and content you upload to the service. You will also need this information to manage your Silverlight applications using the API. The Silverlight™ Streaming ID is public; however, the secret key should be kept confidential.
12.) How do I get started?
To sign up for your free account, visit streaming.live.com. Anyone with a Windows Live ID can participate.
1.) Is Silverlight the official name for “WPF/E”?
Yes. Silverlight was formerly code-named “WPF/E.”
2.) Do Silverlight web applications work with all browsers?
Yes. A web application developed with Silverlight technology can work with any supported browser.
3.) What are the main features and benefits of Silverlight?
-Compelling cross-platform user experiences.
-Flexible Programming Model with Collaboration Tools.
-High-quality media, low-cost delivery
-Connected to data, servers, and services
4.) How can I build experiences and applications with Silverlight?
Silverlight development tools include role-specific productivity tools for both designers and developers:
* Expression Studio empowers designers to create interactive UI and media-rich experiences, prepare media for encoding and distribution, and create W3C standards-compliant sites using modern XHTML, XML, XSLT, CSS, and ASP.NET. Expression Design includes support for exporting XAML for Silverlight. At MIX 07, Microsoft released Expression Blend 2 May Preview and Expression Media Encoder Preview to enable designers to build media experiences and RIAs.
* Visual Studio empowers developers to develop client and server code using full IntelliSense, powerful debugging, rich language support, and more.
By using Expression Studio and Visual Studio, designers and developers can collaborate more effectively using the skills they have today. Additionally, Silverlight supports a consistent subset of XAML (eXtensible Application Markup Language) for declarative programming, the same format found in .NET 3.0. Because XAML is toolable, there is always the potential for third-parties to provide additional XAML-based Silverlight tools in the future.
5.) How does Silverlight make the Microsoft development system better?
Silverlight is a cross-browser, cross-platform plug-in for delivering the next generation of media experiences and rich interactive applications (RIAs) for the Web. Examples include:
* For ASP.NET-based Web applications, Silverlight provides a rich UI front-end that, with a consistent programming model, adds support for richer interactivity, media, and audio.
* For Microsoft SharePoint–based content, Silverlight offers the ability to create rich Web parts.
* For Windows Live services, Silverlight offers the ability to consume services and APIs more effectively.
6.) What audio or video formats are supported in Silverlight?
Silverlight supports Windows Media Audio and Video (WMA, WMV7–9) and VC-1, as well as MP3 audio. Additional formats may be available by the final release based on customer feedback.
7.) Will Silverlight support all the codecs Windows Media Player supports?
Since Silverlight is a lightweight cross-platform technology, it only carries the most common codecs that are needed for Web playback. However, we are gathering information from customers about the needed codecs and can update Silverlight when necessary.
8.) Will Silverlight support digital rights management?
For content providers, Silverlight will support digital rights management (DRM) built on the recently announced Microsoft PlayReady content access technology on Windows-based computers and Macintosh computers.
Microsoft Silverlight is a cross-browser, cross-platform, and cross-device plug-in for delivering the next generation of .NET based media experiences and rich interactive applications for the Web. By using Silverlight’s support for .NET, High Definition video, cost-effective advanced streaming, unparalleled high-resolution interactivity with Deep Zoom technology, and controls, businesses can reach out to new markets across the Web, desktop, and devices(Fig. source : MS Site).
Silverlight provides a retained mode graphics system similar to WPF and integrates multimedia, graphics, animations and interactivity into a single runtime environment. In Silverlight applications, user interfaces are declared in XAML and programmed using a subset of the .Net framework. XAML can be used for making up the vector graphics and animations.
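To illustrate the declarative style, here is a minimal made-up XAML fragment of the kind a Silverlight plug-in renders (element names follow the Silverlight 1.0 schema; the layout and text are invented for this example, not taken from a Microsoft sample):

```xml
<!-- Illustrative example only: a rounded rectangle with a text label -->
<Canvas xmlns="http://schemas.microsoft.com/client/2007">
  <Rectangle Width="200" Height="60" Fill="SteelBlue"
             RadiusX="8" RadiusY="8" />
  <TextBlock Canvas.Left="20" Canvas.Top="18"
             FontSize="20" Foreground="White"
             Text="Hello, Silverlight" />
</Canvas>
```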
1.) Is Silverlight free?
Yes, Microsoft has made the Silverlight browser plug-in freely available for all supported platforms and browsers.
2.) What is the long-term goal or vision for Silverlight?
Microsoft Silverlight is a cross-browser, cross-platform, and cross-device plug-in for delivering the next generation of .NET based media experiences and rich interactive applications for the Web. Silverlight offers a flexible programming model that supports AJAX, VB, C#, IronPython, and IronRuby, and integrates with existing Web applications. By using Expression Studio and Visual Studio, designers and developers can collaborate more effectively using the skills they have today to light up the Web of tomorrow. By leveraging Silverlight’s support for .NET, High Definition video, cost-effective advanced streaming, unparalleled high-resolution interactivity with Deep Zoom technology, and controls, businesses can reach out to new markets across the Web, desktop, and devices.
3.) When would a customer use Silverlight instead of ASP.NET AJAX?
Silverlight integrates with existing Web applications, including ASP.NET AJAX applications. Consequently, ASP.NET AJAX and Silverlight are designed to be complementary technologies. In the broader sense, Silverlight can talk to any AJAX application, both client-side and server-side. ASP.NET AJAX can additionally be used to control Silverlight-based visualization of data or delivery of rich experiences. Examples might include mapping applications or video playback with rich presentation.
4.) Will Silverlight support live streaming events as well as downloading media?
Yes. Silverlight together with Windows Media Services enable live streaming experiences.
5.) When would a customer use Silverlight versus Windows Presentation Foundation? Is Silverlight for a certain type of application?
For ASP.NET-based Web applications, Silverlight provides a rich UI front-end that, with a consistent programming model, adds support for richer interactivity, media, and audio.
For Microsoft SharePoint–based content, Silverlight offers the ability to create rich Web parts. For Windows Live services, Silverlight offers the ability to consume services and APIs more effectively.
6.) Will Silverlight work with my new or existing Windows Media services platform for streaming?
Silverlight takes advantage of Windows Server features for streaming.
7.) What features are missing from Silverlight presentation markup that will be supported in the Windows Presentation Foundation?
Microsoft recommends the Windows Presentation Foundation for building rich immersive applications and experiences that can take full advantage of the Windows platform, including UI, Media, offline communication, OS integration, Office integration, peripheral access, Document support and more. Silverlight will be used for broad reach interactive media content and browser-based rich interactive and high-performance applications and experiences.
8.) Is Silverlight a new media player?
No. Silverlight is a cross-browser, cross-platform plug-in for delivering media experiences and RIAs. It is not a desktop application or stand-alone media player.
|
OPCFW_CODE
|
femwell/solve_thermal.py
$ python solve_thermal.py
Traceback (most recent call last):
File "/Users/lukasc/Downloads/solve_thermal.py", line 219, in <module>
from gplugins.gmsh.mesh2D import mesh2D
ModuleNotFoundError: No module named 'gplugins.gmsh.mesh2D'
Mac OSX 11.6.7, Intel processor
pip install: gdsfactory, gplugins, femwell
I have the latest version of gplugins (0.5.0)
$ pip install gplugins[gmsh,femwell]
Collecting gplugins
Obtaining dependency information for gplugins from https://files.pythonhosted.org/packages/b9/dd/c790baea29ff148805030d562cbc63f6c0eaa4b428f7d672b5f8ddb7f847/gplugins-0.5.0-py3-none-any.whl.metadata
Using cached gplugins-0.5.0-py3-none-any.whl.metadata (6.0 kB)
Requirement already satisfied: gdsfactory[cad]>=7.4.0 in /usr/local/lib/python3.10/site-packages (from gplugins) (7.4.6)
Seems like mesh2D is missing:
$ python
Python 3.10.9 (main, Dec 15 2022, 18:20:40) [Clang 13.0.0 (clang-13<IP_ADDRESS>)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import gplugins.gmsh
>>> dir(gplugins.gmsh)
['MeshTracker', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '__version__', 'annotations', 'break_geometry', 'cleanup_component', 'create_physical_mesh', 'fuse_polygons', 'get_layer_overlaps_z', 'get_layers_at_z', 'get_u_bounds_layers', 'get_u_bounds_polygons', 'get_uz_bounds_layers', 'list_unique_layerstack_z', 'map_unique_layerstack_z', 'mesh', 'mesh_from_polygons', 'meshtracker', 'order_layerstack', 'parse_component', 'parse_gds', 'parse_layerstack', 'refine', 'round_coordinates', 'tile_shapes', 'to_polygons', 'uz_xsection_mesh', 'xy_xsection_mesh']
Hoping that mesh2D was simply replaced by mesh, I made the change:
from gplugins.gmsh import mesh as mesh2D
And it runs, but then there is another error:
$ python solve_thermal.py
dict_keys(['substrate', 'box', 'core', 'shallow_etch', 'deep_etch', 'clad', 'slab150', 'slab90', 'nitride', 'ge', 'undercut', 'via_contact', 'metal1', 'heater', 'via1', 'metal2', 'via2', 'metal3'])
2023-09-09 14:10:55.633 | WARNING | gdsfactory.pdk:get_active_pdk:721 - No active PDK. Activating generic PDK.
2023-09-09 14:10:55.765 | INFO | gdsfactory.technology.layer_views:__init__:790 - Importing LayerViews from YAML file: '/usr/local/lib/python3.10/site-packages/gdsfactory/generic_tech/layer_views.yaml'.
2023-09-09 14:10:55.766 | INFO | gdsfactory.pdk:activate:334 - 'generic' PDK is now active
Traceback (most recent call last):
File "/Users/lukasc/Downloads/solve_thermal.py", line 227, in <module>
heater2 = gf.components.straight_heater_metal(length=50, heater_width=2).move(
File "/usr/local/lib/python3.10/site-packages/gdsfactory/component.py", line 2153, in move
raise ValueError(move_error_message)
ValueError:
You cannot move a Component. You can create a new Component, add a reference to the other Component and then move the reference.
For example:
# BAD
c = gf.components.straight()
c.xmin = 10
# GOOD
c = gf.Component()
ref = c.add_ref(gf.components.straight()) # or ref = c << gf.components.straight()
ref.xmin = 10
Hi Lukas,
We are in the process of fixing the gdsfactory-femwell interface for thermal simulations; the idea is that you will be able to extract the heater efficiency directly from the layout.
In the meantime you will need to use femwell directly:
https://helgegehring.github.io/femwell/photonics/examples/metal_heater_phase_shifter.html
@HelgeGehring
@simbilod
I think if you use
c = gf.Component()
...
c.to_gmsh(
type="uz",
...
)
it should work. I never used the mesh2D directly. @simbilod could you have a look as well?
Yes the 2D meshing interface is well-documented:
https://gdsfactory.github.io/gplugins/notebooks/meshing_03_2D_uz_mesh.html
Right now, to go beyond a mesh and do calculations requires a few manual steps, however. But in principle once you have the mesh as described above, you can use the femwell code directly from what Joaquin linked
Thank you for reporting this Lukas
Here is an initial fix
https://github.com/gdsfactory/gplugins/pull/106
We still need to add more useful examples on how to go from layout to thermal simulations
|
GITHUB_ARCHIVE
|
Richard Rutter is a web site producer living in Brighton and working in London for Multimap. He also runs his own weblog where he shares his personal perspectives on Web development and design, accessibility, usability and information architecture.
- 1. Your blog is a very successful three-column liquid layout. Even in an extremely wide browser window the content seems comfortably readable. How was this achieved?
Richard: I'm a big believer in liquid layouts. I believe liquid layout is more appropriate to a Web where known variants include screen resolution and window size. Those designing for the Web as a medium know their designs must work for any (reasonable) text size so why not any window width? I would hazard a guess that more visitors would know how to change window size than text size.
One of the problems cited against liquid designs is that lines of text can become unreadably long. I counter this in my blog by putting plenty of leading in the text (typographer's terminology for setting line-height to 1.5em). Spacing apart lines of text in this manner helps readers keep track of which line they are reading, and which to read next.
More important to the text layout is a technique I call concertina padding. Each of the three columns on my blog has a width set as a percentage of the window width. All horizontal margin and padding is also set as a percentage, so as the window is enlarged the padding increases, keeping the lines of text shorter than they would otherwise be and giving them more room to breathe. Similarly, as the window is shrunk, the padding decreases, giving more room to the text. I visualise the browser window as a concertina: as it is stretched and squashed, every horizontal dimension scales accordingly.
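A simplified sketch of the concertina idea (the selector names and percentages here are illustrative, not Richard's actual stylesheet):

```css
/* Column widths AND horizontal gutters are all percentages, so the
   gutters stretch and squash along with the browser window. */
#content {
  float: left;
  width: 46%;
  padding: 0 5%;       /* grows as the window widens */
  line-height: 1.5em;  /* generous leading for long lines */
}
#sidebar {
  float: left;
  width: 18%;
  padding: 0 3%;
}
```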
- 2. You have written about how images can be scaled within liquid layouts using max-width. Could you explain your preferred method?
Richard: Probably the biggest issue facing liquid layouts is how to contain and display fixed width elements such as images and flash movies. Inherently with a liquid layout one never knows the width of a column which inevitably leads to situations where an image is wider than its container and breaks the layout.
There are two approaches to address the issue: automatically shrink the image to fit the narrow column, or mask the bit of the image that is too wide for the column. Masking using overflow:hidden works well for panoramic images; alternatively the image can be set as a background if it is little more than decoration. In all other situations, my preferred technique is to set the image width to 100%, which will shrink (or expand) the image to fit its container, and set a max-width at the true image size to prevent it expanding. Of course max-width doesn't work in IE/Win, so you can either accept the image being expanded or apply other max-width techniques such as Dean Edwards's IE7.
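In CSS, the preferred technique described above comes down to two declarations (the pixel value below stands in for the image's true width; the selector is illustrative):

```css
/* The image shrinks with a narrow column, but never grows beyond
   its real size. 400px is an example value -- use the image's
   actual pixel width. */
#content img {
  width: 100%;
  max-width: 400px;
}
```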
- 3 Around 12 months ago you did some research into access keys. Do you think they are relevant and how would you implement them on a site?
Richard: I still use accesskeys for the global navigation on my site, but wouldn't use them as a matter of course. In theory they are a good idea, particularly for their original purpose, which was to replicate the behaviour of OS dialog boxes and speed up form filling. One of the hurdles facing accesskeys is that they require the same keystrokes as assistive software such as JAWS, leaving only the numbers available for general use. In the post you mention, I tried to ascertain whether there is an adopted standard set of accesskeys for global navigation and the answer was not really - another problem. But most importantly, browsers provide no visual support for accesskeys, so we are forced to code in our own, such as underlines. This in itself causes problems, as highlighted recently on the WSG list, as some assistive software will read something like Search (with its initial letter underlined) as "S-earch".
All-in-all accesskeys could be used successfully in controlled environments such as Intranets, but their usefulness on the public Web is sadly limited.
- 4. You define your dropdown menus as 'accessible dropdowns'. What is it that makes them accessible?
Richard: It's also worth noting that the dropdown is activated using an onclick event handler. According to WAI guidelines this is a failure, as onclick implies a mouse must be used; in reality the dropdown menu can be operated entirely with a keyboard, using just the tab and return keys.
- 5. You have recently written about text-sizing using ems. What are the advantages and disadvantages with this method?
Richard: In an ideal world we'd all be sizing text in pixels, but if we want IE/Win users to be able to change their text size - which we do - then we need to size text using a relative unit such as keywords, percentages or ems. I find that using ems gives me more precise control of text size than do keywords and with a little practice ems are not much harder than using pixels.
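One common pattern along these lines (the values below are illustrative) is to set a percentage base size on the body and then size individual elements in ems relative to it:

```css
/* With a 62.5% base, 1em is roughly 10px in most browsers,
   so em values map neatly onto intended pixel sizes. */
body { font-size: 62.5%; }
p    { font-size: 1.4em; line-height: 1.5; }  /* ~14px */
h1   { font-size: 2.4em; }                    /* ~24px */
```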
- 6. I have heard rumours that you have just launched a new-look Multimap. The obvious question - is it built using web standards?
Richard: Yes it is, but to be fair it used Web standards beforehand in as much as it validated. We'd got as far as removing all the font tags but the site was still chock full of nested tables and the usual non-semantic mark-up that you would expect to be inherited from a few years ago. We rebuilt the site using meaningful mark-up for two big reasons: ease of maintenance and bandwidth reduction.
The templates used throughout Multimap.com are constructed from heaps of server-side includes. Making changes to the site was a tricky business as any given include might contain one table inside part of another table. Now all the includes are really simple: most just have a div with a proper heading and a list. We're already noticing the increase in speed with which we can make changes. The move to meaningful mark-up has eased this surprisingly stressful part of the job.
As for bandwidth, roughly speaking Multimap.com serves 4 million pages a day. On average, the HTML of the old site weighed in at 65kb per page. The new site pages are half that at 35Kb. That's a saving of 40,000 Gb of bandwidth per year! I'll leave you to translate that into money, but I can tell you the move to Web standards paid for itself within a month.
- 7. It is a very complex layout, yet still full CSS. How did you go about building it?
Richard: The incumbent design is so rigid I thought I'd have to go for a hybrid build with a single table for the main layout and CSS for the remaining presentation. With that in mind, I didn't want to waste days struggling with an all CSS solution to the layout so I gave myself just a morning to find one. I came across some brilliant 3-column layouts which had the added bonus of allowing me to code the middle column straight after the header (thereby ensuring the H1 and the map appear early to Google and non-styled renderings). I will admit to some surprise that, within two hours, I had the basic layout of header, three columns and footer, centred and working across my browser suite.
After that I printed out screen shots of all the different kinds of pages across the entire site (home page, map pages, driving directions, local information, etc.) and picked out all the common structures. From this I worked out a mark-up strategy for each chunk of content, annotating each printout by hand to identify where I would use divs with a given class, headings, unordered lists, tables and so on. Because I planned this site-wide in advance I could reduce the number of classes and ids used and much increase the visual consistency from screen to screen.
In case you're interested I develop primarily in Firefox. Its excellent rendering engine and tools such as the DOM inspector, the Web Developer toolbar and view background image context menu, make Firefox by far the best browser for development. As I get it working in Firefox, I test on Safari and the IE6/Win. I always leave IE5 testing to a later stage.
- 8. Were there any major obstacles you faced when building it?
Richard: Netscape 7 proved to be the biggest pain; one of Multimap's directors browses with Netscape 7 on a Mac so there was no getting away with any discrepancies on that particular platform. Netscape 7 is built on the Mozilla 1.4 branch which has quite a few rendering bugs long since fixed, so it was actually more tricky getting that to behave than Internet Explorer which at least is well documented bug-wise.
I would like to mention that at no point was management buy-in an obstacle to moving to Web standards; the opportunity was there and the benefits of moving to Web standards were so clear the go-ahead was never an issue.
- 9. Did you have to use any CSS hacks?
Richard: One: I used the Blue Robot method of centering a fixed width box. It's not really a hack though, more an addition of style rules for IE/Win (but interpreted by all).
- 10. The big question everyone seems to be asking at the moment - did it take you longer to develop this site using web standards than if you had built it using traditional methods?
Richard: The development process was definitely quicker using Web standards, in fact the idea of rebuilding Multimap.com with nested tables fills me with horror! I had the common templates built in two days, with a further day of tweaking to fix some minor display issues here and there. After that it was just a case of rebuilding the remaining bits of the site in the same mold, a job made easy because of the simple, meaningful mark-up used throughout.
- Thank you for the interview!
- Richard: You're welcome Russ. It was tough but fun!
|
OPCFW_CODE
|
About | ITSELECTA – IT Recruitment Agency
Located in the heart of Krakow, Poland, ITSELECTA is an IT recruitment consulting agency delivering top IT talent for outsourcing or in-house projects. Although it originally specialised in IT and multilingual recruitment, it also provides high-quality services for administrative and executive positions.
In fact, during 2019 alone, ITSELECTA achieved strong results placing candidates across a varied set of roles; success cases include country managers, customer care representatives specialised in three foreign languages, and IT professionals all over Europe.
The success of ITSELECTA IT recruitment firm depends on an international team of skilled recruiters and business developers dealing with different markets with one objective in mind: solving challenges and helping to build internal and external IT and multilingual teams. The diversification of work makes the team efficient.
Each team member has a specific project to take care of based on his or her knowledge of the market, language and educational background. The development and results of the projects are shared daily and followed by the other recruitment team members to have an overview of the ongoing performance of the company.
The continuous cooperation and share of information allow ITSELECTA to be up to date and to provide optimal solutions and innovative ideas to deliver better service, day by day.
ITSELECTA works as a matchmaking company providing two services: finding talent for companies, and helping candidates find the right vacancy at the right company.
In the first case, ITSELECTA implements a rigorous business development plan: it stays in regular contact with its customers, which range from corporations to start-ups, to learn the ideal candidate they are looking for, the method of employment and the details of the deal. Based on that, tailored interviews and projects are created and assigned to ITSELECTA members. Here’s when the magic happens: the team has access to a large database of candidates, collected and organised by their strengths and career expectations. If this is not sufficient, recruitment continues through various platforms and tools.
In the second case, finding the right vacancy for candidates, ITSELECTA screens every CV and contacts each potential candidate, listening to their expectations and needs and offering the most suitable job available. If the vacancy requires a specific language level, ITSELECTA can assess it in most cases thanks to its multicultural team, which covers most of the languages spoken in the EU. Once the candidate is presented to the company, the team remains available for extra support.
ITSELECTA's location in Krakow is another key point, since the city is one of Europe's most vibrant and is in full technological development. Thanks to this position and agreements with many international companies, we can provide job seekers with a huge range of recruiting opportunities, and likewise we can offer our customers professionalism, efficiency and top talent.
|
OPCFW_CODE
|
azure-webapp:config with default choices creates bad config
Plugin name and version
Azure Webapp Maven Plugin v1.5.3
Plugin configuration in your pom.xml
Starting config:
<plugin>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-webapp-maven-plugin</artifactId>
<!-- check Maven Central for the latest version -->
<version>1.5.3</version>
</plugin>
Expected behavior
I ran the mvn azure-webapp:config command with the default options (did not enter an index, I just hit the return key on each prompt). It should create a Linux app with JRE 8.
Actual behavior
This is the config generated by the mvn azure-webapp:config command. When I later run mvn azure-webapp:deploy with this config, it creates an App Service that has a Tomcat webapps/ROOT/ directory. When I later deploy my app.jar, the app does not start and I get a 500 error.
<plugin>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-webapp-maven-plugin</artifactId>
<!-- check Maven Central for the latest version -->
<version>1.5.3</version>
<configuration>
<schemaVersion>V2</schemaVersion>
<resourceGroup>maven-deployment-1555456770537-rg</resourceGroup>
<appName>maven-deployment-1555456770537</appName>
<region>westeurope</region>
<pricingTier>P1V2</pricingTier>
<runtime>
<os>linux</os>
<javaVersion>jre8</javaVersion>
<webContainer>jre8</webContainer>
</runtime>
<deployment>
<resources>
<resource>
<directory>${project.basedir}/target</directory>
<includes>
<include>*.jar</include>
</includes>
</resource>
</resources>
</deployment>
</configuration>
</plugin>
I think the problem is the <webContainer> tag. It should probably be blank (or even better--not there).
Steps to reproduce the problem
I was working on a tutorial for App Service. Instructions are here.
Here is the terminal output that generated the XML above.
PS C:\Users\jafreebe\Desktop\java-on-app-service\maven-deployment\initial> mvn azure-webapp:config
[INFO] Scanning for projects...
[INFO]
[INFO] --< com.microsoft.azure.samples.java-on-app-service:maven-deployment >--
[INFO] Building maven-deployment 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- azure-webapp-maven-plugin:1.5.3:config (default-cli) @ maven-deployment ---
[WARNING] The plugin may not work if you change the os of an existing webapp.
Define value for OS(Default: Linux):
1. linux [*]
2. windows
3. docker
Enter index to use:
Define value for runtimeStack(Default: jre8):
1. tomcat 8.5
2. tomcat 9.0
3. jre8 [*]
4. wildfly 14
Enter index to use:
Please confirm webapp properties
AppName : maven-deployment-1555456770537
ResourceGroup : maven-deployment-1555456770537-rg
Region : westeurope
PricingTier : Premium_P1V2
OS : Linux
RuntimeStack : JAVA 8-jre8
Deploy to slot : false
Confirm (Y/N)? :
[INFO] Saving configuration to pom.
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 59.459 s
[INFO] Finished at: 2019-04-16T16:19:55-07:00
[INFO] ------------------------------------------------------------------------
Notice in the image below that the site has a default.jar for the JRE parking page, and the Tomcat parking page JSPs under webapps/ROOT/. The site config confirms that the site is a Java 8 SE image (from the config: "linuxFxVersion": "JAVA|8-jre8"). Very strange!
If I use this configuration I get the same issue.
<runtime>
<os>Linux</os>
<javaVersion>jre8</javaVersion>
<webContainer></webContainer>
</runtime>
@JasonFreeberg The App Service JavaSE image does contain the webapps/ROOT/ folder and default.jar; default.jar is executed when the user has not uploaded their own application. Here is a snapshot of an empty Linux App Service.
To run a jar application such as a Spring Boot project, please also set port forwarding in the configuration, as App Service only opens port 80 for the web app. Here is an example:
<build>
<finalName>app</finalName> <!-- Set name to app.jar -->
<plugins>
<plugin>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-webapp-maven-plugin</artifactId>
<version>1.5.4</version>
<configuration>
...
<appSettings>
<!-- Set port for app service-->
<property>
<name>PORT</name>
<value>8080</value>
</property>
</appSettings>
...
</configuration>
</plugin>
</plugins>
</build>
Besides, App Service takes some time to recycle the old web container and run the newly uploaded artifact, about 5 minutes; the App Service team is working on this issue. You may run ps -al in SSH to see whether app.jar is running.
Today, every time we publish a new jar, the container in App Service is recycled. It takes some time before the web app can be reached again. Ideally, only the JVM should be restarted without recycling the whole container. This functionality is expected to arrive by the end of June.
@Flanker32 -- We can close this. Thanks Hanxiao
|
GITHUB_ARCHIVE
|
How to calculate the angles between axis and bounding box normals?
I would like to determine the torsion of an object's bounding box in reference to the global coordinate system (such that for an axis-aligned object all angles will be 0).
My idea was to create a bounding box object bb via bpy.ops.mesh.primitive_cube_add() and copy its parameters from the object obj:
centre = sum((Vector(b) for b in obj.bound_box), Vector())
bb.dimensions = obj.dimensions
bb.rotation_euler = obj.rotation_euler
bb.location = (centre[0], centre[1], centre[2])
bb.location = obj.matrix_world * bb.location
Then I tried to calculate the angles:
for poly in obj.data.polygons:
# angle of bb normal against z axis
angle = Vector((0,0,1)).angle(obj.matrix_world * poly.normal)
print("%6i %.3f°" % (poly.index, degrees(angle)))
But the values don't make any sense. I am not sure whether this is a good approach at all.
Any suggestion to solve this problem?
Another idea might be to align the object to the axes somehow and then read out the difference from rotation_euler
What you're looking for is the Vector.rotation_difference function. It gives you the rotation (as a quaternion) between two vectors. https://www.blender.org/api/blender_python_api_current/mathutils.html?highlight=rotation_difference#mathutils.Vector.rotation_difference
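As a standalone illustration (outside Blender, so without importing mathutils), the rotation difference between two vectors reduces to an axis (their cross product) and an angle (arccos of their dot product). Inside Blender you would simply call v1.rotation_difference(v2), which returns a Quaternion; this plain-Python sketch only shows the underlying math:

```python
import math

def rotation_difference(a, b):
    """Return (axis, angle_in_radians) rotating unit vector a onto unit vector b."""
    dot = sum(x * y for x, y in zip(a, b))
    dot = max(-1.0, min(1.0, dot))          # clamp against rounding error
    angle = math.acos(dot)
    axis = (a[1] * b[2] - a[2] * b[1],      # cross product gives the rotation axis
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
    return axis, angle

# Rotation carrying the Z axis onto the X axis: 90° about the Y axis.
axis, angle = rotation_difference((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
print(math.degrees(angle))  # ≈ 90.0
```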
It is unclear what you mean by "torsion". The obj.rotation_euler describes how the object is transformed from the world coordinate system. Each of the normals of the bounding box points in a different direction, so the angle between that normal and the z axis will vary. You probably need to sit down and think about your question. How does your question apply to a sphere? How would it apply to Suzanne (the monkey head)? How would it apply to a boomerang model?
@MutantBob Sorry for my unclear explanation. I'll try to rephrase: considering Suzanne (or any complex object) I would take its bounding box cuboid. So let's imagine a cuboid with an arbitrary position and rotation in the scene. Now I'd like to know which rotation I have to apply to the cuboid on each axis to reach axis alignment. But I am thinking now about your hint; maybe the solution is simply the euler rotation?
@TLousky Thanks for your hint; the function gives the same result though. But it confirms that the problem is not the angular calculation but the bounding box normal vectors
The orientation of an object is obj.rotation_euler or obj.rotation_quaternion (and whichever one does not match the object's rotation_mode can be invalid or inaccurate).
The orientation can also be recovered from obj.matrix_world.decompose()[1], but that's a little bit of overkill.
For a slightly deeper dive down the rabbit hole, check out Relationship between global vertex coordinates and mesh object's matrix attributes .
|
STACK_EXCHANGE
|
|7 Jan 2005, 07:02 AM||#1|
Join Date: Jan 2004
FAQ: Problem burning with USB devices
The burning is very slow.
Go into device manager, right click on USB devices, select properties, and do a search for updated drivers. The search will find any new ones and install the appropriate drivers. These can also be found on Microsoft Windows Service Pack 1.
I don't know if my computer have USB 1.1 or USB 2.0?
To check if your USB port is USB 2.0 or standard USB 1.1, go to "Control Panel" --> "Device Manager" --> "Universal Serial Bus Controller" to see if you have "USB 2.0 Root Hub" listed or "Enhanced Universal Serial Bus Controller" thereunder.
Are the external DVD burners compatible with USB 1.1?
Yes, they are compatible with USB 1.1. However, you will only be able to burn CDs and read CDs/DVDs; you cannot burn DVDs because the speed is too slow.
Last edited by cynthia; 7 Jan 2005 at 07:07 AM
|1 Feb 2005, 04:55 AM||#2|
Join Date: Jan 2004
Semaphore Timeout Issue
You receive this error when you try to burn with DVD Decrypter using a USB 2.0 connected burner.
The ALI chips are the main culprit. It's some kind of weird combination between the XP drivers (ME appears to be unaffected), USB 2.0 onboard connections, and the ALI PCI-to-USB bridge chips. When installed internally on an IDE connection or when connected to a PCI USB 2.0 card, it seems to work fine. I've verified the IDE connection fix myself, as have others, here and in other forums. I've connected the card and just need to swap around the cable and put the drive back in the enclosure to test. Connecting by FireWire also seems to work, but I don't have FireWire to test with. Oh, and USB 1.x works, even when connected to the mobo, i.e. a 1.x hub connected to the 2.0 mobo port. BUT you get speeds at less than 1x that way, so that answer, while it works, is not very practical.
So far, we've only been able to see the problem occur under ALI bridge chips. My drive has one, and at least 2 or 3 others who can verify their chips have shown us that those who have the problem are ALI bridges and those that have verified their working chips aren't ALI's. Plus, when removed from the ALI drive enclosures and put on other chips or used internally as IDE's, they work. So, thus far, the ALI chip has been our main culprit to focus on.
The following parts are highlights from a thread in the DVD Decrypter forum. Credit: dbminter
The solution is to remove the drive from its current enclosure and install it internally, or in a new enclosure that does not use ALI chips. Belkin enclosures we know work from experience, and we know that removing the drives and placing them internally works. And I know from other forums and boards that external drives that had issues, not just the semaphore, have been shown to work when removed from the enclosure and installed internally as ATA. Thus, we can pretty much state it's the USB-to-IDE bridge chip. Whether they're all ALI, though, is still in question.
Use ElbyCDIO as the interface, after installing it from something like CloneDVD. I've gotten that to work, but not any other interface I can recall. ElbyCDIO is just the interface layer; it won't be any different for you from using any of the others - except it'll work where SPTI won't on crappy semaphore timeout issues!
Well, we know it's a problem with the SPTI drivers, since they aren't in Windows ME, and switching from SPTI has (Sometimes) been the kill all solution for the timeout. (So far, the only sure fire solution is to remove the drive from the enclosure and install it internally, or connect the external drive by FireWire if offered.) Thus, I can't really see how the ME drivers would help. But, maybe Lightning might know of a reason.
There are a few things that can be done, but, you're not going to like them. For any external drive where you've gotten the semaphore timeout, you can try these different things:
1.) connect the drive by FireWire if it and the PC have the capability
2.) remove the drive from the USB enclosure and install it internally in the PC
3.) install a USB 2.0 card in one of the PCI expansion slots and connect the drive to it
4.) use Windows ME instead of XP
5.) remove the drive from the USB enclosure and install it in a new enclosure, like Belkin, that doesn't use one of the known bad chips
6.) connect the drive to a USB 1.x hub, however, be aware your burn speed will be less than 1x.
Okay, here are some things that may or may not work.
1.) XP Service Pack 2. I've not been able to see if #2 requires this or not, and, I doubt I can be bothered with testing it over and over again, truth be told.
2.) if you experience a semaphore timeout, a reboot might help IF done in one of two ways: a.) power off the drive and then back on BEFORE you turn the PC on; b.) leave the drive powered off until AFTER Windows boots up, then turn it on.
Right now, a burn is burning after the semaphore timeout occurred after XP SP 2 had been installed. But, I've not been able to determine if the drive needed a power cycle after the service pack installation to avoid the timeout or if you need to power off and power on the drive each time before Windows starts or to leave the drive powered off and then power it back on after Windows loads.
But, so far, at least one of the above, or a combination or multiple combinations of the above seems to work...
However, I've also noticed that the majority of times the burn seemed to work in instances where I was testing explicitly for it, it seems, though no solid proof can be discerned from this, is that if you're going to use an external USB 2 DVD writer with XP, you may want to shutdown, turn off the power to the DVD drive, restart into Windows, making sure the DVD drive is powered off. Then, power on the drive after Windows starts. Whenever I did this, I did not encounter any semaphore timeouts, as opposed to doing this, then, a burn, then rebooting with the power to the drive on, then another attempt burn, reboot, burn again, etc. That way, every other burn was almost always a semaphore timeout cancel.
So, I'd recommend trying that for anyone with semaphore timeout problems: before doing a burn, shut down the PC, turn off the DVD drive's power, boot back into Windows, then turning the drive's power on, and trying.
Works perfectly with latest version of Nero. No semaphore timeouts whatsoever.
I have had this damn semaphore problem for ages. After reading this thread again, I tried Nero v126.96.36.199, and so far it works
like a charm (touch wood) - no timeouts whatsoever. I would like to be able to use DVD Decrypter or RecordNow, but if it means buying a new enclosure, then Nero will have to do.
I don't think there can be a fix that doesn't require hardware. ALI didn't design these chips to be updated, but, they have updated future chip sets to do this, so, they know there's a problem. You can't even poll the chips for version number, BIOS revisions, etc. In fact, ALI's solution when I talked to them is to simply replace the chip with one of their newer ones. How convenient. I'd just as well replace it with a known WORKING chip set, thank you very much!
So, the only solutions so far, and, I believe, the only one we'll come across since ALI won't be forthcoming with info:
a.) remove the drive from its external enclosure and install it internally on IDE
b.) connect the drive by Firewire, installing a Firewire card if necessary
c.) remove the drive from its external enclosure and install it in another external enclosure that has a Firewire connection and connect the drive by Firewire, installing a Firewire card if necessary
d.) remove the drive from its external enclosure and install it in another external enclosure that uses a known good PCI to USB bridge chipset.
The nearest I can tell it's a combination driver and hardware issue. It only affects certain hardware and certain driver combinations. On Windows ME, the problem apparently doesn't exist. On XP, it only exists on ALI chipsets connected by USB.
Sometimes, it doesn't affect USB drives connected to hubs. It doesn't affect USB 1.x. So, which do you blame? Later ALI chips, according to ALI so take it with a grain of salt aren't affected. But, since Windows ME doesn't have a problem, it's a problem with the driver when used with specific hardware. So, if it's a driver, which is it? Probably impossible to track it down. I can't remember if I tried SP 2 or not. It's probably earlier in this thread. I made a lot of on board notes as I conducted my tests so I'd know in future what I had and hadn't done and what happened.
I downgraded back to DVD Decrypter ver 188.8.131.52 and the problem went away again.
Last edited by blutach; 6 Nov 2005 at 11:25 PM
|22 Feb 2005, 03:37 PM||#3|
Join Date: Nov 2002
Location: East, TX
This thread has been closed/locked, as it is for information purposes only. If you have a question regarding this thread, please contact a moderator.
Moderators of the FAQ/Knowledgebase forum
Last edited by admin; 26 May 2007 at 02:42 AM
|
OPCFW_CODE
|
XP Deus Program Settings, Features and Modes Explained, page 2
DISCRIMINATION: Function, Options & Effects On Metal Detector's Performance
(...CONTINUED from Previous Page)
Discrimination feature of the XP Deus metal detector comprises two types: Conventional Discrimination and Tonal Discrimination (see next page). Besides classic level-adjustment option (from 0 to 99), the CONVENTIONAL DISCRIMINATION feature offers the following three options:
If one type of unwanted targets such as frequently detected pull tabs has to be rejected, a user can utilize Notch discrimination (page 15 in the user's manual) to reject a conductivity group of six points which would include conductivity values corresponding to the pull tabs.
And desirable targets with conductivity values outside this group, lower and higher, will not be rejected. The 6-point rejection group can be placed anywhere on the Discrimination / Conductivity scale.
If the unwanted target(s) generate a wider conductivity range (more than 6 points), you can utilize a Multi-Notch option (page 40 of user's manual) which enables you to widen the rejection window. Up to three rejecting notches, N1, N2 and N3, can be used. So if you do not wish to bother with the most common varieties of junk targets, you can create a highly effective notch-pattern of Conventional Discrimination.
However, be aware of "blurry" edges - lower and upper 'threshold points' (breakpoints), of any rejecting notch. If the notch does not have "sharp edges", it is wider than it is supposed to be, and if the VDI (Visual Display Indicator) number of a desirable target "lands" into the "plume" of the rejecting notch, the Deus' sensitivity to this target will be also abated.
NOTE: Both Notch and Multi-Notch options are quite useful for metal detecting at hunt sites that are highly contaminated with MODERN trash. These Conventional Discrimination options should NOT be used at the hunt sites containing no modern junk because 1) the Deus' detecting range is reduced when either Notch or Multi-Notch options are implemented - not a good idea at any medieval site!, and 2) some detected ferrous and non-ferrous targets may be valuable artifacts and should not be rejected.
Always remember that the Conventional Discrimination feature is one of those few features (Ground Balance, Sensitivity and Reactivity) that are very susceptible to a "human factor". If you accidentally set up Conventional Discrimination wrong, you will render all good work of the Deus firmware useless!
DISCRIMINATION IAR (in v3.0 and subsequent versions)
NOTE: A factory pre-set program #10 - GOLD FIELD, was added to the v3.0 and subsequent versions. This program uses a different principle of discrimination, called IAR (Iron Amplitude Rejection), and allows for setting the Conventional Discrimination level in a range from 0 to 5.
This range applies only to iron targets that induce strong signals typically associated with shallow and/or large ferrous objects. The IAR will not reject deeper NON-FERROUS targets that are detected through the highly-mineralized soil, and whose induced electromagnetic fields are received by the search coil as "weak" (the low-amplitude signals) and, therefore, may be identified as ferrous by the Discriminator circuit.
More details are given in the "Pumping Mode for Program #10" section on my XP Deus Ground Balance Modes page.
The range of the IAR Discrimination is from 0 to 5. With a zero value, all ferrous targets will obviously be accepted. Setting the IAR Discrimination level at 5 will enable the IAR Discriminator to silence the iron objects generating signals of lower amplitude, i.e. deeper targets. As the gold nuggets at relative depths in the highly mineralized ground normally generate the iron-like audio responses, it is best to keep the IAR Discrimination level at low values to detect as many nuggets as possible.
|
OPCFW_CODE
|
Arduino - Motor PID Speed Control - Hackster.io
Connect Stepper Motor to Arduino and control it with Rotary Encoder - Quick and Easy! Arduino + Visuino: Control Stepper Motor with Rotary Encoder by Boian Mitov
Arduino Mega - DC Motor Speed Measurement using Rotary Encoder
With PID control, the speed of a motor can be achieved exactly. This article mainly introduces making a program in the Arduino Pro Mini, and a program on the computer (Visual Studio) to …
Micro DC Motor with Encoder-SJ01 SKU: FIT0450 - DFRobot
In this first part of controlling a Stepper Motor with a Rotary Encoder, we will use the 28BYJ-48 stepper with the included ULN2003 driver board.
Activity 6 Part (a): Time-Response Analysis of a DC Motor
In a motion control system, you drive the motor at some speed (use PWM built into the Arduino AVR) until the encoder tells you you’re approaching the target position, then you decelerate as …
Control a Stepper Motor using an Arduino and a Rotary
Arduino + 2A Motor Shield + Encoder Motor. This tutorial is to verify or count the output pulses from a quadrature-encoder motor using an Arduino. Further, adding an Arduino LCD Keypad Shield can help us control the DC motor connected to the MDS40A with the 6 momentary push buttons (built-in push buttons on the LCD Keypad Shield).
Arduino PID motor position and speed control - YouTube
Introduction. With PID control, the speed of a motor can be achieved exactly. This article mainly introduces making a program for the Arduino Pro Mini on your computer (using Visual Studio) to control motor speed by a PID algorithm.
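As a language-agnostic illustration of the PID loop these tutorials describe, here is a minimal sketch in Python. The gains and the first-order "motor" model below are invented for demonstration only; on real hardware the measured speed would come from the encoder and the output would go to a PWM pin:

```python
# Minimal PID speed-control loop against a toy first-order motor model.
# KP/KI/KD and the plant are illustrative, not tuned for any real motor.
KP, KI, KD = 0.8, 0.5, 0.05
DT = 0.01  # control loop period in seconds

def pid_step(setpoint, measured, state):
    """One PID update; 'state' carries the integral and previous error."""
    error = setpoint - measured
    state["integral"] += error * DT
    derivative = (error - state["prev_error"]) / DT
    state["prev_error"] = error
    return KP * error + KI * state["integral"] + KD * derivative

# Simulate driving the toy motor toward a 100 RPM setpoint for 20 seconds.
state = {"integral": 0.0, "prev_error": 0.0}
speed = 0.0
for _ in range(2000):
    drive = pid_step(100.0, speed, state)
    speed += (drive - speed) * DT  # toy plant: speed chases the drive signal
```

The integral term is what removes the steady-state error here; a plain proportional controller would settle below the setpoint on this plant.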
DC Motor Speed Control with PID - Hackaday.io
I want the encoder to control the motion of the stepper motor, but I am new to Arduino so I am not sure how to build such code. E.g.: 1) I want the stepper motor to start rotating clockwise at a speed X when the encoder reads a value Y0. 2) Then when the encoder reads a value Y1, I want it to signal the PC to start recording the value of the encoder.
Rotary encoder interfacing with Arduino - DC motor speed
Controlling A DC Motor With Arduino. 19 Sep 2016, by Chris @ BCR. In this tutorial we will be using an Arduino to control the speed and direction of a DC motor. For this tutorial we will be using our basic DC Hobby Motor, but this tutorial can be applied to just about any DC motor.
|
OPCFW_CODE
|
Arrival 11h55 Rotterdam Airport / Departure 13h40 Schiphol Airport with EasyJet
I will arrive on Monday 16th May to RTM Airport at 11h55 and need to catch the EasyJet flight EZY2726 to Milan Italy at 13:40 at Schiphol Airport.
My plan is to book a driver in advance; apparently, the transfer to Schiphol can be done in 40 min, maybe less.
So I will arrive at, let's say, 12h40 at Schiphol and still have 1 hour before the flight.
I have no luggage and already have a boarding pass.
But I know connection to EasyJet gate can be long and that they close the gate pretty early before the take off so I'm a bit concerned.
Do you know the closest entrance to EasyJet's usual gate?
Any idea? Is it totally crazy?
I can't say whether it's crazy, but it's certainly risky. You have to be prepared for the possibility that your arrival in Rotterdam will be delayed and your departure from Schiphol will not. In that case, you'll miss the flight and will have to find another way to Milan.
Let us know how it went next week!
It did sound a bit crazy at first, but with a bit of luck, it should be doable, at least if you do have a driver on hand and your incoming flight is not delayed. Rotterdam airport is tiny, no time lost walking to the baggage claim area and exit or anything, you can literally be in front of the terminal building in 5 min.
At Schiphol, EasyJet flights depart from the H pier (departure 3/4, EasyJet check-in/baggage drop-off and the entrance to the pier are at the end of the departure hall). IIRC, there are signs outside and your driver should know where that is, it's the end of the last terminal. And you don't need to be at the gate an hour in advance. In fact, passengers have to wait in the H-pier lounge and the actual gate is only shown something like 30 min before departure time.
Taxis/professional drivers (but not regular folks, I think) can drop you on the upper level (departure), right in front of EasyJet's check-in desks and the entrance to the H-pier is right behind them (private cars can do it too, but a bit further away from the terminal building, see comment). The distances are actually very short compared to most low-cost piers and you can walk to the gate in 5-10 min. The main bottleneck is the security check, usually it's OK but I missed a flight once because of it (and EasyJet in particular won't accept any responsibility for it, they charged me €50 to change my ticket to the next flight).
Another concern is the motorway, I think you need more than 40 min at the best of times (especially if you obey the speed limits, which taxis usually do in the Netherlands, at least on the motorway) and delays are extremely frequent, there are traffic jams every day and more than one hour delay is easily possible on a busy rush hour. On the other hand, Monday is a bank holiday and 12 is not a bad time. Have cash on hand, taxis don't always accept credit cards in my experience.
Private cars are also allowed to drop off passengers on the departure hall level, just like the taxis, just one lane farther out.
Monday will be a day off, so I would expect more relaxed traffic, but who knows
@EugenMartynov It's a bank holiday, yes, it's already mentioned in the answer.
For anyone who's going to try this in the future: I'd highly recommend taking a taxi to the Rotterdam Central station and then from there a train to Schiphol. The trains in Schiphol drop you off right in the center of the airport.
The big plus here is the train from Rotterdam Centraal to Schiphol. You have two choices - the Intercity Direct (25 minutes, it's a high speed train, specifically for that route, that makes no stops in between) and the Thalys (21 minutes, also high-speed, no stops in between.) The Intercity Direct costs €2.40 plus the normal train fare, the Thalys costs considerably more.
It's about 9 minutes by taxi from Rotterdam airport to Rotterdam Centraal, so that plus a 21 minute train means you can (in theory) get from airport to airport in 30 minutes.
The reason I prefer a taxi+train is that it gives you more options. If there's train delays (which you can check on your phone) then you're already in a taxi. If there's traffic, you have a train booked.
|
STACK_EXCHANGE
|
Python Scripting in Kicad
When do engineers actually use the python scripting feature available in CAD software like KiCAD.
I am designing an amplifier and I am thinking about using a python script to check for max voltage rating of parts and compare them to operating point generated by spice simulation and throw an error if the operating point voltage exceeds the max voltage of the part.
However I am an amateur hobbyist having no experience in the industry.
So how is Python scripting used in practice, if it is used at all?
I've used scripting to place parts using polar coordinates. That's less necessary now that many tools have snap-to-polar grids. At one time we had to use scripting just to get logos onto PCBs with at least one tool!
I would say that generally, over time, tools tend to absorb whatever you would want to do in scripting into their functionality. That is part of what causes tools to bloat over time. And unless you're planning to be a layout professional who mostly works with one set of tools, the percentage of your time you spend using any one tool will be small, so automating tasks is of less benefit than you might think. PSPICE has 'smoke' parameters that can do more-or-less what you are asking for, and has had them for some time. I don't know how widely they are used. I didn't use them when I used PSPICE.
In the PCB-based system world I rarely find it necessary to simulate more than 10% of a design. Unless you go to a great deal of trouble, simulation only tells you what happens typically, and often the models are idealized in other ways (both for simplicity and so simulations run fast). That's particularly important with things like switching power supplies because the simulator has to accurately model fast waveforms over many, many cycles. I don't think we need a simulator to tell us that a 10V capacitor is adequate voltage rating for a 3.3V rail (and unless you use a sophisticated capacitor model, the simulator won't tell you what really happens with bias, aging and temperature effects- and even then you're not likely modelling the layout parasitics). Predicting what happens on a stressed part with self-heating requires modelling the heat loss etc., not easy to get right, and worthy of serious attention if you want system-level reliability.
That said, it does sound like a decently 'cool' academic project. You could probably get an article, a paper or a video out of it, depending on how and what you do.
As time has passed, I have settled on doing my best not to customize my software tools any more than practical, as remembering and maintaining these customizations becomes a growing chore.
That being said, there are exceptions, one being a bit of Python scripting in KiCad to produce BOMs in the exact format my templates expect. This is so far the only case in KiCad where I have deemed it efficient to do so, to get the results that I need.
#
# KiCad BOM Generator Entry:
# python "snip\Electronics\Library\KiCad\Scripts\ner_bom.py" "%I" "%O"
#
# Example python script to generate a BOM from a KiCad generic netlist
#
# Example: Sorted and Grouped CSV BOM
#
"""
@package
Output: Excel (default) or CSV (comma-separated)
Grouped By: Value, Footprint, P1, P2, P3, Type
Sorted By: Ref
Fields: Ref, Quantity, Value, Cmp name, Footprint, Description, Vendor
Command line (default):
python "pathToFile/ner_bom.py" "%I" "%O"
(with variants)
python "pathToFile/ner_bom.py" "%I" "%O" "autolab" "autoscan" "aldm"
If you do not want to sort on 'Value' (as is the default), you may append a list of additional
arguments. A separate BOM spreadsheet will be generated for each argument, with the
components sorted by the argument name. The argument name must exist as a schematic
parameter for every component.
Like:
python "pathToFile/ner_bom.py" "%I" "%O" "autolab" "autoscan" "aldm"
"""
A number of my clients do very similar things: one draws the diagrams with internal part numbers but needs to make BOMs with manufacturer/supplier part numbers, which are chosen from a preference list (ie, internal part 123 has a list of supplier part numbers, the person in charge of production choose which things to buy). A python script does the necessary adaptation.
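The preference-list adaptation described above can be sketched like this; the part numbers and the shape of the preference table are invented for illustration, not taken from any real client's data:

```python
# Hypothetical sketch of the internal-to-supplier part number adaptation:
# each internal number maps to an ordered preference list of supplier part
# numbers, and the BOM is rewritten with the chosen (or first-preferred) one.
preferences = {
    "123": ["MOUSER-ABC-001", "DIGIKEY-XYZ-9"],
    "124": ["DIGIKEY-RES-10K"],
}

def resolve(internal_pn, chosen=None):
    """Pick the buyer's choice if it is on the list, else the first preference."""
    options = preferences[internal_pn]
    return chosen if chosen in options else options[0]

# Rewrite a (ref, internal part number) BOM into supplier part numbers.
bom = [("R1", "124"), ("C3", "123")]
supplier_bom = [(ref, resolve(pn)) for ref, pn in bom]
```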
|
STACK_EXCHANGE
|
In December of 2009 I custom built a new computer. For the longest time, I was having issues out of it where memory wasn't clocking to the level it was supposed to, and simple things like Youtube would cause a lot of stuttering with the video. I would also have complete system crashes when using sound or video editing programs. Eventually, I figured out that it was the motherboard that had an issue from day one. I got the motherboard replaced under warranty, and I haven't had any of those issues since then.
However, I'm having a new issue. I haven't played many games with my system since I've had it. But now I'm playing Guild Wars 2 and X-Com: Enemy Unknown (from Steam). If I have AA turned on pretty much at all, the games seem to crash with a pretty high regularity. If I turn it off, I can get a good few hours between crashes, but they still happen.
Sometimes the games just crash out. Sometimes the entire OS freezes, and I have to reboot the system in order to get everything back up and running again. It's an extremely annoying problem, and nothing I've tried seems to make much of a difference. Sometimes I can get through 4 or 5 hours without a crash. Other times I may have a crash every hour.
While I do have the occasional system crash maybe once a month or so, I don't appear to have any huge problems with anything besides games. So I'm thinking that the problem is linked with the GPU somehow. At idle, I'm running at about 69 Celsius. I only recently started paying attention to the temperature. At the time of my last crash, I reached 88 Celsius.
I'm running a Corsair Enthusiast Series TX950 (CMPSU-950TX) 950W power supply, which from the research I did prior to building the system seemed to indicate would be more than enough for my needs.
The only thing I can think of that could be a potential problem is that my video card BARELY fits into my case. Like it could literally be no bigger and still fit. I'm thinking that because it's in there so tight, it might not be getting proper airflow, and it might be overheating as a result.
The biggest thing that makes me think it's a temperature issue, is that when my system does completely lock up, it sounds like a jet when I start it back up. As if all of the fans are trying to cool things down as fast as possible.
Any ideas as to what might be going on? I'm willing to spend a bit more money to fix things, but as of right now, I'm not really sure which direction to head. More fans? A bigger case? A new video card? I would hate to have to replace my video card, as I'm not sure what the modern day equivalent to my video card is, but it's probably on the edge of my price range.
Sounds like a driver issue to me Saberj.
i have a 5870 and run it @ 900 & 1300MHz and temps do get high, my highest is 85° in GPU-Z but that's not a prob, toasty yeah but not dangerous. I've never had any issues because of it anyway.
I'd get a freeware driver-wiper program, remove the drivers and re-install them.
|
OPCFW_CODE
|
The more I work with PyCharm, the more I like it. Now it’s time to make this IDE more comfortable for you, using some cool features you maybe don’t even know exist, because this IDE is one of the best I’ve used to write Python code!
You can find this feature under the Help menu.
The result of this feature is to show you statistics on the PyCharm features that will help you write better Python code. I’ve never seen a feature like this in any other IDE I’ve used, with any language.
This way you can get tips to improve.
Although you have a keymap available and predefined, maybe you want to modify it to fit your needs (I already told you that investing time in configuring the IDE is one of the best things you can do with PyCharm, and after that, exporting and importing the configuration). You can access it from Settings:
Now it’s time for you to define your favorite shortcuts. There is also a search textbox, which is very useful given all the possibilities available. There is a plugin called Presentation Assistant that shows the key bindings of the things users do!
Some features I recommend activating
There are a few features that are deactivated by default which I would, at the very least, activate. One of them is collecting runtime types while you execute code.
Another feature I would activate is case sensitivity. You can modify the option here:
Another cool feature I recommend you activate is to mark modified files, that is, files that need to be saved:
Another feature disabled by default is changing the font size using the mouse.
Writing good Python code
The Python rules for writing good code are in PEP-8, which you can read here. I already wrote about how to write good Python code, but now you have a very nice help from PyCharm. Look at the right part of the code and you’ll notice marks with different colours; each of them is a problem:
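As a tiny illustration of the kind of rules PEP-8 covers (the example is my own, not taken from PyCharm's inspection list):

```python
# A PEP-8 checker would flag something like:
#     def AddNumbers( x,y ): return x+y
# (CapWords function name, bad spacing, statement on the def line).

# The PEP-8 compliant version: snake_case name, spaces around
# operators and after commas, and the body on its own line.
def add_numbers(x, y):
    return x + y

print(add_numbers(2, 3))  # -> 5
```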
But even more, you can modify (and of course export and import) the options for writing good code. How? Right-click on this zone and you’ll get this window:
Click on Configure highlighting level:
And you modify everything that you want to be noticed in red, yellow, green, ….
As you can notice (as well), you can export (and import) the configuration, so you don’t waste your time!
If you are reading this, you can tell I like PyCharm a lot, and it is the IDE I’m using, mainly because it helps me write better code, more quickly and with more help. The debugger is not as good as PyScripter’s, but I must admit I’m comfortable with it.
To end, I recommend this video on how to debug Python using PyCharm:
Have a nice day and happy coding!
|
OPCFW_CODE
|
A set of classes for reading, writing and manipulating NASA Flexible Image Transport System (FITS) files.
The aim is ultimately to be able to read and write all of the header and data array formats that are set out in the latest FITS standard documentation. At the time of writing, this was FITS v4.0 July 2016 DRAFT.
Initially, we will concentrate on building a FitsReader class and being able to read a Primary HDU containing a single image data array ("SIF", or Single Image FITS format).
Next we'll add the ability to transform the raw data array into a Bitmap so it can be more easily displayed and manipulated.
Then we will add the ability to modify and create new files and write them out to disk.
Once we can both read and write SIF files, we'll add additional data array types and the ability to work with multiple Extensions/HDUs.
Calculate the position in space of the Earth relative to the Sun for a given date and time. Give the answer in both cartesian coordinates (X, Y, Z) and spherical coordinates (Latitude, Longitude and Radius).
Use a reference implementation to verify the results.
- Epoch J2000 is assumed unless otherwise stated.
- We are going to use VSOP87 but may want to use other orbit engines in the future, so we will need to keep things loosely coupled.
Calculate the position in space of Earth using VSOP87 in both spherical and rectangular coordinates:
- Calculate rectangular coordinates
- Calculate spherical coordinates
- Calculate the 6 longitude terms, L0 to L5 (L0 done). Make ComputeL0 general purpose by passing in 'alpha' and the VSOP87 data.
- Compute longitude L in radians as (the sum of the series (Ln * rho^n)) / 100000000.0, for n from 0 to 5.
- Calculate Latitude (B) in a similar way
- Calculate Radius (R) in AU in a similar way
- Orbit Engine: the method is hard-coded for Earth. How do we get this to work independent of the body being computed?
- How to represent the Sun and Earth?
Read the VSOP87 data from a text file instead of having it hard coded. Abstract the loading of data away from the user. Delete the obsolete hard-coded data.
- Download VSOP87 data from the web if needed (maybe?).
- Caching strategy, so that each VSOP87 file is loaded only once.
- Position of planets other than Earth
- Position of other planets relative to Earth (instead of Sun)
- Position of Planets in horizon-based coordinates
This project is licensed under the Tigra Astronomy MIT License. Essentially, "Anyone can do anything at all with the software without restriction, but whatever happens it's not our fault".
|
OPCFW_CODE
|
The data backup is used for secure archiving of a database. This is usually done fully automatically by installed and activated backup procedures. These copy either the entire data stock (full backup), only the data changed since the last full backup (differential backup), or the data changed since the last backup of any kind (incremental backup) to a backup medium.
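The three strategies can be illustrated roughly as follows (function and variable names are hypothetical; real backup tools track changes more carefully than file mtimes):

```python
import os

def files_to_back_up(root, mode, last_full, last_backup):
    """Pick files for a backup run according to the strategy.
    Timestamps are POSIX mtimes; the scheme is illustrative only."""
    selected = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            mtime = os.path.getmtime(path)
            if mode == "full":
                selected.append(path)          # everything
            elif mode == "differential" and mtime > last_full:
                selected.append(path)          # changed since the last full backup
            elif mode == "incremental" and mtime > last_backup:
                selected.append(path)          # changed since the last backup of any kind
    return selected
```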
For use in a database environment, various manufacturers provide extensions to back up different databases. These access the database within a user context to create an image of the database. When using CortexDB this is not necessary; separate licensing and use of a backup account is therefore not required.
Integrated CortexDB backup process¶
A backup function is already integrated in the CortexDB and can be configured via the corresponding parameters in the configuration file of the server ("ctxserver.ini").
The backup mechanisms should therefore only back up the backup directory of the CortexDB. All other directories are to be excluded from the backup. In the event of a failure, it is sufficient if the corresponding backup file is restored in a new environment.
The backup function of the CortexDB ensures a complete database status at the specified times. During backup, the database and all applications running on it can be used further.
The configured number of retained backup files is kept within the backup directory. The filename shows the date and time of creation.
131201020100.cbz 131202020100.cbz 131203020100.cbz
The associated configuration has been set to keep only three files ("MaxBackupCount")
[BACKUP] backuppath=.\backup MaxBackupCount=3
and to run the backup every day at two o'clock at night (0 = Sunday, 1 = Monday, [...], 6 = Saturday).
0=02:00 1=02:00 2=02:00 3=02:00 4=02:00 5=02:00 6=02:00
If the entire database directory is to be backed up, the database should be stopped first. If a backup of the whole directory is made with third-party products while the database is running and user access is active, error-free recovery cannot be guaranteed.
If continuous data backup is required during operation, it is recommended to use the CortexDB online backup server.
Perform a manual backup¶
In addition to the automatic data backup, there is the additional option of creating a manual backup. The corresponding backup file as well as all automatically created backups are stored in the configured directory.
It should be noted that the maximum number of retained backup files still applies. A manual backup may therefore delete an older backup file.
To start a manual backup, use the Remote Admin, where the "Backup" function is available.
Before the data backup is carried out, you are asked about the type of backup. Here it is possible to create a license-free backup that can be read in with any other license. Usually a regular backup should be created here, to be used exclusively with the same license.
An optional extension is the possibility of one (or more) online backup servers. Such an instance is operated in constant synchronization with the productive server and therefore essentially corresponds to a mirrored CortexDB database. It is synchronized via TCP/IP connections and can thus be placed in geographically different locations (e.g. in other fire compartments).
In the event of a fault, the online backup server can take over productive operation and is thus upgraded to the primary system. This is a deliberate manual operation, intended to avoid automatisms making premature system changes (e.g. in the event of a momentary line failure, unscheduled and irregular changes for the parties involved, other systems, etc.).
The synchronization between the productive server and the backup server is one-way. Changes to the database are therefore only permitted on the production system, not on the backup server. For failure safety and/or distributed applications, the so-called matching server can be used for this purpose.
CortexDB synchronization server¶
For distributed database operation, the so-called "match server" functionality is available. With its help, a large number of synchronized servers can be operated. Optionally, a rights management system can be activated, so that only selected databases are transferred to dedicated databases. The central database server thus retains control and the entire database; the synchronized servers receive only selected information.
A matching server can be operated independently and without a constant connection to the main server ("master"). Only after establishing an online connection is new information transmitted and synchronized between the two servers.
Recovery of a database¶
The Cortex database server has several automatic backup methods to ensure stability. However, irregular states of the operating system or hardware can cause errors that affect a database. Thanks to the different functions, it is possible to make corrections and to restore a complete backup only when critically needed.
In addition to the defined backup configuration, the database server internally stores all datasets in contiguous blocks on the storage medium. Changes to the datasets are stored in a so-called "chd" file, whereby all changes of a single day are encoded in one file ("transaction log"). These files are located in the "ctxchh" subdirectory ("cortexdb change history") of the server's configured data directory (see the "basepath" parameter in the [SETTINGS] configuration block).
This change information is stored in such a way that for each changed field content the previous and the new content are recorded, and every dataset change receives its own "hash". In addition to restoring a backup, these files can therefore be read to restore the state of the database at any point in time, e.g. the database just before a system failure (differential backup).
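The roll-forward idea can be sketched roughly as follows (an illustration only; the record layout and function names are hypothetical, not the actual CortexDB "chd" format):

```python
# Hypothetical change record: (timestamp, dataset_id, field, old_value, new_value)
def roll_forward(snapshot, changes, until):
    """Replay change-history records onto a restored snapshot, stopping
    at the requested point in time."""
    state = {k: dict(v) for k, v in snapshot.items()}  # leave the snapshot intact
    for ts, dataset, field, old, new in sorted(changes):
        if ts > until:
            break
        # Sanity check mirroring the stored "previous content".
        assert state.get(dataset, {}).get(field) == old
        state.setdefault(dataset, {})[field] = new
    return state
```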
Hardware failures or other failures may cause inconsistent states within a database. For this case, the "reorganization" function was integrated into the CortexDB.
In a reorganization, all management information is regenerated from the stored datasets. As long as the datasets themselves are consistent, recovery is guaranteed. If partial areas are destroyed, however, all remaining datasets in which no errors were found can be correctly restored.
Manual restore ("Restore") of a database¶
The recovery of a backup file is done using the Remote Admin, which offers, through the "Restore" function, the possibility to read in a backup file completely. All settings, users and extensions contained in it ("php projects") are restored.
This backup file must be located in the configured backup directory of the server (see the "backuppath" parameter in the [BACKUP] configuration block).
Restoration overwrites all information, configurations, licenses, user settings, and other settings. For license-free backup files it is necessary to re-import the license (lic file).
|
OPCFW_CODE
|
Problem with streams?
I'm trying to execute programs from node through the child process API. Every program I tried ran as expected, with the exception of imagemin. Here's the minimal code that displays my problem:
var spawn = require('child_process').spawn;
var process = spawn( 'node_modules/.bin/imagemin', ['Assets/Images/*', 'Public/asset/'], {
cwd: '/var/folders/xf/_hq9dzv925d_zj_fv06n4jcm0000gn/T/hubot.2RF7YIaK'
} );
process.on( 'close', function( code ) {
console.log( 'done: ' + code );
} );
The directory Public/asset/ is not created, and no file is being transformed. Using this code on the command line, which should do the same, works as expected:
cd /var/folders/xf/_hq9dzv925d_zj_fv06n4jcm0000gn/T/hubot.2RF7YIaK
node_modules/.bin/imagemin Assets/Images/* Public/asset/
I did try to fix it by ignoring the stdin stream (with stdio: [ 'ignore', 'pipe', 'pipe' ]) and the files are still not generated, but the script does not run forever. It just ends with return code 0, and nothing happens.
Any idea what might be happening here?
Thank you!
Tobias
Set stdio: 'inherit'.
Thank you very much @kevva. inherit does not really work for me, as I need to have live access to stderr and stdout. When I use the following code, the raw image data will be in stdout:
var spawn = require('child_process').spawn;
var process = spawn( 'node_modules/.bin/imagemin', ['Assets/Images/*', 'Public/asset/'], {
cwd: '/var/folders/xf/_hq9dzv925d_zj_fv06n4jcm0000gn/T/hubot.2RF7YIaK',
stdio: [ 'ignore', 'pipe', 'pipe' ]
} );
process.on( 'close', function( code ) {
console.log( 'done: ' + code );
} );
process.stdout.on( 'data', function( data ) {
console.log( 'data: ' + data );
} );
When I execute the command directly inside the terminal there is no output. Background of all this is that I would like to live-stream the output of command line calls. Having the raw image data sent to a browser is kind of suboptimal.
Yeah, it's because of https://github.com/imagemin/imagemin/blob/master/cli.js#L64-L66. @sindresorhus, do you think we should have options for this? I feel like this is an edge case. Would be easier if you'd just use the API.
Not sure if this is an edge case, but for example a self-hosted CI tool I tried to use also had problems with this. I would assume e.g. Travis CI would also have problems with this, but I don't have a use case to test with them. I'll try to test it somehow.
It's an edge case to use CLI tools when there's a programmatic API available.
Just for the record, travis does not seem to have a problem with this behavior: https://travis-ci.org/tobiastom/imagemin-test#L132-L134
I guess it treats that script as isTTY. If you're looking in https://github.com/imagemin/imagemin/blob/master/cli.js we're making some assumption depending on if stdin and stdout are isTTY.
I solved it for me now with pty.js, which comes with a lot of different kind of problems, but it's working for me.
@kevva Just for the record, I have the same issue with pkgrun, which seems to use the same spawn method.
Here's what I tried with npm scripts:
"imagemin": "pkgrun 'imagemin:*'",
"imagemin:images": "imagemin Assets/Images/*.* Public/assets/",
"imagemin:svg": "imagemin Assets/SVG/*.* Build/SVG",
The script will never finish the imagemin:images task.
This does not seem to be an edge case for me. I just tried parallelshell and it has the same problem.
Here are the scripts to reproduce it:
"imagemin": "parallelshell 'nom run imagemin:images' 'nom run imagemin:svg'",
"imagemin:images": "imagemin Assets/Images/*.* Public/assets/",
"imagemin:svg": "imagemin Assets/SVG/*.* Build/SVG",
|
GITHUB_ARCHIVE
|
With a view of all Washington DC, we were definitely lucky enough to be at the right place at the right time. We departed northbound on runway 01 which calls for …
Don’t miss your chance to win a trip to Los Angeles and dinner with the TYT hosts at https://www.TYT.com/dinnerinLA
Trump is bragging about himself. What else is new? Ana Kasparian and John Iadarola, hosts of The Young Turks, break it down. MORE TYT: https://tyt.com/trial
Read more here:
"President Donald Trump said Wednesday this year’s Independence Day will feature a fireworks display atop Mount Rushmore, an event he would 'try' to attend.
In May 2019, South Dakota Republican Gov. Kristi Noem initially announced that the state and the Department of Interior had struck a deal to have the fireworks return to Mt. Rushmore beginning with the 2020 Independence Day celebration. The fireworks had been discontinued in 2009 due to concerns of a wildfire hazard in forests adjacent to the monument. Noem said advancements in pyrotechnics and a strengthened forest led to the decision to have the fireworks return to the site.
Pine beetle infestations in nearby forests were the cause of concern when the fireworks were discontinued. These infestations can kill trees, which increases their flammability risk and, in turn, poses a potential wildfire hazard. Fireworks increased the risk that a fire would ignite.
Some studies suggest that climate change is causing some types of pine beetles to reproduce more rapidly and influencing their growth and development to make them more lethal."
Hosts: Ana Kasparian, John Iadarola
Cast: Ana Kasparian, John Iadarola
The Largest Online News Show in the World. Hosted by Cenk Uygur and Ana Kasparian. LIVE STREAMING weekdays 6-8pm ET. http://tyt.com/live
Subscribe to The Young Turks on YouTube: http://youtube.com/subscription_center?add_user=theyoungturks
TYT on Facebook: http://facebook.com/theyoungturks
TYT on Twitter: http://twitter.com/theyoungturks
TYT on Instagram: http://instagram.com/theyoungturks
Donate to TYT
Download audio and video of the full two-hour show on-demand + the members-only postgame show by becoming a member at http://tyt.com/join/. Your membership supports the day to day operations and is vital for our continued success and growth.
Gift membership: http://tyt.com/gift
Producer, Senior Producer, and Executive Producer membership: http://go.tyt.com/producer
Young Turk (n), 1. Young progressive or insurgent member of an institution, movement, or political party. 2. A young person who rebels against authority or societal expectations. (American Heritage Dictionary)
#TYT #TheYoungTurks #Trump
Dear All Beloved Subscribers and Viewers, How are you?
Today Adorable Macaque would like to show you "Baby Handy! Newly Monkey Handy Is Very Good, Baby is now able to walk around his mom. Daily life of Adorable Macaque." Please enjoy your valuable time with “Adorable Macaque” here, and don’t forget to subscribe to our YouTube Channel. Please visit our video playlist if you would love to see more great videos. Thank you very much for liking and sharing my channel.
For more information please contact my Facebook
From Adorable Macaque
American automakers take their trucks extremely seriously. And the ongoing battles for dominance among the Detroit three are often called the "truck wars". Third-place challenger Ram has made waves in recent years, snagging major industry awards and stealing market share from rivals. Watch this video to find out how this upstart is now posing a more serious threat to rivals than ever before.
Ram has gone from a third-place also-ran in America’s truck wars to a serious challenger.
The Ram Heavy Duty pickup snatched industry publication MotorTrend’s 2020 Truck of the Year award on Tuesday, giving Fiat Chrysler’s pickup brand yet another award to add to its growing trophy collection. The smaller full-size Ram 1500 pickup won the same award for 2019.
It is a dramatic rise for a brand that many in the industry thought Fiat Chrysler was mistaken in creating in the first place. After the Italian automaker Fiat merged with Chrysler in 2009, management decided to spin Ram out of Dodge, allowing the former to focus on trucks and Dodge to focus on performance cars and a few other models with solid customer bases.
The move seemed dubious at a time as cross-town rival General Motors ditched some of its own brands. But Ram has roughly tripled sales over the last decade and appears to be taking market share away from rivals.
It has done so by giving up on going toe-to-toe with Ford and GM on towing and capability numbers and instead offering buyers a solid all-around truck with a plush interior and a lot of highly visible technology in the cabin. The move might have seemed like a risk of its own: truck buyers have traditionally been considered practical customers who often purchase their vehicles for work or other specific uses.
But the bet seems to have paid off both in critical praise and growing market share. Now analysts say GM and Ford are taking notice and may be making similar tweaks to their own lineups.
» Subscribe to CNBC: https://cnb.cx/SubscribeCNBC
» Subscribe to CNBC TV: https://cnb.cx/SubscribeCNBCtelevision
» Subscribe to CNBC Classic: https://cnb.cx/SubscribeCNBCclassic
About CNBC: From "Wall Street" to "Main Street" to award-winning original documentaries and reality TV series, CNBC has you covered. Experience special sneak peeks of your favorite shows, exclusive video and more.
Connect with CNBC News Online
Get the latest news: https://www.cnbc.com/
Follow CNBC on LinkedIn: https://cnb.cx/LinkedInCNBC
Follow CNBC News on Facebook: https://cnb.cx/LikeCNBC
Follow CNBC News on Twitter: https://cnb.cx/FollowCNBC
Follow CNBC News on Instagram: https://cnb.cx/InstagramCNBC
Why GM And Ford Are Worried About RAM
Explaining Pig God’s secret and what it could possibly be.
✔️ PATREON: https://www.patreon.com/zhoniin
✔️Get Anime Merch here: http://bit.ly/ZhoniinAnimeMerch
– Use code meme for an Extra 5% discount ($49+)!
– Use code memes for an Extra 10% discount ($99+)!
✔️ DONATE: https://www.streamlabs.com/zhoniin
One punch that like button, and please subscribe.
Pig God stepping on Goku art: https://www.reddit.com/r/OnePunchMan/comments/bl1r0u/no_spoilers_goku_overwhelmed_by_pig_god_during/
Pig God mouth 3d art: https://twitter.com/TheGoldenSmurf
Beam cannon effect: https://www.youtube.com/watch?v=_7ExxC9N3Jw
Black Hole effect: https://www.youtube.com/watch?v=wauInlmhuPs
Monster Association arc art: https://www.reddit.com/r/OnePunchMan/comments/ciz87x/drew_a_poster_for_monster_association_arc/
Blast color: https://www.reddit.com/r/OnePunchMan/comments/7n3yak/some_of_you_suggested_i_should_color_another_page/
Beat link: https://www.youtube.com/watch?v=2TRwxg-6Guo
Copyright Disclaimer: Under Section 107 of the Copyright Act 1976, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Fair use is a use permitted by copyright statute that might otherwise be infringing. Non-profit, educational or personal use tips the balance in favor of fair use.
|
OPCFW_CODE
|
Breakpoints in Visual Studio Code not hit when debugging mocha tests
I'm using Mocha (and Chai) for my unit tests for a NodeJS module and want to debug it in Visual Studio code. I have a TypeScript file in the test subfolder with some tests. VScode generates the .js and .map file in the out dir (via tsc watch mode task). My tsconfig.json file contains these settings:
{
"compilerOptions": {
"compileOnSave": true,
"module": "commonjs",
"target": "es6",
"outDir": "out",
"removeComments": true,
"noImplicitAny": true,
"sourceMap": true,
"inlineSources": true,
"isolatedModules": false,
"allowSyntheticDefaultImports": true,
"experimentalDecorators": true
},
"include": [
"src/**/*", "parser/**/*", "test/**/*"
],
"exclude": [
"node_modules",
".vscode-test"
]
}
and the out dir contains 3 subdirs for the 3 includes. All fine so far.
I can run my tests using this command:
mocha --compilers ts:ts-node/register,tsx:ts-node/register
outside of vscode. Then I ran this code with the --debug-brk switch and attached vscode to it. This works, but no breakpoint is hit. The configuration in launch.json for that is:
{
"name": "Attach",
"type": "node",
"request": "attach",
"port": 5858,
"address": "localhost",
"restart": false,
"sourceMaps": true,
"outDir": null,
"localRoot": "${workspaceRoot}",
"remoteRoot": null
}
Ideally, I'd like to have a run config so that I don't need to run mocha manually. With these settings I can at least run the tests:
{
"name": "Mocha",
"type": "node",
"request": "launch",
"cwd": "${workspaceRoot}",
"preLaunchTask": "tsc",
"program": "${workspaceRoot}/node_modules/mocha/bin/_mocha",
"args": [ "--no-timeouts", "--colors", "${workspaceRoot}/out/test/**/*.js" ],
"stopOnEntry": true,
"runtimeExecutable": null,
"env": {
"NODE_ENV": "testing"
},
"sourceMaps": true
}
but still, no breakpoint is hit.
What is required to make at least one of the 2 scenarios work?
Update: meanwhile I found by accident that breakpoints start working when you add a debugger; command somewhere in the test code and set at least one fresh breakpoint after it stopped on debugger;. After that all following breakpoints in this single file work as expected. Looks almost like a bug to me.
I'm in the same boat. Adding a debugger; either in the test file or the file with the breakpoint didn't stop execution when the line with the breakpoint executed. Assuming this is a bug, who would own it?
Probably not a bug, I built a minimal test case and it worked :/ https://github.com/givanse/vscode-debug-mocha-tests Some other config file or dependency must be messing things up.
Well, you wrote a JS test, while I'm using TypeScript. That might be part of the problem.
I've got the same problem with an express app, not hitting the breakpoint unless using 'debugger;'. Using --nolazy to run the app which I had hoped might fix it but still having the issue.
Using "protocol": "inspector", in the launch options helped me to continue for a while, even though this had the annoying side effect that the test process never stopped after everything was executed. I had to kill the task after each run. So I though i'd give it another try to find the problem and I succeeded. The solution is simple: add the outfiles option to your launch options, otherwise vscode will look for maps in the TS source folder. By adding:
"outFiles": [
"${workspaceRoot}/out/**/*.js"
],
everything started to work nicely. It would be so helpful if vscode printed a warning that it cannot find the source maps because of this missing setting.
situation
using Jasmine unit test (not Mocha)
when I click the Debug button on a unit test
-> it doesn't stop on the breakpoint, just runs to the end.
solution (in short)
(this may not apply to everyone's case)
it could be the port 9229 is taken by other process on your computer
try debugging in terminal, instead of in Vscode, and see what happens (I tried it in powershell)
eg: node --inspect-brk /usr/local/lib/node_modules/jasmine/bin/jasmine.js ~/exercism/javascript/leap/leap.spec.js
-> then the error shows up: Starting inspector on <IP_ADDRESS>:9229 failed: permission denied (in my case)
find the process that is using the port & close it (lots online discussion on how to do this)
run netstat -ano | findstr 9229
or, you can find it in the Task Manager > Resource Monitor > Network > check your ports
run net stop hns && net start hns to restart your Host Network Service
I found this especially useful when you cannot find the port is taken by which process -- maybe due to that is a dynamic port
after resetting the port -> click the debug button on unit test -> should stop at breakpoint
---
situation & some comments (minor)
some online posts said: add the config in launch.json. Doesn't help -- the default launch has no problem
the default config uses port 9229
some online posts said: use a Chrome config to debug. Feels unnecessary & it's using the browser, not nodejs
output panel in vscode Test Explorer shows nothing to indicate the error
this port issue can be due to some software that messes around with dynamic ports (eg: sometimes it could be related to software that changes your system proxy / IP).
(this once messed up my other software, eg: ActivityWatch)
reference (minor)
[]
"port": 9229,
<>
https://github.com/hbenl/vscode-jasmine-test-adapter
[]
1. run node --inspect-brk node_modules/mocha/bin/mocha test.js:
<>
https://youtrack.jetbrains.com/issue/WEB-43747
[]
I managed to start the debugger by running node --inspect-brk with the jasmine.js file called by Jasmine's CLI:
Kurts-MacBook-Pro:bin kurtpeek$ node --inspect-brk /usr/local/lib/node_modules/jasmine/bin/jasmine.js ~/exercism/javascript/leap/leap.spec.js
<>
How to drop into a debugger in a Jasmine test?
[]
Secondly, check if the port is in the excludedportrange by command "netsh int ipv4 show excludedportrange protocol=tcp".
Then check the dynamicport range by command "netsh int ipv4 show dynamicport tcp".
Set the start of dynamicport by command "netsh int ipv4 set dynamicport tcp start=49152 num=16384"
<>
https://github.com/eggjs/egg/issues/2432
[]
net stop hns && net start hns
<>
An attempt was made to access a socket in a way forbidden by its access permissions. Why?
[]
The port 9229 is the default debug port of the --inspect and --inspect-brk options.
<>
https://code.visualstudio.com/docs/nodejs/nodejs-debugging
|
STACK_EXCHANGE
|
import unittest
import os
import shutil
import yaml
from pyeurovoc import EuroVocBERT

class MainTests(unittest.TestCase):
    def test_eurovoc(self):
        # Clear the model cache so every model is re-downloaded during the test.
        pyeurovoc_path = os.path.join(os.path.expanduser("~"), ".cache", "pyeurovoc")
        if os.path.exists(pyeurovoc_path):
            print(f"Removing every model from .cache: {pyeurovoc_path}")
            shutil.rmtree(pyeurovoc_path)
        with open(os.path.join("..", "configs", "models.yml"), "r") as yml_file:
            # safe_load avoids the deprecated Loader-less yaml.load call.
            dict_models = yaml.safe_load(yml_file)
        for lang in dict_models:
            print("-" * 100)
            print(f"Testing for language: {lang}")
            model = EuroVocBERT(lang)
            outputs = model("This is a test text.")
            self.assertIsInstance(outputs, dict)
            self.assertEqual(len(outputs), 6)

if __name__ == "__main__":
    unittest.main()
|
STACK_EDU
|
|| UDM in PPs
|I have looked to find the method of having my UDM show up in the PPs. My PPs have UDM n/a. Can some one steer me in the right direction. Thank You. Chris|
|I ran into the same thing when the pps generator first came out (before I switched over to sql mode.) |
Jeff will probably jump in and correct me if I'm wrong - but it's gotta be 1 of 2 things:
1. UDM display in pps is enabled in the program for sql mode only. Not enabled in playlist file mode.
2. If running in sql mode and not seeing udms in pps, then it's probably a folder conflict. Double check current folder in the dfm. Reload cards, recalc races if necessary. But as long as you're running in sql mode, and have run a calc races on cards loaded into the program that are sitting in the current checked folder in the dfm: You'll see a udm list for each horse in your pps - provided you actually have some active sql udms flagging horses.
~Edited by: Charlie James on: 3/12/2011 at: 11:18:54 PM~
|From the PPs Generator Help Doc:|
"Seeing JCapper Numbers on Past Performance Data Reports --end quote
Whenever the most recent Calc Races that you have run in SQL Mode includes the same race card as the current loaded data file, the module will automatically show a table just below the rider, trainer, and horse "stat boxes" and just above the horse's "running lines" that displays JCapper numbers for the individual horse. You will also see a list of any active SQL UDMs selecting the current horse.
The JCapper numbers table that the Past Performance Generator module displays on its reports is the same table of numbers displayed on a SQL Mode HTML Report after a Calc Races.
When you are first starting out, the JCapper numbers table is driven by the default SQL Mode Report Layout.
However, you should know that the SQL Mode report layout is fully customizable by the user.
On the System Settings/System Definitions Screen, you'll find a button labeled SQL Mode Setup Wizard... A complete set of tools behind that button enables you to control everything that you see on your reports: What factors you see, the positions (or slot numbers) on the report where they are displayed, how the numbers are displayed... type of font they are displayed in, the number of decimal places, the right-left alignment of the numbers in each column, and whether the individual factors are displayed as whole numbers, rank only, or names. You can also define the headers used to describe the numbers displayed.
In JCapper, this functionality is called the User's Custom Report Layout... and the definitions for it are stored in the following file: c:\JCapper\Exe\JCapper2.mdb.
Hint: Whenever you edit the report layout it is strongly recommended that you make a backup copy of this file first (just in case.) "
A few bullet points:
- The seeing JCapper Numbers on Past Performance Data Reports feature - which includes the UDM list for each horse - is enabled in Sql Mode only. It is not enabled (was never intended to be enabled) in Playlist File Mode. (In my opinion, the number of programming hrs required to make it happen for Playlist File Mode was simply too steep.)
- If you are running in Sql Mode and are not seeing the numbers table at all, then you most likely have a folder conflict.
Hint: Double check/reset the current default folder in the DFM. Reload cards/rerun Calc Races. And then relaunch the PPs Generator.
- If you are running in Sql Mode, are seeing the numbers table for each horse, but have "N.A." as the UDM list for each horse: Chances are you have no active Sql UDMs flagging horses.
Hint: Create a simple Sql Mode UDM and rerun a Calc Races, followed by relaunching the PPs Generator. Something like the following expression works for purposes of creating a test Sql UDM:
SELECT * FROM STARTERHISTORY WHERE RANKJPR = 1
|Thank You Charlie for your reply. I have checked that everything is pointed to my default folder. My folder is C:\2011\, I think this might be my problem. After I create or change a SQL UDM, and everything is saved I am getting this right after I click out of the modify UDM screen...Can not create a log entry of the UDMXML history table UDM name is missing...This is after I have saved the UDM to the UDM History table by selecting the time and saving. I will continue my search. Thank You.Chris|
|Thanks Jeff. I will have my UDM flagging in the HTML view but n/a on the same race card in the PPs. All of the numbers are coming up in the PPs just n/a on the UDM area. Thanks Chris|
|OK Finally got it after reading Jeff's very easy to understand solution a few times. I am still very infant on Jcapper. It takes me a few times more than the average person.I need to run the viewer in HTML one and run the live play and the pp's out of the HTML one tabs. When I was just running a single race calc and opening the live play and PP's through #2 tab they do not work. This program is rocking now. Chris|
|This thread bumped per email request...|
"Jeff, How do I go about displaying the numbers table and UDM List as part of my JCapper PPs? Thx in advance,"--end quote
First, you need to be operating the program in SQL Mode in order to display the numbers table and UDM List as part of your JCapper PPs.
To display the numbers table and UDM List for each horse as part of JCapper Past Performances:
1. Persist the current active data folder in the DFM.
2. Load one or more card files into the program using the DFM Card Loader.
3. Get scratches and changes using the XML Button in Scratch Bot.
4. Run a SQL Calc Races on the current active data folder.
5. Click the PPs button to launch the Past Performance Generator.
6. From inside the Past Performance Generator, select a track code from the Loaded Card Files drop down. From there, select a race from the race number drop down to generate past performances for that race.
That's It! (You should see the numbers table and UDM List as part of your JCapper Past Performances.)
Note: The track code you select must match one of the loaded card files that was part of the SQL Calc Races run in step 4 above. (If you are not seeing the numbers table the first thing you should suspect is a folder conflict and/or the track code selected from the drop down not being on the same folder where you ran the SQL Calc Races or the track code not being part of the SQL Calc Races run in step 4 above.)
Note: You must have at least one active SQL UDM that "fires" in order to see UDM Names on the UDM List in your JCapper Past Performances.
If you are seeing the numbers table in your PPs but see N/A for every horse as your UDM List that means you don't have any active SQL UDMs - either that or none of the horses in the race you are looking at were flagged by any of your active SQL UDMs.
If this is the case, try creating a new/simple active UDM that will flag at least one horse in each race... something like SELECT * FROM STARTERHISTORY WHERE RANKUPR = 1 should do the trick (and from there retry the above steps.)
~Edited by: jeff on: 4/30/2014 at: 3:23:11 AM~
"Hi Jeff,--end quote
Sorry, this thread does not solve my problem. Can you please explain how there can be a "folder conflict"? I followed the instructions exactly and the udm's still do not appear on the past performances. I am 100% sure it's pointed to the right folder. Maybe you can explain to me exactly which folder it needs to point to in order to have the udm's show up? I pretty much follow your guidelines about having quarterly folders and point the default folder to the current quarter."
Let's try a more comprehensive approach...
I've started writing a web tutorial complete with screenshots and notes outlining exact steps (that if followed) will always result in both a numbers table and a UDM List populated with names of SQL UDMs flagging horses in your JCapper Past Performances.
Here's a link to (the first draft) of the web tutorial:
Bullet points from the web tutorial itself:
"Troubleshooting/Notes: --end quote
• A "folder conflict" will prevent the numbers table from displaying in PPs when you are operating the program in SQL Mode. (So will operating the program in Playlist File Mode.)
• If you are operating the program in SQL Mode but you are not seeing a numbers table embedded in the PPs for each horse: You should assume you have a "folder conflict."
• Until you have a better understanding of how things work, try following the above steps (exactly) to avoid introducing a "folder conflict" into things. (Once you are getting a numbers table in your PPs that tells you that you no longer have a "folder conflict.")
• If you are getting N/A instead of names for SQL UDMs in your UDM List, you should assume one of the following situations as the cause:
• a. You do not have any active SQL UDMS. Hint: You'll need to create your own SQL UDMs when you make the switch from Playlist File Mode to SQL Mode. (There aren't any active SQL UDMs in the initial program download package.)
• b. None of your active SQL UDMs are flagging horses in the race you are looking at.
• If you are getting N/A instead of names for SQL UDMs in your UDM List, try the following suggestion:
Create a simple SQL UDM where the objective behind the SQL Expression driving the UDM is to merely flag at least one horse in every race. The following SQL Expression (or one like it) can be used to meet this objective:
SELECT * FROM STARTERHISTORY WHERE RANKUPR = 1
• Once you are certain that you have at least one active SQL UDM that should be flagging horses in every race, retry the above steps. "
Give the basic operating instructions in the web tutorial a try and see if something in the web tutorial itself doesn't clear things up for you.
~Edited by: jeff on: 4/30/2014 at: 5:09:25 PM~
|
OPCFW_CODE
|
How long have you worked at Push?
It’s been more than a year that I have worked at Push. Before working at Push, I was studying for a Master of Science at the University of Saskatchewan.
Why did you become a developer?
Surprisingly, I used to hate programming! It was an absolute nightmare for me to write a couple of lines of code in QBasic for my high school assignment. On the contrary, I was interested in the graphic design and animations that are usually used in front-ends! When I finished high school, I started my Bachelor of Science in software engineering, and this was the first time that I experienced the pleasant feeling of a complete application of my own, in the C++ language. During the four years of my Bachelor study, I was introduced to different programming languages and technologies. My interest in programming and graphic design motivated me to write a book with my friend about designing web-based multimedia using Microsoft .Net technologies.
Following my interest in programming, I started my first job in my hometown, Tehran, and after one year I established my own company with 6 employees. Working in industry proved to me that I still needed to learn a lot and that my journey should not end at that point. Hence, I decided to continue with Master studies in Computer Science in Canada. During my studies at the University of Saskatchewan on Human-Computer Interaction, I was introduced to new fields of computer science such as smartphone programming, sensor fusion software, and interface design for small-screen devices. In the end, I decided on being a developer as my career, because implementing different applications and interfaces lets me feel that I am re-defining the connection between human and computer. It feels unbelievably great when you sketch up your interface, implement it from scratch, and see how people react to your application. It is art!
What type(s) of development do you specialize in?
Before Push I worked in many different areas, from programming Cisco routers (IOS) to web programming using ASP .Net and C#. In general, I specialized in database programming and business-layer programming in enterprise-scale applications. But in the last three years, what I have mostly been interested in is Android programming and interface design.
What is your favourite thing about working at Push?
Push is a wonderful place to work, not only because you are working on what you like the most, but also because you are working with amazing people who are absolute experts, and who are kind and friendly. When I started my work at Push, I was an intern student in my third year of studies. Throughout the past year I have gained a lot of experience here and found cool friends that don’t hesitate to share their experience with you when you need them.
What is your favourite thing to do in your spare time?
Two things and only two things!
1- Watching movies
2- Browsing other mobile or web interfaces
Here are some of my blogs:
|
OPCFW_CODE
|
ServiceStack logging request body under load issue?
Switched on request logging of body and in development it works fine. Testing now under load and getting error in my log4net logs.
ERROR 17-10-2019 14:34:44 ServiceStack.ServiceStackHost [50] - ServiceBase<TRequest>::Service Exception System.ObjectDisposedException: Cannot access a disposed object.
Object name: 'The stream with Id eacfc65d-dcfc-45dd-906b-ddbdb9dd025b and Tag is disposed.'.
at ServiceStack.Text.RecyclableMemoryStream.CheckDisposed() in C:\BuildAgent\work\912418dcce86a188\src\ServiceStack.Text\RecyclableMemoryStream.cs:line 1393
at ServiceStack.Text.RecyclableMemoryStream.set_Position(Int64 value) in C:\BuildAgent\work\912418dcce86a188\src\ServiceStack.Text\RecyclableMemoryStream.cs:line 1017
at ServiceStack.StreamExtensions.ReadToEnd(MemoryStream ms, Encoding encoding) in C:\BuildAgent\work\912418dcce86a188\src\ServiceStack.Text\StreamExtensions.cs:line 363
at ServiceStack.StreamExtensions.ReadToEnd(Stream stream, Encoding encoding) in C:\BuildAgent\work\912418dcce86a188\src\ServiceStack.Text\StreamExtensions.cs:line 442
at ServiceStack.Host.NetCore.NetCoreRequest.GetRawBody() in C:\BuildAgent\work\3481147c480f4a2f\src\ServiceStack\Host\NetCore\NetCoreRequest.cs:line 207
at RvRequestLogger.DbRequestLogger.CreateEntry(IRequest request, Object requestDto, Object response, TimeSpan requestDuration, Type requestType) in /opt/atlassian/pipelines/agent/build/RvRequestLogger/DbRequestLogger.cs:line 129
at RvRequestLogger.DbRequestLogger.Log(IRequest request, Object requestDto, Object response, TimeSpan elapsed) in /opt/atlassian/pipelines/agent/build/RvRequestLogger/DbRequestLogger.cs:line 47
at ServiceStack.HttpExtensions.EndHttpHandlerRequestAsync(IResponse httpRes, Boolean skipHeaders, Boolean skipClose, Func`2 afterHeaders) in C:\BuildAgent\work\3481147c480f4a2f\src\ServiceStack\HttpExtensions.cs:line 121
at ServiceStack.HttpResponseExtensionsInternal.WriteToResponse(IResponse response, Object result, StreamSerializerDelegateAsync defaultAction, IRequest request, Byte[] bodyPrefix, Byte[] bodySuffix, CancellationToken token) in C:\BuildAgent\work\3481147c480f4a2f\src\ServiceStack\HttpResponseExtensionsInternal.cs:line 364
at ServiceStack.Validation.ValidationFilters.RequestFilterAsync(IRequest req, IResponse res, Object requestDto, Boolean treatInfoAndWarningsAsErrors) in C:\BuildAgent\work\3481147c480f4a2f\src\ServiceStack\Validation\ValidationFilters.cs:line 65
Now my DbRequestLogger line 129 is doing this, GetRawBody;
if (EnableRequestBodyTracking)
{
#if NETSTANDARD2_0
// https://forums.servicestack.net/t/unexpected-end-of-stream-when-uploading-to-aspnet-core/6478/6
if (!request.ContentType.MatchesContentType(MimeTypes.MultiPartFormData))
{
entry.RequestBody = request.GetRawBody();
}
#else
entry.RequestBody = request.GetRawBody();
#endif
}
This might be related to this post https://forums.servicestack.net/t/unexpected-end-of-stream-when-uploading-to-aspnet-core/6478/8
Any ideas? Seems the request has passed us by at this point.
The Exception indicates the BufferedStream has already been disposed. If you can provide a stand-alone repro I can investigate; otherwise you can check whether you can read from the buffered stream ((NetCoreRequest)request).BufferedStream.CanRead in your logger to prevent the Exception and still log the rest of the request.
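The same defensive pattern, sketched here in Python purely for illustration (the actual check in the answer is the C# ((NetCoreRequest)request).BufferedStream.CanRead): test whether the stream is still readable before touching it, rather than letting the disposed stream throw:

```python
import io

def read_body_safely(stream):
    """Return the stream's contents, or None if it was already closed."""
    # Guard first: calling readable()/read() on a closed stream raises,
    # the Python analogue of the ObjectDisposedException in the log above.
    if stream.closed or not stream.readable():
        return None
    stream.seek(0)
    return stream.read().decode("utf-8")

body = io.BytesIO(b"request body")
print(read_body_safely(body))  # prints "request body"
body.close()
print(read_body_safely(body))  # prints "None"
```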
Perfect answer as always.
Some errors in the log. ERROR 17-10-2019 20:42:44 ServiceStack.OrmLite.OrmLiteUtils [229] - Index was outside the bounds of the array. System.IndexOutOfRangeException: Index was outside the bounds of the array.
at ServiceStack.Text.Jsv.JsvTypeSerializer.EatMapKey(ReadOnlySpan`1 value, Int32& i) in C:\BuildAgent\work\912418dcce86a188\src\ServiceStack.Text\Jsv\JsvTypeSerializer.cs:line 296
at ....
can happen if serialization format is invalid, if you provide a repro I can take a look.
Can't easily provide a repro, but something is causing the request logger to fail and not log each request. The bottom error is at ServiceStack.OrmLite.OrmLiteWriteCommandExtensions.PopulateWithSqlReader[T](T objWithProperties, IOrmLiteDialectProvider dialectProvider, IDataReader reader, Tuple`3[] indexCache, Object[] values) in C:\BuildAgent\work\27e4cc16641be8c0\src\ServiceStack.OrmLite\OrmLiteWriteCommandExtensions.cs:line 364
Switched in the standard ServiceStack CsvLogger instead. All good.
|
STACK_EXCHANGE
|
In chapter 1 we stressed that computer science deals with imperative (how to) knowledge, whereas mathematics deals with declarative (what is) knowledge. Indeed, programming languages require that the programmer express knowledge in a form that indicates the step-by-step methods for solving particular problems. On the other hand, high-level languages provide, as part of the language implementation, a substantial amount of methodological knowledge that frees the user from concern with numerous details of how a specified computation will progress.
Most programming languages, including Lisp, are organized around computing the values of mathematical functions. Expression-oriented languages (such as Lisp, Fortran, and Algol) capitalize on the ``pun'' that an expression that describes the value of a function may also be interpreted as a means of computing that value. Because of this, most programming languages are strongly biased toward unidirectional computations (computations with well-defined inputs and outputs). There are, however, radically different programming languages that relax this bias. We saw one such example in section , where the objects of computation were arithmetic constraints. In a constraint system the direction and the order of computation are not so well specified; in carrying out a computation the system must therefore provide more detailed ``how to'' knowledge than would be the case with an ordinary arithmetic computation. This does not mean, however, that the user is released altogether from the responsibility of providing imperative knowledge. There are many constraint networks that implement the same set of constraints, and the user must choose from the set of mathematically equivalent networks a suitable network to specify a particular computation.
The nondeterministic program evaluator of section also moves away from the view that programming is about constructing algorithms for computing unidirectional functions. In a nondeterministic language, expressions can have more than one value, and, as a result, the computation is dealing with relations rather than with single-valued functions. Logic programming extends this idea by combining a relational vision of programming with a powerful kind of symbolic pattern matching called unification.
This approach, when it works, can be a very powerful way to write programs. Part of the power comes from the fact that a single ``what is'' fact can be used to solve a number of different problems that would have different ``how to'' components. As an example, consider the append operation, which takes two lists as arguments and combines their elements to form a single list. In a procedural language such as Lisp, we could define append in terms of the basic list constructor cons, as we did in section :
(define (append x y)
  (if (null? x)
      y
      (cons (car x) (append (cdr x) y))))

This procedure can be regarded as a translation into Lisp of the following two rules, the first of which covers the case where the first list is empty and the second of which handles the case of a nonempty list, which is a cons of two parts:
Using the append procedure, we can answer questions such as
Find the append of (a b) and (c d).
But the same two rules are also sufficient for answering the following sorts of questions, which the procedure can't answer:
Find a list y that appends with (a b) to produce (a b c d).
Find all x and y that append to form (a b c d).
In a logic programming language, the programmer writes an append ``procedure'' by stating the two rules about append given above. ``How to'' knowledge is provided automatically by the interpreter to allow this single pair of rules to be used to answer all three types of questions about append.
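The "one pair of rules, three kinds of questions" idea can be mimicked in Python with a generator that enumerates every split of the result list. This is only an illustration of the relational reading of append; a real logic interpreter derives such answers by unification and rule application, not by enumeration:

```python
def append_rel(z):
    """Yield every pair (x, y) of lists such that x + y == z."""
    for i in range(len(z) + 1):
        yield z[:i], z[i:]

z = ["a", "b", "c", "d"]

# Find the append of (a b) and (c d):
print(["a", "b"] + ["c", "d"])                           # ['a', 'b', 'c', 'd']

# Find a y that appends with (a b) to produce (a b c d):
print([y for x, y in append_rel(z) if x == ["a", "b"]])  # [['c', 'd']]

# Find all x and y that append to form (a b c d):
print(list(append_rel(z)))                               # all five splits
```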
Contemporary logic programming languages (including the one we implement here) have substantial deficiencies, in that their general ``how to'' methods can lead them into spurious infinite loops or other undesirable behavior. Logic programming is an active field of research in computer science.
Earlier in this chapter we explored the technology of implementing interpreters and described the elements that are essential to an interpreter for a Lisp-like language (indeed, to an interpreter for any conventional language). Now we will apply these ideas to discuss an interpreter for a logic programming language. We call this language the query language, because it is very useful for retrieving information from data bases by formulating queries, or questions, expressed in the language. Even though the query language is very different from Lisp, we will find it convenient to describe the language in terms of the same general framework we have been using all along: as a collection of primitive elements, together with means of combination that enable us to combine simple elements to create more complex elements and means of abstraction that enable us to regard complex elements as single conceptual units. An interpreter for a logic programming language is considerably more complex than an interpreter for a language like Lisp. Nevertheless, we will see that our query-language interpreter contains many of the same elements found in the interpreter of section . In particular, there will be an ``eval'' part that classifies expressions according to type and an ``apply'' part that implements the language's abstraction mechanism (procedures in the case of Lisp, and rules in the case of logic programming). Also, a central role is played in the implementation by a frame data structure, which determines the correspondence between symbols and their associated values. One additional interesting aspect of our query-language implementation is that we make substantial use of streams, which were introduced in chapter 3.
|
OPCFW_CODE
|
Unexpected behavior with rxjs nested observables, window, and scan
I want to display partial results of an analysis as the data comes in. It would be very inefficient to recompute for each new value (as with 'scan'). However, in this case, I can do the analysis on chunks of the data and combine those results. So I've been using 'window' to break up the data and then 'scan' to combine the results of each window calculation. The result is itself an observable, so it would be very natural to emit that as a nested observable. Also, the next step in the process works really well when consuming observables.
However, I couldn't get this to work as I expected. (I did make it work with an awkward step of turning the inner observables into arrays and later back into observables.) It seems there is something I don't understand about "window" and/or "scan".
Here are two examples that differ in how I produce the nested observable. I'd have expected the following two examples to give the same result, but they do not.
In the first, I create the nested observable directly. In the second, I create it with the window operation. Then, in both cases, I apply the same scan to the nested observable.
This behaves as I expected:
rxjs.from([rxjs.from([1, 2]), rxjs.from([3, 4])])
.pipe(
ops.scan((acc, curr) => rxjs.merge(acc, curr), rxjs.from([]))
).subscribe(win => win.subscribe(
x => console.log(JSON.stringify(x)), e => console.log("error"), () => console.log("|")),
e => console.log("outer error"), () => console.log("outer |"))
With each emitted observable, I see the accumulation of the values of the previous one followed by the new ones.
1 2 | 1 2 3 4 |
I expected this next one to produce the same result, but it doesn't:
rxjs.from([1, 2, 3, 4])
.pipe(
ops.windowCount(2),
ops.scan((acc, curr) => rxjs.merge(acc, curr), rxjs.from([]))
).subscribe(win => win.subscribe(x => console.log(JSON.stringify(x)), e => console.log("error"), () => console.log("|")),
e => console.log("outer error"), () => console.log("outer|"))
It seems to effectively ignore the scan operation and emits the original windows,
1 2 | 3 4 |
What am I missing? What would a conventional solution to this look like? Thanks!
windowCount is using a Subject internally. So it creates and returns a Subject and then sends 1 and 2 to it for the first window. With the first scan iteration you subscribe to this Subject before 1 and 2 are send and receive those values. For later iterations you subscribe after 1 and 2 were already emitted so you won't receive those values again.
Kind of like:
const { Subject, merge, from } = rxjs
const window1 = new Subject()
const scanResult1 = merge(from([]), window1)
scanResult1.subscribe(console.log)
window1.next(1)
window1.next(2)
console.log('|')
const window2 = new Subject()
const scanResult2 = merge(scanResult1, window2)
scanResult2.subscribe(console.log)
window2.next(3)
window2.next(4)
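The timing behaviour described above does not depend on rxjs itself. A minimal Python sketch of a Subject-like class (an illustration, not rxjs code) shows that a subscriber attached after next() has been called never receives the earlier values:

```python
class Subject:
    """Minimal 'hot' subject: values are pushed only to current subscribers."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def next(self, value):
        for callback in self.subscribers:
            callback(value)

window = Subject()
early, late = [], []

window.subscribe(early.append)  # like the first scan iteration: subscribed in time
window.next(1)
window.next(2)
window.subscribe(late.append)   # like a later iteration: subscribed too late
window.next(3)

print(early)  # [1, 2, 3]
print(late)   # [3]  -- 1 and 2 were emitted before this subscription existed
```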
Using bufferCount
You can simply replace windowCount with bufferCount to send an array to scan instead of a Subject. The code in scan can stay the same as merge can also handle arrays, but you should use concat instead of merge if you want to guarantee that the values are emitted in the same order they come in.
rxjs.from([1, 2, 3, 4])
.pipe(
ops.bufferCount(2),
ops.scan((acc, curr) => rxjs.concat(acc, curr), rxjs.from([]))
).subscribe(
win => win.subscribe(
x => console.log(JSON.stringify(x)),
e => console.log("error"), () => console.log("|")
),
e => console.log("outer error"),
() => console.log("outer|")
)
Using windowCount
You can add a shareReplay to your windows, to replay their values to future subscribers. As windowCount emits an empty window at the end if the count of the source is divisible by the windowSize, you have to map to your merged observable only when the current window isn't empty. Otherwise you'll get the final result twice.
from([1, 2, 3, 4]).pipe(
windowCount(2),
scan((acc, curr) => {
const shared = curr.pipe(shareReplay())
return shared.pipe(
isEmpty(),
switchMap(empty => empty ? EMPTY : merge(acc, shared))
)
}, from([]))
)
Or
from([1, 2, 3, 4]).pipe(
windowCount(2),
map(w => w.pipe(shareReplay())),
concatMap(w => w.pipe(isEmpty(), filter(e => !e), mapTo(w))),
scan((acc, curr) => merge(acc, curr), from([]))
)
That would explain why my workaround, of turning the nested observables into arrays and then back into observables later, would work.
I don't fully understand your explanation. You said "For later iterations you subscribe after 1 and 2 were already emitted so you won't receive those values again." Yes, in the second iteration I'd expect curr to be 3,4, but acc should be the result of the first iteration merge operation, which should be 1,2, shouldn't it? Resulting in 1,2,3,4? (Which obviously it isn't. I'm just trying to understand.)
@EricEvans Yes, in the second iteration acc is the result of the first iteration's merge operation. But inside this merge operation is the same Subject that was curr in the first iteration. And by the time you subscribe to the result of the second iteration, 1,2 have already been emitted on this Subject, so the subscription happens too late to receive 1 and 2.
Ah, I think I see. I imagined merge as combining the values, but it is combining the two active streams, one of which has already completed. So it is deterministic? I mean, it is not a matter of timing, and could miss more or fewer, it misses exactly the first window, then exactly the second window, etc.?
I hope I'm not being annoying here, but I want to actually understand this and also understand the intended use. I see now that it's the way I'm using merge with scan. I was surprised because, if I make a nested observable with from, it does work. Even so, is it really the case that the recommended solution is to use Array? The nested observable would work well in my case, if it worked. For one, the operation on the window is the same as the one that combines the windows (by design, a monoid), and it includes a groupBy. In my current workaround, I turn it into an Array and then back. Seems awkward?
@EricEvans Yes, to my knowledge it will miss exactly the first window, then the second and so on, because the next scan is called after the previous window completed (if you don't work with overlapping windows by providing a startWindowEvery value). For the code you posted in your question I would use bufferCount.
If you want to work with windowCount you can add a shareReplay to your windows, to replay their values to future subscribers. As windowCount emits an empty window at the end if the count of the source is divisible by the windowSize, you have to map to your merged observable only when the current window isn't empty. Otherwise you'll get the final result twice.
|
STACK_EXCHANGE
|
Section: New Results
Participants : Régis Dupont, Andreas Enge, François Morain.
The work of AKS motivated the work of F. Morain on a fast variant of ECPP, called fastECPP, which led him to gain one order of magnitude in the complexity of the problem (see ), reaching heuristically , compared to for the basic version. By comparison, the best proven version of AKS has complexity and has not been implemented so far; the best randomized version reaches the same bound but suffers from memory problems and is not competitive yet. F. Morain implemented fastECPP and was able to prove the primality of 10,000 decimal digit numbers , as opposed to 5,000 for the basic (historical) version. Continuous improvement of this algorithm led to new records in primality proving, some of which were obtained with his co-authors J. Franke, T. Kleinjung and T. Wirth, who developed their own programs. F. Morain set the current world record to 20,562 decimal digits early June 2006, as opposed to 15,071 two years before. This record was made possible using an updated MPI-based implementation of the algorithm and its distribution process on a cluster of 64-bit bi-processors (AMD Opteron(tm) Processor 250 at 2.39 GHz).
R. Dupont has investigated the complexity of evaluating some modular functions and forms (such as the elliptic modular function j or the Dedekind eta function). High-precision evaluation of such functions is at the core of algorithms to compute class polynomials (used in complex multiplication) or modular polynomials (used in the SEA elliptic curve point counting algorithm).
Exploiting the deep connection between the arithmetic-geometric mean (AGM) and a special kind of modular forms known as theta constants, he devised an algorithm based on Newton iterations and the AGM that has quasi-optimal, quasi-linear complexity. In order to certify the correctness of the result to a specified precision, a fine analysis of the algorithm and its complexity was necessary.
Using similar techniques, he has given a proven algorithm for the evaluation of the logarithm of complex numbers with quasi-optimal time complexity.
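As a toy illustration of the AGM-to-logarithm connection, here is a double-precision sketch of the classical identity ln(s) ≈ π / (2·AGM(1, 4/s)) for large s. This is only a machine-precision simplification, not the certified multiprecision algorithm described above, and the function names are illustrative:

```javascript
// Arithmetic-geometric mean: iterate arithmetic and geometric means
// until the two sequences agree to roughly machine precision.
function agm(a, b) {
  while (Math.abs(a - b) > 1e-14 * a) {
    [a, b] = [(a + b) / 2, Math.sqrt(a * b)];
  }
  return a;
}

// For large s, ln(s) is approximated by pi / (2 * AGM(1, 4/s)),
// with a truncation error on the order of 1/s^2.
function agmLog(s) {
  return Math.PI / (2 * agm(1, 4 / s));
}

console.log(agmLog(1e6), Math.log(1e6)); // both close to 13.8155...
```

The AGM converges quadratically, so only a handful of iterations are needed; the high-precision algorithms push the same idea to arbitrary precision with a rigorous error analysis.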
A. Enge has been able to analyze precisely the complexity of class polynomial computations via complex floating point approximations. In fact, this approach has recently been challenged by algorithms using p-adic liftings, which achieve a running time that is (up to logarithmic factors) linear in the output size. He has shown that the algorithm using complex numbers, in its currently implemented form, has a slightly worse asymptotic complexity (polynomial with exponent 1.25). Using techniques from fast symbolic computation, namely multievaluation of polynomials, he has obtained an asymptotically optimal (up to logarithmic factors) algorithm with floating point approximations. The implementation has shown, however, that in the currently practical range the asymptotically fast algorithm is slower than the previous one. This is due, on the one hand, to the multitude of algorithmic improvements introduced in , and on the other hand, to the previous algorithm's lack of logarithmic factors and its better constants.
Using R. Dupont's results described above, A. Enge has devised a second quasi-linear algorithm (that actually even saves a logarithmic factor in the complexity). Breaking the record for class polynomial computations, he has computed a polynomial of degree 100,000, the largest coefficient of which has almost 250,000 bits. For this enormous example, the asymptotically fast algorithm finally beats the one with exponent 1.25. The implementation is based on gmp, mpfr and mpc (see Section 5.2) and a library of A. Enge's for fast arithmetic with polynomials over multiprecision floating point numbers. It turns out that the algorithms are so optimized that the limiting factor becomes the memory consumption.
Participants : Thomas Houtmann, Régis Dupont.
P. Gaudry, T. Houtmann, D. Kohel, C. Ritzenthaler and A. Weng have designed a new approach to construct class polynomials of genus two curves having complex multiplication. The main feature of their method is the use of 2-adic numbers instead of complex floating-point approximations. Although the method suffers from limitations, since its initialisation depends strongly on the splitting of 2 in the quartic CM field, the corresponding algorithm is very efficient compared to previous approaches.
T. Houtmann worked on both aspects, the p-adic alternative and the classical CM method. He improved the period-matrix computation phase, collaborated with R. Dupont to improve the analytic phase, and worked on using the method to generate hyperelliptic curves suitable for cryptography. So far, he has managed to compute an Igusa class polynomial system of degree 132.
R. Dupont has worked on adapting his algorithm to genus 2, which induces great theoretical and technical difficulties. He has studied a generalization of the AGM known as Borchardt sequences, has proven the convergence of these sequences in a general setting, and has determined the set of limits such sequences have in genus 2. He has then developed an algorithm for the fast evaluation of theta constants in genus 2, and as a byproduct obtains an algorithm to compute the Riemann matrix of a given hyperelliptic curve: given the equation of such a curve, it computes a lattice L such that the Jacobian of the curve is isomorphic to C^2/L. These algorithms are both quasi-linear, and have been implemented (in C, using gmp).
Using these implementations, R. Dupont has begun computing modular polynomials for groups of the form Γ0(p) in genus 2 (these polynomials link the genus 2 j-invariants of p-isogenous curves). He computed the modular polynomials for p = 2, which had never been done before, and did some partial computations for p = 3 (results are available at http://www.lix.polytechnique.fr/Labo/Regis.Dupont ).
He also studied more theoretically the main ingredient used in his algorithms in genus 2, a procedure known as Borchardt sequences. In particular, he proved a theorem that parametrizes the set of all possible limits of Borchardt sequences starting with a fixed 4-tuple.
|
OPCFW_CODE
|
What is causing my screen to go black?
I have this Ubuntu (12.04.4) netbook hooked up to a 55" screen to display slides and info. It was running fine for quite a while and was set up not to go to screen saver or anything. I think there was a power outage last night so I came in and pressed the power button to turn it on. Now when I turn the computer on, the screen is black. Here's the state of things...
user@computer:~$ xrandr
Screen 0: minimum 320 x 240, current 320 x 240, maximum 8192 x 8192
VGA1 disconnected (normal left inverted right x axis y axis)
LVDS1 connected (normal left inverted right x axis y axis)
1024x768 60.0 + 60.0
800x600 60.3 56.2
640x480 59.9
DVI1 connected (normal left inverted right x axis y axis)
1920x1080 60.0 +
1600x900 60.0
1440x900 59.9
1360x768 59.8 60.0
1280x800 59.8
1280x720 60.0
1024x768 60.0
800x600 60.3
640x480 60.0
DP1 disconnected (normal left inverted right x axis y axis)
I SSHed in and tried to do something like:
xrandr --output DVI1 --mode 1920x1080
That didn't throw an error, but it didn't make the screen stop being black. I read that using "--off" might help (http://www.linuxine.com/story/xrandr-cannot-find-crtc-output-vga1), so I did something like this:
xrandr --output DVI1 --off
xrandr --output LVDS1 --off
xrandr --output DVI1 --mode 1920x1080
That worked! Then I kept sitting there, and after maybe 5 or 10 minutes the screen went black again! What is going on here...? Is there some sort of corruption I can resolve? Why would the screen go black again after it has already been initialized with xrandr?
EDIT: I forgot to mention... when I reboot, it shows the BIOS on the monitor. It's only after Ubuntu starts up that it goes black. This is what makes me think it is a software or configuration problem.
Hmm... this does not look like a software problem. First (if you are able to, of course), plug in an external monitor and boot your netbook to see if you get a signal; this way we can exclude the possibility of a damaged GPU. If it boots normally with a signal on the external monitor, then either 1) your inverter is burned out (or is about to burn out, and stops working when it gets warm), or 2) it is the backlight of your screen, which means you need a new screen, though that is a rarer failure. Often the screen is nearly black and you can barely see the image under some light, so boot and watch the screen carefully. Also, when you boot, wait a few minutes and check whether the hard disk is being read and whether you can hear Ubuntu's startup sound, to be sure it is only a screen or inverter problem and not something worse. I hope this helps a little. Sorry for my English :)
See my edit: In short, on boot, BIOS displays on the screen fine. It's only after Ubuntu gets started that the screen goes black. This leads me to think it's software or config related.
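One avenue worth ruling out (an assumption on my part, not something confirmed in this thread): even with the desktop screensaver disabled, the X server's own screen blanking and DPMS timers can turn the display off after roughly 10 minutes. They can be disabled for testing over SSH with `DISPLAY=:0 xset s off -dpms`, or persistently with an xorg.conf fragment like this sketch:

```
Section "ServerFlags"
    Option "BlankTime"   "0"   # disable X screen blanking
    Option "StandbyTime" "0"   # disable DPMS standby
    Option "SuspendTime" "0"   # disable DPMS suspend
    Option "OffTime"     "0"   # disable DPMS off
EndSection
```

If the screen then stays on, the recurring blackout was a blanking/DPMS timer rather than corruption.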
|
STACK_EXCHANGE
|
/*global Ext:false */
Ext.onReady(function () {
// Init the singleton. Any tag-based quick tips will start working.
Ext.tip.QuickTipManager.init();
// Apply a set of config properties to the singleton
Ext.apply(Ext.tip.QuickTipManager.getQuickTip(), {
maxWidth: 200,
minWidth: 100,
showDelay: 50 // Show 50ms after entering target
});
// Create a small panel to add a quick tip to
Ext.create('Ext.container.Container', {
id: 'quickTipContainer',
width: 200,
height: 150,
style: {
backgroundColor: '#000000'
},
renderTo: Ext.getBody()
});
// Manually register a quick tip for a specific element
Ext.tip.QuickTipManager.register({
target: 'quickTipContainer',
title: 'My Tooltip',
text: 'This tooltip was added in code',
width: 100,
dismissDelay: 10000 // Hide after 10 seconds hover
});
});
|
STACK_EDU
|
Passwordless authentication between two domains.
Thu Nov 29 00:00:00 GMT 2012
On 11/28/2012 1:21 PM, anulav2 wrote:
> Keys will "ALWAYS" be different irrespective if it is two servers on same or different domain.
> That is the whole point of copying keys to remote servers authorized_keys file.
I don't think so. I do know the following - here at my current client
there are two distinct domains that I deal with - Irvine and San Jose.
My Windows laptop is in the Irvine domain. My home directory is on a
filer and is shared between my Windows laptop and the various Linux
server machines in Irvine. I generate a key and put it in my
~/.ssh/authorized_keys and I can ssh to localhost or any of the Linux
servers. Additionally I can ssh from Linux to my laptop, passwordlessly.
If I take that key and put it into the ~/.ssh/authorized_keys in San
Jose, then this allows me to ssh from Irvine to San Jose without a
password. But I cannot ssh from San Jose -> Irvine without being
prompted for a password.
However if I generate a key in San Jose and put it in
~/.ssh/authorized_keys in Irvine then I can ssh from San Jose -> Irvine
without a password. This tells me that generated ssh keys are unique per
domain. For bilateral ssh passwordless logins between the two domains
you should have at least 2 lines in your ~/.ssh/authorized_keys file,
one for each domain:
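Schematically, with placeholder key material rather than any real keys, such an authorized_keys file looks like:

```
ssh-rsa AAAAB3NzaC1yc2E...PLACEHOLDER-IRVINE-KEY... adefaria@Irvine
ssh-rsa AAAAB3NzaC1yc2E...PLACEHOLDER-SANJOSE-KEY... adefaria@SanJose
```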
Note that the 3rd field is treated as a comment so I changed it to
adefaria@Irvine and adefaria@San Jose. Note 2: The above keys have been
modified to protect them.
Why don't you try what I suggest and then report back if it worked.
> Else one could just "cat" its own key in its own authorized_keys file, right?
But one can just "cat" their own key to their own authorized_keys file.
That's why permissions on ~/.ssh are of paramount importance to ssh - it
needs to make sure that "Tom" didn't go into "Jane"'s
~/.ssh/authorized_keys file and insert themselves.
It is true that if you run ssh-keygen on different machines in the same
domain you'll get different keys, but within the context of that domain
any one of those keys will work. That's why sharing your home directory
is a good thing and that's why I always work to get my home directory
shared between Windows and Linux systems.
Andrew DeFaria <http://defaria.com>
I'm a tagline virus, please copy me to your signature file
Problem reports: http://cygwin.com/problems.html
Unsubscribe info: http://cygwin.com/ml/#unsubscribe-simple
More information about the Cygwin
|
OPCFW_CODE
|
You will find instructions there for adding the repository to your PPM client. On 16 May 2018, the Perl Foundation announced that search. Is there a way I can easily install all the modules on CPAN in one stroke? Maintaining a bundle definition file means keeping track of two things. There is always more than one way to do things with Perl, and this is no different.
ActiveState provide a binary distribution of Perl for many platforms, as well as their own Perl package manager, PPM. It returns a list of CPAN::FindDependencies dependency objects, whose useful methods are. CPAN modules for getting module dependency information. Note that not all of the CPAN modules support this convention yet. Find answers to install CPAN dependencies from the expert community at Experts Exchange. If you do not have an internet connection for each of the servers in your IBM InfoSphere Master Data Management Collaboration Server Collaborative Edition installation, you can download the Perl modules from CPAN and then copy them to your servers to install.
Once you've finished installing all the dependencies, you can push your application directory to a remote machine, excluding local and. Downloading all dependencies for a Perl module (Stack Overflow). When installing a module from CPAN, we first need to install all the dependencies of that module. Oct 23, 2019: adds recommended modules to the list of dependencies, if set to a true value. The Comprehensive Perl Archive Network (CPAN) currently has 187,506 Perl modules in 40,936 distributions, written by,918 authors, mirrored on 254 servers. Installs dependencies declared as recommends and suggests respectively, per the META spec. The DAG repository has all of the required Perl modules for amavisd-new except for Digest::MD5.
Now, exit the CPAN shell, restart the CPAN shell, and try to install a module that you need. A CPAN Task distribution to install all the dependencies of DWIM Perl. It's not something you want to do for a typical module, but it makes sense in the context of a Task::BeLike distribution. Usually you will only find out about missing dependencies when trying to install the. The first line sets your dependency policy to follow rather than ask (the default). The Comprehensive Perl Archive Network (CPAN) currently has 193,229 Perl modules in 41,233 distributions, written by,918 authors, mirrored on 254 servers. The archive has been online since October 1995 and is constantly growing. If you do not have an internet connection for each of the servers in your IBM InfoSphere Master Data Management Collaboration Server installation, you can download the Perl modules from CPAN and then copy them to your servers to install. Go old school and use the Perl CPAN tools (perl -MCPAN -e shell) to download and install the target bundles that can't be found in RHN or EPEL and resolve any dependencies. What if you'd like to check the dependency tree of a module, even before installing all the dependencies of the module? The CPAN Perl module should already be installed on your Linux cloud server by default.
It also comes with lots of modules preinstalled, including cpanm. This queue is for tickets about the PAR::Packer CPAN distribution. Installing Perl modules required by various open source software is a routine task for sysadmins. It includes a compiler and preinstalled modules that offer the ability to install XS CPAN modules directly from CPAN. Automatically install dependencies without confirmation. Before embarking on any installation, download the module, unzip it and check out the documentation. Oct 14, 2009: recently, hanekomu was contemplating how to make subsequent installs of a Task::BeLike module upgrade its dependencies to their latest version. The idea is intriguing.
If you want to know all the distributions you'd need for a module, perhaps because you want to bundle them all together, then I think CPAN::FindDependencies is the best bet. So please, if someone knows where to get this RPM, I would appreciate a link. Installing a Perl module from CPAN on Windows, Linux and Mac OS X. Use code METACPAN10 at checkout to apply your discount. We can easily download any RPM package with all dependencies using the downloadonly plugin for the yum command.
Carton: Perl module dependency manager (aka Bundler for Perl). As of writing this guide, there are 185,128 Perl modules available on CPAN. CPAN::FindDependencies: find dependencies for modules on CPAN. How to fetch the CPAN dependency tree of a Perl module.
Since this is the first time that you have used CPAN. It is mostly interesting for system administrators who need to ensure you have all the dependencies installed. Installing Perl modules without an internet connection. How to install the CPAN package and configure it in Linux. Oct 25, 2016: download an RPM package with all dependencies in CentOS. Contribute to miyagawa/cpanfile development by creating an account on GitHub.
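The cpanfile mentioned above is a small Perl DSL for declaring a distribution's dependencies, including the recommends/suggests levels discussed earlier. A minimal sketch, with the module names and versions purely as examples:

```perl
# cpanfile -- dependency declarations picked up by cpanm/carton
requires   'Plack', '1.0000';        # hard runtime dependency
recommends 'JSON::XS';               # installed with --with-recommends
suggests   'Term::ReadLine::Gnu';    # installed with --with-suggests

on 'test' => sub {
    requires 'Test::More', '0.98';   # needed only for the test suite
};
```

With a cpanfile in place, `cpanm --installdeps .` resolves and installs everything declared in it.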
All dependencies will be automatically confirmed, downloaded and installed. Sep 26, 2018: there are several ways to install Perl modules from the Comprehensive Perl Archive Network on your Unix-based system. Need Perl download link for cumulative Net::SFTP and Net::SSH. To install the downloadonly plugin, run the following command as root user. CPAN::FindDependencies (Perl Package Manager Index, PPM). IIRC, CPAN does include the ability to recursively download dependencies within the CPAN sources.
For the life of me I cannot find an RPM for this Perl module. In case it is not installed, go ahead and install it. Installing Perl modules using CPAN is a better solution, as it resolves all the dependencies automatically. So how do you really install Perl modules from CPAN? A CPAN client is a program which knows how to fetch distributions from CPAN and figure out all of their dependencies. Jun 14, 2013: the beauty of this online resource is that we can download and install these modules onto your machine with the cpan command. CPAN::FindDependencies: find dependencies for modules on CPAN. How to download an RPM package with all dependencies in CentOS. Then it runs our 4-step process to build, test, and install everything while you check your mail and drink coffee. For other modules which do not have Debian packages, you might want to use the CPAN interactive system. I have satisfied all dependencies except for one, and that is perl-Digest-MD5. Install Perl modules using CPAN: the CPAN Perl module should already be installed on your Linux cloud server by default. Either out of the box, or after setting a configuration option. The CPAN client comes with the core Perl distribution and is probably already installed on your system. Installing Perl modules manually by resolving all the dependencies is a tedious and annoying process. In all cases, only the required or recommended dependencies are listed; there might be other modules which allow more tests to be run, but omitting them will still allow the tests to pass. Jan 30, 2020: this brief guide explains how to install Perl modules on Linux from the CPAN (Comprehensive Perl Archive Network) repository. When searching for a Perl module, sooner or later you will end up on one of two sites providing information about CPAN modules. Precompiled binary PPMs for ActivePerl on Windows, Linux and Mac OS X are available from the wxPerl PPM repository. Install CPAN dependencies (Experts Exchange). I need to download all dependencies for a specific Perl module (local::lib and others) on a Windows/Cygwin machine with Perl 5.
|
OPCFW_CODE
|
Feature Request - Option to add components to burn list if more than X is available.
My friend asked me to put up this issue:
Is it possible to add an option where you can set a number X so that when you own more than X of a certain component type, it gets added to the "Available to burn" list?
For example, if I have the limit set to 3 and I have 3 Hasted then Hasted gets added to the burn list
So as a general option, regardless of it being in prio or non-prio? I'll think about it. What matters most is how I can add something like this into the UI -- I generally don't want a dedicated options window. Maybe as an 'advanced option' via the config.ini.
Advanced option in the config.ini works just fine. Most people can open a text file :D
I'd like this as well, or an inventory list so i can see which components im overflowing with and should just spend them
> I'd like this as well, or an inventory list so i can see which components im overflowing with and should just spend them
You can hold-click the PRIO-label to see the surplus of mods connected to the prio-list, and the other stuff should appear in the burn lists. If that's not what you meant, let me know.
> I'd like this as well, or an inventory list so i can see which components im overflowing with and should just spend them
> You can hold-click the PRIO-label to see the surplus of mods connected to the prio-list, and the other stuff should appear in the burn lists. If that's not what you meant, let me know.
I meant single components that are not associated with the priority list. On the burn list im getting assembled tier 2-3 mods that while not on the priority list I don't want to use yet. while having like 5-6 copies of a base mods not on the priority list. Perhaps i shouldnt even pick them up but they're still useful as a 4th component while building up 3 mod recipes.
> I meant single components that are not associated with the priority list. On the burn list im getting assembled tier 2-3 mods that while not on the priority list I don't want to use yet. while having like 5-6 copies of a base mods not on the priority list. Perhaps i shouldnt even pick them up but they're still useful as a 4th component while building up 3 mod recipes.
I see. Yeah, I limited the burn list to seven entries to keep the panel from taking too much height. I have to make sure the UI fits on all resolutions. I could make the length of the list user-configurable via the config.ini in a future update.
> I meant single components that are not associated with the priority list. On the burn list im getting assembled tier 2-3 mods that while not on the priority list I don't want to use yet. while having like 5-6 copies of a base mods not on the priority list. Perhaps i shouldnt even pick them up but they're still useful as a 4th component while building up 3 mod recipes.
> I see. Yeah, I limited the burn list to seven entries to keep the panel from taking too much height. I have to make sure the UI fits on all resolutions. I could make the length of the list user-configurable via the config.ini in a future update.
That window is still missing the amounts for each component available to burn though.
well, i have the patience of a caffeinated fruit fly on crack so i went and did it on my end https://i.imgur.com/c2LAMUe.png. heres the code if you want to diff it or something. https://pastebin.com/8MUjdSrp
> That window is still missing the amounts for each component available to burn though.
You didn't mention that specifically. I didn't account for people hoarding non-prio assembled mods when I planned the system :D.
> well, i have the patience of a caffeinated fruit fly on crack so i went and did it on my end
Even infinite patience wouldn't have sufficed. I didn't know you wanted that because you didn't specify. I like the comparison, though :D.
The next update will implement the requested behavior a little bit differently: You will be able to set a threshold above which prio-surplus will be burned, and the burn list will include quantities (as scrangos did for himself), so there you can decide for yourself when to burn stuff.
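If it ships as an advanced option, the config.ini entry might look something like the sketch below; the key names are purely hypothetical, since the option had not been implemented at the time of this thread:

```ini
[Advanced]
; hypothetical: burn prio-surplus once more than this many copies are owned
BurnSurplusThreshold=3
; hypothetical: number of entries shown in the burn list panel
BurnListLength=7
```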
|
GITHUB_ARCHIVE
|
SEO with microdata and Google+
I don't know about you, but each time I discover a new site, I do a [ CTRL ] + [ U ] to view the HTML source. And often, it's hard to read the code.
Especially CMS-generated markup, which is hard to read due to what some call "HTML soup"...
Each layer, each block, each module adds tags and makes the code difficult to parse. And it's worse for a bot: taming text is not easy, even for Google's engineers...
I'll show you a large snippet that I used to describe myself on my résumé (work in progress), and how to link that page with Google+ to trigger a bot visit that will boost your SEO.
Here is the code:
<!-- Declare a person description with itemscope & itemtype -->
<section itemscope itemtype="http://schema.org/Person">
  <!-- Give the person a name, and link it with G+ -->
  <!-- Note the ?rel=author -->
  <!-- Person is a Thing in Schema.org, so it inherits the url property -->
  <!-- Describe the person's employer as a sub-entity, an Organization -->
  <div itemprop="worksFor" itemscope itemtype="http://schema.org/Organization">
    <!-- Organization description -->
    <!-- Declare a sub-sub-entity, a Place for the Person's Organization ... -->
    <div itemprop="location" itemscope itemtype="http://schema.org/Place">
      <!-- Likewise, the Place has a PostalAddress -->
      <ul itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
      </ul><!-- end of the PostalAddress -->
    </div><!-- end of the Place -->
  </div><!-- end of the Organization -->
  <!-- Declare a colleague as a Person sub-entity -->
  <ul>
    <li itemprop="colleagues" itemscope itemtype="http://schema.org/Person">
      <!-- home, sweet home -->
      <div itemprop="homeLocation" itemscope itemtype="http://schema.org/Place">
        <p itemprop="address" itemscope itemtype="http://schema.org/PostalAddress"></p>
      </div>
    </li>
  </ul>
</section>
Schema.org is easy:
- Declare an entity with itemscope and itemtype.
- Add a property with itemprop, and wrap its content within a tag.
- Entities can have sub-entities, described with itemprop, itemscope and itemtype.
- Entities inherit properties from their parents.
But it is also rich.
The rel HTML attribute is not part of Schema.org, but a tool used by Google to link content together. Add rel="me" or rel="author" to attach an item to someone. If you have a link in your Google+ profile that points to your content, and a back link from your page to Google+, Google will understand that you are the page author and will show the page in Google's results. You can also complete this form to accelerate the bot visit.
Here is my Google+ profile's About tab, with the added links pointing to my blogs:
And next, a screenshot of a Google search on my name, layered with my G+ profile. The first results all point to my profile:
Finally, you can test your code with the Rich Snippets Testing Tool, and see your SEO gain in the Google, Bing and Yahoo webmaster tools.
A Drupal module exists, which I'll test soon.
|
OPCFW_CODE
|