javaee-api-5.0-1.jar maven dependency
The javaee-api and javaee-web-api artifacts are available in the central Maven repository. Dependency management is arguably the key feature of Maven: whenever a dependency is added to a project, Maven searches for it in the configured repositories, downloads it, and stores it in the local repository, tagged by version.
For Java EE development, the javaee-api artifact should be declared with provided scope. The application server implements these APIs, and implementations vary between servers, so you should never package javaee-api.jar inside a war or ear.
Note that adding an exclusion for javax.persistence to the javaee-api dependency has no effect: the javax.persistence package is contained inside the javaee-api jar itself, not pulled in as a transitive dependency, so the jar is still provided with its original content. For the same reason, an exclusion of org.eclipse.persistence:javax.persistence makes no sense; it is not a dependency of javax:javaee-api.
The javaee-api jar contains only API classes, not implementation classes, since there can be several implementations of each API. If you prefer not to use the single jar (for some versions there are no sources or javadoc attached), you can depend on finer-grained API artifacts instead: the Apache Geronimo project publishes a Servlet 3.0 API dependency on Maven Central, and hibernate-jpa-2.1-api can stand in for the JPA portion. With this approach, dependencies are kept separate rather than bundled in one jar, and sources and javadocs are available from the repository.
Third-party jars that are not published to any repository can be installed into the local repository (for example with mvn install:install-file) and then referenced with a normal dependency tag in the pom.xml.
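A minimal sketch of the provided-scope declaration discussed above, using the javax:javaee-api coordinates mentioned in the text (the version shown is illustrative):

```xml
<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>7.0</version>
    <scope>provided</scope>
</dependency>
```

With provided scope, the API jar is on the compile classpath but is not packaged into the war or ear, leaving the server's own implementation to supply the classes at runtime.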
|
OPCFW_CODE
|
Why do enchanted ships need a ship's wheel?
In Pirates of the Caribbean, enchanted ships like the Flying Dutchman and the Silent Mary defy the laws of physics because some curse or sorcery is involved.
For instance, the Flying Dutchman could sail underwater.
The Silent Mary is literally just a hollow ship:
While she was trapped in the Devil's Triangle, the Silent Mary suffered a dramatic transformation. With her keel, bottom, and lower decks almost completely destroyed, her ribs exposed to the weather and many planks broken or missing, her sails in tatters and all of her masts broken, the Silent Mary became nothing more than a wreck. In normal circumstances, any ship that suffered such extensive damage would sink the moment it touched the water. However, defying the laws of physics, the Silent Mary continued to sail like a completely normal seaworthy ship
But why is it that they need wind to sail, or a ship's wheel to change direction?
Maybe damaged enchanted ships don't work as well as undamaged ones.
When it's above water, it's like any other ship: it sails by the wind.
Just a guess, but I believe some of DMTNT took elements from a cancelled prequel video game. The protagonist had to reassemble a magical ship - it might be a homage to something like the Ship of Theseus paradox, i.e. that the ship can't work without the sum of certain parts, the wheel being, in the Silent Mary's case, one of the most important parts (the brains or head of the magic, if you will). I talked about the video game as part of a speculative answer here: https://movies.stackexchange.com/questions/75976/why-is-the-black-pearl-partially-unharmed-at-the-begining-of-at-worlds-end/81935#81935
Just because a ship has unusual qualities, does not mean that it exclusively travels by relying on those qualities.
Harry Potter can ride a broom, yet he's also still seen walking and using the stairs.
The Flying Dutchman
The Flying Dutchman was capable of submerging (in At World's End, we find out that this is how you travel to/from Davy Jones' locker), but at all other times, the Dutchman sails like any other ship.
We never see the Dutchman submerge for other reasons, e.g. to quickly dodge cannonballs and then immediately resurface. It doesn't happen. Most likely, Davy Jones is only capable of traveling to/from the locker, without being able to essentially use the Dutchman as a magical submarine.
Look at the final fight in At World's End, where the Dutchman and the Pearl clash in the middle of a maelstrom. At no point does the Dutchman behave like an irregular ship; it's essentially equal to the Pearl during this showdown.
As there is no proof that the Dutchman can do anything but travel to/from the locker by submerging, we can't assume that it's capable of any other form of unusual travel.
Furthermore, the Pearl is later shown to be capable of traveling from Davy Jones' locker to the real world as well, which proves that this travel method is not limited to magical ships.
Additionally, part of Davy Jones' shtick is that he promises the dead that they can keep sailing, under his command. Logically, he'd need to make sure that the sailing method used is as authentic as possible, in order for the sailors to actually want to work on the ship.
If the Flying Dutchman traveled on a magical cloud, steered by Davy Jones' telekinetic powers, there wouldn't be a reason for the sailors to stay.
The Silent Mary
First of all, the Silent Mary no longer has functional sails. Those scraps are nowhere near enough to propel a ship that size (even accounting for the weight lost by being hollow). Sails do not work unless they are attached at both ends.
In that sense, she already travels magically. And on top of that, she lacks quite a lot of her rigid structure.
Keep in mind that whatever keeps the Silent Mary afloat, also keeps the pirates themselves afloat (when they run on the water in order to get to Jack, who barely manages to flee to land on time).
For all intents and purposes, the Silent Mary does travel magically.
But why is it that they need wind to sail, or a ship's wheel to change direction?
Why do ghosts still look like their pre-death human form, instead of taking an abstract shape?
Because it makes them recognizable. If every ghost in all media was represented as a wisp, that would get really annoying (and boring) really quickly.
Similarly, the Silent Mary has retained her looks to some extent, because there's simply no logical reason for her to change shape into anything else.
You also need to take into account the crew that operates the Silent Mary. Though they are aware that they are undead, their personalities have carried over from when they were alive. This includes how they operate the ship.
Keep in mind that the crew of the Silent Mary was essentially in purgatory, doomed to sail the Bermuda Triangle for eternity. Working off of that idea, it's the intention that the sailors still have to do the same manual labor, regardless of whether the Silent Mary stays magically afloat.
Incidentally, why do you think the Silent Mary needs her wheel turned at all? It's just as possible that the crew simply turns the wheel for an authentic feel of sailing. It's a very common trope that sailors are endlessly in love with sailing, so they'd be inclined to keep things authentic even when they don't need to.
There are so many similar questions that are pointless to answer, because it always boils down to "that's just the way it is":
Why do ghosts, who are generally able to fly, tend to hover at a "normal" human height?
Since gravity does not affect them, why do ghosts generally appear upright and not oriented differently?
If they have no physical form, why do most ghosts look like their former selves?
If ghosts have no physical shape, how are they able to speak? They have no larynx, nor lungs. And if they are magically capable of making sound, why do they still move their mouths?
The same argument applies to seeing: how do they do it? If magically, why do they still use their eyes?
And the same argument applies to hearing as well.
Why are ghosts animated at all? Why not use a still frame? There's no functional purpose to their movement anyway.
Ghosts are generally represented as an imprint of their past selves. It identifies them. It makes them recognizable.
|
STACK_EXCHANGE
|
const { heapTestCases } = require('./test-cases.js');
const Heap = artifacts.require('Heap');
const MinHeap = artifacts.require('MinHeap');
const MaxHeap = artifacts.require('MaxHeap');
contract('Heap', async accounts => {
let minHeap;
let maxHeap;
beforeEach(async () => {
minHeap = await Heap.new(true);
maxHeap = await Heap.new(false);
});
for (const test of heapTestCases) {
it('should build a correct heap given the values in test case id: ' + test.id, async () => {
await buildHeap(minHeap, test.values);
await buildHeap(maxHeap, test.values);
const min = (await minHeap.getMin()).toNumber();
const max = (await maxHeap.getMax()).toNumber();
assert.strictEqual(min, test.min);
assert.strictEqual(max, test.max);
});
}
});
// included for the sake of completeness
contract('MinHeap', async accounts => {
let minHeap;
beforeEach(async () => {
minHeap = await MinHeap.new();
});
for (const test of heapTestCases) {
it('should build a correct MinHeap given the values in test case id: ' + test.id, async () => {
await buildHeap(minHeap, test.values);
const min = (await minHeap.getMin()).toNumber();
assert.strictEqual(min, test.min);
});
}
});
// included for the sake of completeness
contract('MaxHeap', async accounts => {
let maxHeap;
beforeEach(async () => {
maxHeap = await MaxHeap.new();
});
for (const test of heapTestCases) {
it('should build a correct MaxHeap given the values in test case id: ' + test.id, async () => {
await buildHeap(maxHeap, test.values);
const max = (await maxHeap.getMax()).toNumber();
assert.strictEqual(max, test.max);
});
}
});
// O(n log n) time complexity until heapify functions are implemented
async function buildHeap(heap, values) {
for (const v of values) {
await heap.insert(v);
}
}
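The comment above flags buildHeap as O(n log n) because it inserts one value at a time. For comparison (this is a plain-JavaScript sketch, independent of the Solidity contracts under test), a bottom-up heapify builds a heap in O(n); the isMin flag below mirrors the Heap.new(true/false) constructor argument used in the tests:

```javascript
// Bottom-up heapify: sift down every internal node, starting from the last one.
// Runs in O(n) overall, versus O(n log n) for n repeated inserts.
function heapify(values, isMin = true) {
  const a = values.slice();
  const cmp = isMin ? (x, y) => x < y : (x, y) => x > y;
  const siftDown = (i) => {
    for (;;) {
      const l = 2 * i + 1, r = 2 * i + 2;
      let best = i;
      if (l < a.length && cmp(a[l], a[best])) best = l;
      if (r < a.length && cmp(a[r], a[best])) best = r;
      if (best === i) return;
      [a[i], a[best]] = [a[best], a[i]]; // swap and continue sifting down
      i = best;
    }
  };
  for (let i = Math.floor(a.length / 2) - 1; i >= 0; i--) siftDown(i);
  return a;
}

console.log(heapify([5, 3, 8, 1])[0]);        // 1 (min at root)
console.log(heapify([5, 3, 8, 1], false)[0]); // 8 (max at root)
```

The same sift-down routine is what contract-side heapify functions would implement to remove the O(n log n) caveat noted above.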
|
STACK_EDU
|
Kafka::Cluster - object interface to manage a test kafka cluster.
This documentation refers to Kafka::Cluster version 1.08 .
# For examples see:
# t/*_cluster.t, t/*_cluster_start.t, t/*_connection.t, t/*_cluster_stop.t
This module is not intended to be used by the end user.
The main features of the Kafka::Cluster module are:
Automatic start and stop of local zookeeper server for tests.
Start-up, re-initialization and shutdown of a cluster of Kafka servers.
A free port is automatically selected for started servers.
Creation and deletion of the data structures used by the servers.
Getting information about running servers.
Connecting to an earlier started cluster.
Performing queries against a cluster.
The following constants are available for export
Initial port number for the free-port search: 9094. The ZooKeeper server uses the first available port.
Default topic name.
Starts the servers required for the cluster, or provides the ability to connect to an already running cluster. A ZooKeeper server is launched during the first call to start. Creates a Kafka::Cluster object.
An error causes the program to halt.
The port is used to identify a particular server in the cluster. The data structures of these servers are created in the t/data directory.
new() takes arguments in key-value pairs. The following arguments are recognized:
kafka_dir => $kafka_dir
The root directory of local Kafka installation.
replication_factor => $replication_factor
Number of kafka servers to be started in cluster.
Optional, default = 3.
partition => $partitions
The number of partitions per created topic.
Optional, default = 1.
reuse_existing => $reuse_existing
Connect to previously created cluster instead of creating a new one.
Optional, default = false (creates and runs a new cluster).
t_dir => $t_dir
The required data structures are prepared in the given t/ directory. When connecting to a cluster from another directory, you must specify the path to the t/ directory.
Optional - if not specified, the operation is carried out in the t/ directory.
The following methods are defined for Kafka::Cluster class:
Returns the root directory of local installation of Kafka.
log_dir( $port )
Constructs and returns the path to the Kafka server data directory for the specified port.
This method takes one argument:
$port - the port number of the Kafka server. $port must be a number.
Returns a sorted list of ports of all kafka servers in the cluster.
node_id( $port )
Returns the node ID assigned to the Kafka server in the cluster. Returns undef if the server does not have an ID or no server with the specified port is present in the cluster.
This method takes one argument: $port, the port number of the Kafka server.
Returns the port number used by the ZooKeeper server.
Initializes the data structures used by the Kafka servers. During initialization all servers are stopped and the data structures used by them are deleted. The ZooKeeper server is not stopped, and its data structures are not removed.
stop( $port )
Stops the Kafka server identified by the specified port. If the port is omitted, stops all servers in the cluster.
This method takes one argument: $port, the port number of the Kafka server (may be omitted).
start( $port )
Starts (restarts) the Kafka server with the specified port. If the port is not specified, starts (restarts) all servers in the cluster.
$port - the port number of the Kafka server. $port must be a number.
request( $port, $bin_stream, $without_response )
Sends a binary request string to the Kafka server and returns the binary response. If the argument $without_response is set to true, no response is expected and the method returns an empty string.
The Kafka server is identified by the specified port.
$bin_stream - a binary string containing the request to the Kafka server.
Stops all production servers (including zookeeper server). Deletes all data directories used by servers.
This function stops all running server processes and deletes all data directories and service files in the t/data directory.
Returns number of deleted files.
data_cleanup() takes arguments in key-value pairs. The following arguments are recognized:
The root directory of the local Kafka installation.
The required data structures are prepared in the t/ directory. When connecting to a cluster from another directory, you must specify the path to the t/ directory.
Optional - if not specified, the operation is carried out in the t/ directory.
An error causes the script to die; the error message is displayed on the console.
The basic operation of the Kafka package modules:
Kafka - constants and messages used by Kafka package modules.
Kafka::Connection - interface to connect to a Kafka cluster.
Kafka::Producer - interface for producing client.
Kafka::Consumer - interface for consuming client.
Kafka::Message - interface to access Kafka message properties.
Kafka::Int64 - functions to work with 64 bit elements of the protocol on 32 bit systems.
Kafka::Protocol - functions to process messages in Apache Kafka's Protocol.
Kafka::IO - low-level interface for communication with Kafka server.
Kafka::Exceptions - module designated to handle Kafka exceptions.
Kafka::Internals - internal constants and functions used by several package modules.
A wealth of detail about Apache Kafka and Kafka Protocol:
Main page at http://kafka.apache.org/
Kafka Protocol at https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
Kafka package is hosted on GitHub: https://github.com/TrackingSoft/Kafka
Copyright (C) 2012-2017 by TrackingSoft LLC.
This package is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See perlartistic at http://dev.perl.org/licenses/artistic.html.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
To install Kafka, copy and paste the appropriate command into your terminal.
perl -MCPAN -e shell
For more information on module installation, please visit the detailed CPAN module installation guide.
|
OPCFW_CODE
|
EAST stands for the Enhanced Annotated Suffix Tree method for text analysis.
To install EAST, run:
$ pip install EAST
This may require admin permissions on your machine (and should then be run with sudo).
EAST comes both as a CLI application and as a python library (which can be imported and used in python code).
The basic use case for the AST method is to calculate matching scores for a set of keyphrases against a set of text files (producing the so-called keyphrase table). To do that with EAST, launch it as follows:
$ east [-f <table_format>] [-l <language>] [-s] [-d] [-a <ast_algorithm>] [-w <term_weighting>] [-v <vector_space>] [-y] keyphrases table <keyphrases_file> <directory_with_txt_files>
The -s option determines the similarity measure used when computing the matching score. Its value is "ast" by default (as this package has been developed primarily as an implementation of the Annotated Suffix Tree method), but it can also be set to "cosine", in which case the cosine similarity is used to compute the relevance of keyphrases to documents (the texts in the collection are then represented as vectors).
- Depending on which relevance measure is used while computing the table, there are some auxiliary options to further specify the computation:
- For the AST relevance measure:
- The -a option defines the actual AST method implementation to be used. Possible arguments are "easa" (Enhanced Annotated Suffix Arrays), "ast_linear" (linear-time and -memory implementation of Annotated Suffix Trees) and "ast_naive" (a slow and memory-consuming implementation, present just for comparison).
- The -d option specifies whether the matching score should be computed in denormalized form (normalized by default; see [Mirkin, Chernyak & Chugunova, 2012]).
- For the Cosine relevance measure:
- The -v option specifies which elements should form the vector space, i.e. be the actual terms (these can be "stems", "lemmata" or just "words"; in the first two cases, the words in the text collection are transformed into stems/lemmata automatically).
- The -w option determines which term weighting scheme should be used ("tf-idf" or just "tf").
The -y option determines whether the matching score should be computed taking into account the synonyms extracted from the text file.
The -l option tells EAST about the language in which the texts in the collection and the keyphrases are written. In general, EAST does not need this information to compute the AST similarity scores. However, it is used to compute the cosine similarity scores (in case the user prefers this relevance measure type). English is the default language; all possible values of this parameter are: "danish" / "dutch" / "english" / "finnish" / "french" / "german" / "hungarian" / "italian" / "norwegian" / "porter" / "portuguese" / "romanian" / "russian" / "spanish" / "swedish".
The -f option specifies the format in which the table should be printed. The format is XML by default (see an example below); the -f option can also take CSV as its parameter.
Please note that you can also specify the path to a single text file instead of a directory. In the case of a directory, only .txt files will be processed.
If you want to print the output to some file, just redirect the EAST output (e.g. by appending > filename.txt to the command in Unix).
Sample output in the XML format:
<table>
  <keyphrase value="KEYPHRASE_1">
    <text name="TEXT_1">0.250</text>
    <text name="TEXT_2">0.234</text>
  </keyphrase>
  <keyphrase value="KEYPHRASE_2">
    <text name="TEXT_1">0.121</text>
    <text name="TEXT_2">0.000</text>
  </keyphrase>
  <keyphrase value="KEYPHRASE_3">
    <text name="TEXT_1">0.539</text>
    <text name="TEXT_3">0.102</text>
  </keyphrase>
</table>
The EAST software also allows you to construct a keyphrase relation graph, which indicates implications between different keyphrases according to the text corpus being analysed. The graph construction algorithm is based on the analysis of co-occurrences of keyphrases in the text corpus. A keyphrase is considered to imply another one if the second phrase occurs frequently enough in the same texts as the first one (that frequency is controlled by the referral confidence parameter). A keyphrase counts as occurring in a text if its presence score for that text exceeds some threshold [Mirkin, Chernyak & Chugunova, 2012].
$ east [-f <graph_format>] [-c <referral_confidence>] [-r <relevance_threshold>] [-p <support_threshold>] [-s] [-d] [-a <ast_algorithm>] [-w <term_weighting>] [-v <vector_space>] [-y] keyphrases graph <keyphrases_file> <directory_with_txt_files>
The -p option configures the threshold for graph node support (the number of documents "containing" the corresponding keyphrase according to the AST method), starting with which the nodes get included into the graph.
The -c option controls the referral confidence level above which the implications between keyphrases are considered to be strong enough to be added as graph arcs. The confidence level should be a float in [0; 1] and is 0.6 by default.
The -r option controls the relevance threshold of the matching score - the minimum matching score at which keyphrases start to be counted as occurring in the corresponding texts. It should be a float in [0; 1] and is 0.25 by default.
- The -f option determines in which format the resulting graph should come to the output. Possible values are:
The -s option, as well as its auxiliary options (-d, -a, -v, -w and -y) configure the relevance scores computation (exactly as for the keyphrases table command). Note that the relevance measure ("ast" / "cosine") used while computing the graph usually largely influences its shape.
Sample output in the edges format:
KEYPHRASE_1 -> KEYPHRASE_3
KEYPHRASE_2 -> KEYPHRASE_3, KEYPHRASE_4
KEYPHRASE_4 -> KEYPHRASE_1
The same graph in gml:
graph [
  node [ id 0 label "KEYPHRASE_1" ]
  node [ id 1 label "KEYPHRASE_2" ]
  node [ id 2 label "KEYPHRASE_3" ]
  node [ id 3 label "KEYPHRASE_4" ]
  edge [ source 0 target 2 ]
  edge [ source 1 target 2 ]
  edge [ source 1 target 3 ]
  edge [ source 3 target 0 ]
]
The example below shows how to use the EAST package in code. Here, we build an Annotated suffix tree for a collection of two strings ("XABXAC" and "HI") and then calculate matching scores for two queries ("ABCI" and "NOPE"):
from east.asts import base

ast = base.AST.get_ast(["XABXAC", "HI"])

print(ast.score("ABCI"))  # 0.1875
print(ast.score("NOPE"))  # 0
The get_ast() method takes the list of input strings and constructs an annotated suffix tree, using suffix arrays by default as the underlying data structure (this is the most efficient implementation known). The algorithm used for AST construction can optionally be specified via the second parameter to get_ast(); along with "easa", possible values include "ast_linear" and "ast_naive".
Working with real texts requires some preprocessing, such as splitting a single input text into a collection of small-sized strings, which later enables matching scores for queries to be more precise. There is a special method text_to_strings_collection() in EAST which does that for you. The following example processes a real text collection and calculates matching scores for an input query:
from east.asts import base
from east import utils

# Prepare your text collection (e.g. from a set of *.txt files)
text_collection = [...]

# Transform the list of texts into a list of shorter substrings
# (this will improve the precision of relevance scores)
strings_collection = utils.text_to_strings_collection(text_collection)

# Construct an AST for these strings
ast = base.AST.get_ast(strings_collection)

# Compute the relevance of a keyphrase to the text collection indexed by this AST.
# The relevance score will always be in [0; 1]
print(ast.score("Hello, world"))
|
OPCFW_CODE
|
import { PropertiesMixin } from "https://unpkg.com/@polymer/polymer@^3.0.0-pre.13/lib/mixins/properties-mixin.js?module";
import { camelToDashCase } from "https://unpkg.com/@polymer/polymer@^3.0.0-pre.13/lib/utils/case-map.js?module";
import { render } from "https://unpkg.com/lit-html@^0.10.0/lib/shady-render.js?module";
export { html } from "https://unpkg.com/lit-html@^0.10.0/lib/lit-extended.js?module";
/**
* Renders attributes to the given element based on the `attrInfo` object where
* boolean values are added/removed as attributes.
* @param element Element on which to set attributes.
* @param attrInfo Object describing attributes.
*/
export function renderAttributes(element, attrInfo) {
for (const a in attrInfo) {
const v = attrInfo[a] === true ? '' : attrInfo[a];
if (v || v === '' || v === 0) {
if (element.getAttribute(a) !== v) {
element.setAttribute(a, v);
}
} else
if (element.hasAttribute(a)) {
element.removeAttribute(a);
}
}
}
/**
* Returns a string of css class names formed by taking the properties
* in the `classInfo` object and appending the property name to the string of
* class names if the property value is truthy.
* @param classInfo
*/
export function classString(classInfo) {
const o = [];
for (const name in classInfo) {
const v = classInfo[name];
if (v) {
o.push(name);
}
}
return o.join(' ');
}
/**
* Returns a css style string formed by taking the properties in the `styleInfo`
* object and appending the property name (dash-cased), a colon, and the
* property value. Properties are separated by a semi-colon.
* @param styleInfo
*/
export function styleString(styleInfo) {
const o = [];
for (const name in styleInfo) {
const v = styleInfo[name];
if (v || v === 0) {
o.push(`${camelToDashCase(name)}: ${v}`);
}
}
return o.join('; ');
}
export class LitElement extends PropertiesMixin(HTMLElement) {
constructor() {
super(...arguments);
this.__renderComplete = null;
this.__resolveRenderComplete = null;
this.__isInvalid = false;
this.__isChanging = false;
}
/**
* Override which sets up element rendering by calling `_createRoot`
* and `_firstRendered`.
*/
ready() {
this._root = this._createRoot();
super.ready();
this._firstRendered();
}
/**
* Called after the element DOM is rendered for the first time.
* Implement to perform tasks after first rendering like capturing a
* reference to a static node which must be directly manipulated.
* This should not be commonly needed. For tasks which should be performed
* before first render, use the element constructor.
*/
_firstRendered() {}
/**
* Implement to customize where the element's template is rendered by
* returning an element into which to render. By default this creates
* a shadowRoot for the element. To render into the element's childNodes,
* return `this`.
* @returns {Element|DocumentFragment} Returns a node into which to render.
*/
_createRoot() {
return this.attachShadow({ mode: 'open' });
}
/**
* Override which returns the value of `_shouldRender` which users
* should implement to control rendering. If this method returns false,
* _propertiesChanged will not be called and no rendering will occur even
* if property values change or `_requestRender` is called.
* @param _props Current element properties
* @param _changedProps Changing element properties
* @param _prevProps Previous element properties
* @returns {boolean} Default implementation always returns true.
*/
_shouldPropertiesChange(_props, _changedProps, _prevProps) {
const shouldRender = this._shouldRender(_props, _changedProps, _prevProps);
if (!shouldRender && this.__resolveRenderComplete) {
this.__resolveRenderComplete(false);
}
return shouldRender;
}
/**
* Implement to control if rendering should occur when property values
* change or `_requestRender` is called. By default, this method always
* returns true, but this can be customized as an optimization to avoid
* rendering work when changes occur which should not be rendered.
* @param _props Current element properties
* @param _changedProps Changing element properties
* @param _prevProps Previous element properties
* @returns {boolean} Default implementation always returns true.
*/
_shouldRender(_props, _changedProps, _prevProps) {
return true;
}
/**
* Override which performs element rendering by calling
* `_render`, `_applyRender`, and finally `_didRender`.
* @param props Current element properties
* @param changedProps Changing element properties
* @param prevProps Previous element properties
*/
_propertiesChanged(props, changedProps, prevProps) {
super._propertiesChanged(props, changedProps, prevProps);
const result = this._render(props);
if (result && this._root !== undefined) {
this._applyRender(result, this._root);
}
this._didRender(props, changedProps, prevProps);
if (this.__resolveRenderComplete) {
this.__resolveRenderComplete(true);
}
}
_flushProperties() {
this.__isChanging = true;
this.__isInvalid = false;
super._flushProperties();
this.__isChanging = false;
}
/**
* Override which warns when a user attempts to change a property during
* the rendering lifecycle. This is an anti-pattern and should be avoided.
* @param property {string}
* @param value {any}
* @param old {any}
*/
_shouldPropertyChange(property, value, old) {
const change = super._shouldPropertyChange(property, value, old);
if (change && this.__isChanging) {
console.trace(`Setting properties in response to other properties changing ` +
`considered harmful. Setting '${property}' from ` +
`'${this._getProperty(property)}' to '${value}'.`);
}
return change;
}
/**
* Implement to describe the DOM which should be rendered in the element.
* Ideally, the implementation is a pure function using only props to describe
* the element template. The implementation must return a `lit-html` TemplateResult.
* By default this template is rendered into the element's shadowRoot.
* This can be customized by implementing `_createRoot`. This method must be
* implemented.
* @param {*} _props Current element properties
* @returns {TemplateResult} Must return a lit-html TemplateResult.
*/
_render(_props) {
throw new Error('_render() not implemented');
}
/**
* Renders the given lit-html template `result` into the given `node`.
* Implement to customize the way rendering is applied. This should not
* typically be needed and is provided for advanced use cases.
* @param result {TemplateResult} `lit-html` template result to render
* @param node {Element|DocumentFragment} node into which to render
*/
_applyRender(result, node) {
render(result, node, this.localName);
}
/**
* Called after element DOM has been rendered. Implement to
* directly control rendered DOM. Typically this is not needed as `lit-html`
* can be used in the `_render` method to set properties, attributes, and
* event listeners. However, it is sometimes useful for calling methods on
* rendered elements, like calling `focus()` on an element to focus it.
* @param _props Current element properties
* @param _changedProps Changing element properties
* @param _prevProps Previous element properties
*/
_didRender(_props, _changedProps, _prevProps) {}
/**
* Call to request the element to asynchronously re-render regardless
* of whether or not any property changes are pending.
*/
_requestRender() {this._invalidateProperties();}
/**
* Override which provides tracking of invalidated state.
*/
_invalidateProperties() {
this.__isInvalid = true;
super._invalidateProperties();
}
/**
* Returns a promise which resolves after the element next renders.
* The promise resolves to `true` if the element rendered and `false` if the
* element did not render.
* This is useful when users (e.g. tests) need to react to the rendered state
* of the element after a change is made.
* This can also be useful in event handlers if it is desirable to wait
* to send an event until after rendering. If possible implement the
* `_didRender` method to directly respond to rendering within the
* rendering lifecycle.
*/
get renderComplete() {
if (!this.__renderComplete) {
this.__renderComplete = new Promise(resolve => {
this.__resolveRenderComplete =
value => {
this.__resolveRenderComplete = this.__renderComplete = null;
resolve(value);
};
});
if (!this.__isInvalid && this.__resolveRenderComplete) {
Promise.resolve().then(() => this.__resolveRenderComplete(false));
}
}
return this.__renderComplete;
}
}
Hello everyone, I am trying to get my MUD to compile. I don't have tons of experience coding, but I have done some online correspondence classes and read mounds of books. I am still lacking the knowledge to get this all down, due to the fact that there are no books about making a MUD, lol. I am looking for someone to help me get this darn thing compiled. When I run make, this is what it spits out. I'm not exactly sure what is what, so please be as descriptive as possible:
make: Entering directory `/cygdrive/c/nmud/holdall/swrots/src'
gcc -c -g3 -Wall save.c
save.c: In function `load_corpses':
save.c:2343: warning: assignment from incompatible pointer type
save.c:2345: error: dereferencing pointer to incomplete type
save.c:2347: error: dereferencing pointer to incomplete type
make: *** [save.o] Error 1
make: Leaving directory `/cygdrive/c/nmud/holdall/swrots/src'
make: *** [all] Error 2
Are you trying to get the mud to compile for the first time, or are you trying to compile because you have just made some changes?
The compiler is pointing you to lines 2343 and surrounding in save.c. Post those lines of code so others can help point out the issues. Make sure to indicate the line numbers. You might also try some google searches for the error messages the compiler is giving you – you're going to have to understand what the compiler is telling you in order to be a self-sufficient programmer. :unclesam:
You can learn C by learning mud code first, but if you do, expect to break things. Expect to break everything. You will have to ask a lot of questions that experienced coders will scoff at. Don't be afraid to jump in head first, but do get a book. You don't have to read it at all, just skimming it will be very helpful, even if only so you know why %s doesn't work with certain types of variables (and what actually does work instead).
Disclaimer: If you do try to learn C through mudwork, you will become intimately familiar with your copy and paste keyboard shortcuts. In fact, people who come to your house will probably wonder why your C and V keys are not marked. They will also ask why everything near your computer desk is broken beyond repair (hint: you will get angry).
You might do best to at least learn just the basics of C. Pointers are a very important part of the language. I too come from a VB background, so that was a hard part for me to get down. As far as books go, the one I found most helpful is the Sams 21-day one. It goes over the basics and moves on to the more meaty stuff.
I would have to agree that books can only take you so far. But I tried to learn my base without understanding the basics, and now that I have done that, it's so much easier.
You will want this book: C Programming Language. It is NOT a good book for learning C from scratch, but you shouldn't be trying to code real C applications without having read it.
For a good introductory book, I hear a lot of good things about this one: Beginning C. I have not read it myself and hence cannot give it a hearty recommendation.
If you want to get into C++ after you are done with C (I recommend learning and switching to C++ as soon as possible, but do NOT do that until you have already become competent in C), this is apparently the text the pros all learned from: C++ Primer.
For a school project, I want to make a solar iPhone charger (specifically for the iPhone 4), and I've spent about a week looking up stuff all over the Internet, and I'm stuck for the hundredth time. First, I had my mind set on an iPhone charger (if you're interested, it can be found here: http://www.instructables.com/id/Solar-7-up-Solar-phone-charger-in-a-bottle/) that basically uses a homemade 5V solar panel, which is plugged into a 3.7V 2000mAh LiPo battery, and then plugged into a LiPo Rider Pro (which is apparently a lithium charge board that has a battery charge regulator and also steps up the voltage to 5V, the required voltage for USB). Anyway, a male-to-female USB cable is plugged into the LiPo Rider Pro, and then you can use your iPhone USB charger to plug into the male-to-female USB cable to charge your phone via the LiPo battery (which is charged from the sun). I think this is how it goes... The instructable is unclear when it gets to plugging into the LiPo Rider Pro, and I've spent a long time trying to figure it out. (If anyone actually reads the instructable, feels that it's easy to explain the part about the LiPo Rider Pro, and feels inclined to do so, I would really appreciate it.)
Because of my uncertainty, I kept looking and came across another solar lithium battery iPhone charger (again, if interested: http://www.instructables.com/id/Lithium-Battery-Solar-USB-iPhone-Arduino-Charger/). I think it's very similar to the first one except this one uses a lithium battery charge controller and a DC-DC USB boosting circuit. My guess is that the LiPo Rider Pro is basically a board that includes both of these things? A 1N4001 diode is also used; use of the LiPo Rider Pro seems to make a diode unnecessary? What kind of confuses me is that the author of the first instructable calls the LiPo Rider Pro a "lithium charge board" and I kind of assumed that that's the same thing as a "lithium battery charge controller" which is used in this second instructable. The second one seems to have additional materials, and I'm unsure if that's better or worse.
So in sum, I guess my question is: what is "better", the LiPo Rider Pro (from the first instructable) or a lithium battery charge controller + a DC-DC USB boosting circuit + a 1N4001 diode (second instructable)? From my ignorant point of view, the LiPo Rider Pro seems better as it seems like an all-in-one. Yes, I've read both product descriptions, but I still don't really understand.
Announcing the release of "yet another dashboard": the Engine Dashboard plugin for OpenCPN. This plugin is a "bastardised" version of the existing built-in dashboard plugin.
The Dashboard displays the following data: Temperature, Engine Hours & Alternator Voltage for either single or dual engine vessels, and fluid levels for Fuel, Live Well, Grey and Black Waste.
For sailors with NMEA2000® engine and tank level sensors, the latest release of the TwoCan plugin, version 1.6, can convert the appropriate messages from NMEA2000® networks to their NMEA 0183 equivalents, which can then be displayed by the Engine Dashboard.
There are a few features yet to be implemented in this version of the Engine Dashboard:
1. It supports only a single rudder.
2. It supports only a single instance of each tank. For example, if you have multiple fuel tanks, it will display the level for only one fuel tank. (This is a limitation in both this plugin and in TwoCan.)
3. While the preferences dialog allows selection of pressure units (Pascal or PSI) and temperature units (Celsius or Fahrenheit), the display only changes the next time OpenCPN is restarted.
4. The instruments do not "zero" if data is no longer being received (e.g. when the engine is switched off).
The rationale for developing yet another dashboard was the following:
1. The existing dashboard plugin has a limitation on how many inputs/controls it can support. Adding these engine displays would therefore have meant deleting some of the other dashboard controls, such as position and depth.
2. The tactics_dashboard, on the other hand, eliminates the above limitation; however, adding engine displays to the tactics plugin may be of little interest, and perhaps confusing, to motor boat users.
3. I also felt that it would be easier and simpler for me to release a simple dashboard purely to display engine & tank data, which would possibly be less confusing for end users.
Unfortunately, development of both this new version of the TwoCan plugin and the Engine Dashboard was undertaken independently and without knowledge of the work that has been done on the new version of the tactics dashboard plugin, which also adds some support for engine controls. I have yet to look at the new version of the tactics dashboard to see if it could be used instead of the engine dashboard plugin. I apologise for any confusion this may cause to OpenCPN users.
Further details about the Engine Dashboard, source code and build instructions can be downloaded from https://github.com/twoCanPlugin/EngineDashboard
On a Raspberry Pi running Buster (and possibly earlier versions) with OpenCPN v5.0, if the dashboard is in a horizontal orientation, OpenCPN crashes when adding or deleting an instrument on the dashboard. The workaround is to add or delete instruments while the dashboard is in a vertical orientation, or while the dashboard is not visible. (Note that the same bug exists in the built-in dashboard and probably also in the tactics-dashboard.)
I am able to create the image, but it fails when it comes to installing Office 2010. All other applications are being installed. On the Share page, type the share name MDT_Office_Share$, and then click Next. The application is added to the list of applications in the details pane in the Deployment Workbench.
Click the Rules tab in the MEDIA001 Properties dialog box and paste the copied rules over any existing rules. If you have a product key, type it. In this post we will add Microsoft Office 2010 as an application in our Deployment Share, and configure the environment to achieve a silent and unattended installation for this suite.
The Progress page indicates that the share is being created. Adding the Windows 7 operating system: once the deployment share is created, the next step is to add the files from the Windows 7 image. This can then be used later in the task sequence, and you can write it to the database.
If the check box is cleared, the wizard copies the source files to the deployment share. You will see something like this; we are going to edit it, adding the features we would like to install.
Click Next to see the Summary page, and then click Next to create the image. It's a thing you want to keep 😉. Create an application bundle (optional): if you added multiple applications to the deployment share, you can see them all by clicking Applications under MDT Deployment Share (C:\MDT_Office_Share). Install the added applications, roles, and features.
For example, I can give Finance roles Excel, PowerPoint and Word; marketing roles just PowerPoint and Word; and sales roles Access, PowerPoint and Word. Click the Browse button and create a folder named DeploymentShare$ in the root of your disk volume, as shown in Figure 1. The customization .msp files that you place in the Updates folder will be deployed first.
In the Browse for Folder dialog box, locate and select the folder that contains the Office 2010 setup files, and then click OK. The Re-Image right is tied to the organization, not the machine. I'm not 100 percent sure how these keys are tracked, or of the best way to do this.
In the wizard, click on "Browse". To extract the setup files, open a prompt and use the following command: c:\source\directory\[Your source installation file].exe /extract:c:\target\directory. The same method can be used to extract the latest service pack files.
In Source directory, type source_folder (where source_folder is the fully qualified path to the folder containing the application source files). The description below is derived straight from the MDT help files, but I added a few screenshots and explain the "Office Products" tab that lights up in MDT. You can then edit the Config.xml attributes that are displayed.
Update the deployment share by right-clicking the MDT Build Lab deployment share and selecting Update Deployment Share. Once installed, we can access any of the components from the Start Menu. I placed only my saved LMR.MSP file in the Updates folder (there are no other files available), so is there anything missing in my Office 2013 package?
In all "simpliness" you need to add the following steps to the task sequence: copy the xrm-ms certificate to c:\windows\system32\oem\, then run slmgr /ilc c:\windows\system32\oem\certname.xrm-ms (certname is replaced with whatever you named the certificate). On the Summary page, click Next. I would like to have the activation automated.
Add Office 2010 Professional Plus to the deployment share. To add the Office 2010 Professional Plus setup files to the deployment share: under MDT Deployment Share (C:\MDT_Office_Share), right-click Applications. Using the Deployment Workbench, expand the Deployment Shares node, expand MDT Build Lab, select the Applications node and create a folder named Microsoft. Select the following options and click on "Apply": "Office product to install": the version you are using, in my case Professional Plus "ProPlus"; "Office languages": the language available, in my case "en-us".
Select the source directory where we can find the "setup.exe" file. If you have a MAK key for Office 2013, then you need to specify that in the OCT. Note: in my environment, my WSUS server is named WSUS01, and I'm using the default WSUS port in Windows Server 2012 R2, which is 8530. Step 5 – Create the MDT task sequence and enable Windows Updates: on MDT01, using the Deployment Workbench, in the MDT Build Lab deployment share, select the Task Sequences node.
Click Next. Note: if you want to move the Office 2010 setup files instead of copying them, select the check box next to "Move the files to the deployment share instead of copying them". Run Sysprep and reboot into WinPE.
Select or clear the "Move the files to the deployment share instead of copying them" check box based on your requirements, and then click Next. Select the Customer name check box and type the name of the customer. Adding applications to Microsoft Deployment Toolkit 2010 requires only running a simple wizard, which should not be any problem. Expand the Task Sequences node, right-click the Windows 10 node, and select New Task Sequence.
Details In Publisher, type publisher_name (where publisher_name is the name of the application’s publisher). Click Next.Click Next and Specify the Installation program as setup.exe, choose Install behavior as Install for system. In “Deployment Shares”, expand the deployment share created, right click “Task Sequences” and select “New Task Sequence”. 2. We appreciate your feedback.
Best way to shorten urls
I am using the base64 option to generate my URLs. However, the URLs are getting very large. For example:
https://d1lrsw3bruipko.cloudfront.net/eyJidWNrZXQiOiJnZXRkb2dzYXBwIiwia2V5IjoiY29udGVudFwvZDkzMjJmNDAtMDNiYS00NGZjLWI4ZmQtZDMzNmJjMjU4YWIwIiwiZWRpdHMiOnsid2VicCI6eyJxdWFsaXR5Ijo4MH0sImpwZWciOnsicXVhbGl0eSI6ODB9LCJyZXNpemUiOnsid2lkdGgiOjY0LCJoZWlnaHQiOjY0LCJmaXQiOiJjb3ZlciJ9LCJjb250ZW50TW9kZXJhdGlvbiI6eyJtaW5Db25maWRlbmNlIjo5MCwiYmx1ciI6MTAwLCJtb2RlcmF0aW9uTGFiZWxzIjpbIkV4cGxpY2l0IE51ZGl0eSIsIlZpb2xlbmNlIiwiVmlzdWFsbHkgRGlzdHVyYmluZyIsIkhhdGUgU3ltYm9scyJdfX19
What would be the best way to make these shorter?
I could use CloudFlare Workers to generate URLs like this: https://images.domain.com/some-id/size, which then redirects the response to the base64-encoded URL. However, I don't want to add CloudFlare just for this, since that feels redundant.
I guess I could do something similar with CloudFront? Or should I just modify the backend-handler? I always pass the same settings, except for width & height.
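(For context on why these URLs get so long: the path segment is just a base64 encoding of the whole request JSON. A minimal sketch of this, assuming the handler's scheme is base64 over the JSON shown below; the key is a placeholder and the exact base64 variant may differ:)

```python
import base64
import json

# Sketch (assumption): the image-handler URL path is the base64-encoded
# JSON request. Fields mirror the example in this thread.
request = {
    "bucket": "getdogsapp",
    "key": "content/d9322f40-03ba-44fc-b8fd-d336bc258ab0",
    "edits": {"resize": {"width": 64, "height": 64, "fit": "cover"}},
}
encoded = base64.urlsafe_b64encode(json.dumps(request).encode()).decode()
url = "https://d1lrsw3bruipko.cloudfront.net/" + encoded

# Decoding is symmetric: the full settings object rides inside the URL,
# which is why the URL grows with every extra edit option.
decoded = json.loads(base64.urlsafe_b64decode(encoded))
```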
@Livijn, you can use Thumbor URLs to resize, set the format, set the quality, etc. Please refer to https://docs.aws.amazon.com/solutions/latest/serverless-image-handler/thumbor-filters.html.
Based on the example you have provided, you use content moderation:
{
"bucket": "getdogsapp",
"key": "content\/d9322f40-03ba-44fc-b8fd-d336bc258ab0",
"edits": {
"webp": {
"quality": 80
},
"jpeg": {
"quality": 80
},
"resize": {
"width": 64,
"height": 64,
"fit": "cover"
},
"contentModeration": {
"minConfidence": 90,
"blur": 100,
"moderationLabels": [
"Explicit Nudity",
"Violence",
"Visually Disturbing",
"Hate Symbols"
]
}
}
}
which is not supported by Thumbor URLs. Since you always use the same settings, you can update the handler.
Yes, I know I can use Thumbor URLs, but: a) they don't support all edits; b) they are still very long and ugly URLs.
I'll try updating the handler, but that feels less "robust" when it comes to updating the solution in the future.
To add to this, I'm very interested in easier semantic URLs; there was a discussion on this earlier which settled on a design that would be perfect using query parameters:
https://github.com/aws-solutions/serverless-image-handler/issues/184
However it seems like this didn't reach master (doesn't seem to work on the stack I just created)
Query parameters are much easier to work with than Thumbor style urls – for example, we can store the base image URL and append options as we need in the frontend:
<img src={url + '?width=200&height=200&fit-in=cover'} />
However, with filters in the middle of the URL as it currently is, we'd need to patch the URL, which is more fiddly.
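(A hypothetical sketch of what that query-parameter translation could look like server-side; the parameter names are illustrative, not the solution's actual API:)

```python
from urllib.parse import urlparse, parse_qs

# Sketch: translate ?width=...&height=...&fit=... query parameters into
# the "edits" object discussed above. Parameter names are illustrative.
def edits_from_query(url):
    qs = parse_qs(urlparse(url).query)
    edits = {}
    resize = {}
    if "width" in qs:
        resize["width"] = int(qs["width"][0])
    if "height" in qs:
        resize["height"] = int(qs["height"][0])
    if "fit" in qs:
        resize["fit"] = qs["fit"][0]
    if resize:
        edits["resize"] = resize
    return edits

edits_from_query("https://images.example.com/img.jpg?width=200&height=200&fit=cover")
# → {'resize': {'width': 200, 'height': 200, 'fit': 'cover'}}
```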
I updated the handler and deployed a custom solution.
Thanks @Livijn!
Sorry to reopen this, but does <img src={url + '?width=200&height=200&fit-in=cover'} /> really work, @haxiomic? It is not working for me on your develop branch, @Livijn 😔
Hi @Livijn @haxiomic @frannpr
Does this still work? I am expecting something Thumbor-style for an eCommerce site on-demand implementation.
'''
.. |param_local| replace:: The local folder of the repository
.. |param_remote_url| replace:: The remote URL of the repository
'''
from photon import IDENT
from photon.photon import check_m
from photon.util.locations import search_location
from photon.util.system import get_hostname
class Git(object):
'''
The git tool helps to deal with git repositories.
:param local:
|param_local|
* If ``None`` given (default), it will be ignored if there \
is already a git repo at `local`
* If no git repo is found at `local`, a new one gets \
cloned from `remote_url`
:param remote_url:
|param_remote_url|
* |appteardown| if `remote_url` is set to ``None`` but \
a new clone is necessary
:param mbranch:
The repository's main branch.
Is set to `master` when left to ``None``
'''
def __init__(self, m, local, remote_url=None, mbranch=None):
super().__init__()
self.m = check_m(m)
self.__local = search_location(local, create_in=local)
self.__remote_url = remote_url
if not mbranch:
mbranch = 'master'
self.__mbranch = mbranch
if self.m(
'checking for git repo',
cmdd=dict(cmd='git rev-parse --show-toplevel', cwd=self.local),
critical=False,
verbose=False
).get('out') != self.local:
if not self.remote_url:
self.m(
'a new git clone without remote url is not possible.',
state=True,
more=dict(local=self.local)
)
self.m(
'cloning into repo',
cmdd=dict(
cmd='git clone %s %s' % (self.remote_url, self.local)
)
)
self.m(
'git tool startup done',
more=dict(remote_url=self.remote_url, local=self.local),
verbose=False
)
@property
def local(self):
'''
:returns:
|param_local|
'''
return self.__local
@property
def remote_url(self):
'''
:returns:
|param_remote_url|
'''
return self.__remote_url
@property
def remote(self):
'''
:returns:
Current remote
'''
return self._get_remote().get('out')
@property
def commit(self):
'''
:param commit:
Checks out specified commit.
If set to ``None`` the latest commit will be checked out
:returns:
A list of all commits, descending
'''
commit = self._log(num=-1, format='%H')
if commit.get('returncode') == 0:
return commit.get('stdout')
@commit.setter
def commit(self, commit):
'''
.. seealso:: :attr:`commit`
'''
c = self.commit
if c:
if not commit:
commit = c[0]
if commit in c:
self._checkout(treeish=commit)
@property
def short_commit(self):
'''
:returns:
A list of all abbreviated commit hashes, descending
.. seealso:: :attr:`commit`
'''
commit = self._log(num=-1, format='%h')
if commit.get('returncode') == 0:
return commit.get('stdout')
@property
def log(self):
'''
:returns:
The last 10 commit entries as dictionary
* 'commit': The commit-ID
* 'message': First line of the commit message
'''
log = self._log(num=10, format='%h::%s').get('stdout')
if log:
return [
dict(commit=c, message=m) for c, m in [
l.split('::') for l in log
]
]
@property
def status(self):
'''
:returns:
Current repository status as dictionary:
* 'clean': ``True`` if there are no changes ``False`` otherwise
* 'untracked': A list of untracked files (if any and not 'clean')
* 'modified': A list of modified files (if any and not 'clean')
* 'deleted': A list of deleted files (if any and not 'clean')
* 'conflicting': A list of conflicting files (if any and not 'clean')
'''
status = self.m(
'getting git status',
cmdd=dict(cmd='git status --porcelain', cwd=self.local),
verbose=False
).get('stdout')
o, m, f, g = list(), list(), list(), list()
if status:
for w in status:
s, t = w[:2], w[3:]
if '?' in s:
o.append(t)
if 'M' in s:
m.append(t)
if 'D' in s:
f.append(t)
if 'U' in s:
g.append(t)
clean = False if o + m + f + g else True
return dict(
untracked=o, modified=m,
deleted=f, conflicting=g, clean=clean)
@property
def branch(self):
'''
:param branch:
Checks out specified branch (tracking if it exists on remote).
If set to ``None``, 'master' will be checked out
:returns:
The current branch
(This could also be 'master (Detached-Head)' - Be warned)
'''
branch = self._get_branch().get('stdout')
if branch:
return ''.join(
[b for b in branch if '*' in b]
).replace('*', '').strip()
@branch.setter
def branch(self, branch):
'''
.. seealso:: :attr:`branch`
'''
if not branch:
branch = self.__mbranch
tracking = (
''
if branch in self._get_branch(remotes=True).get('out') else
'-B'
)
self._checkout(treeish='%s %s' % (tracking, branch))
@property
def tag(self):
'''
:param tag:
Checks out specified tag. If set to ``None`` the latest
tag will be checked out
:returns:
A list of all tags, sorted as version numbers, ascending
'''
tag = self.m(
'getting git tags',
cmdd=dict(
cmd='git tag -l --sort="version:refname"',
cwd=self.local
),
verbose=False,
)
if tag.get('returncode') == 0:
return tag.get('stdout')
@tag.setter
def tag(self, tag):
'''
.. seealso:: :attr:`tag`
'''
t = self.tag
if t:
if not tag:
tag = t[-1]
if tag in t:
self._checkout(treeish=tag)
@property
def cleanup(self):
'''
Commits all local changes (if any) into a working branch,
merges it with 'master'.
Checks out your old branch afterwards.
|appteardown| if conflicts are discovered
'''
hostname = get_hostname()
old_branch = self.branch
changes = self.status
if not changes.get('clean'):
self.branch = hostname
utmo = changes.get('untracked', []) + changes.get('modified', [])
for f in utmo:
self.m(
'adding file to repository',
                cmdd=dict(cmd='git add "%s"' % (f), cwd=self.local),
more=f,
critical=False
)
for f in changes.get('deleted', []):
self.m(
'deleting file from repository',
                cmdd=dict(cmd='git rm "%s"' % (f), cwd=self.local),
more=f,
critical=False
)
if changes.get('conflicting'):
self.m(
'Well done! You have conflicting files in the repository!',
state=True,
more=changes
)
self.m(
            'auto committing changes',
cmdd=dict(
cmd='git commit -m "%s %s auto commit"' % (
hostname, IDENT
),
cwd=self.local
),
more=changes
)
self.branch = None
self.m(
'auto merging branches',
cmdd=dict(
cmd='git merge %s -m "%s %s auto merge"' % (
hostname, hostname, IDENT
),
cwd=self.local
),
more=dict(
branch=old_branch,
temp_branch=hostname
)
)
self.branch = old_branch
return dict(changes=changes, pull=self._pull())
@property
def publish(self):
'''
        Runs :attr:`cleanup` first,
then pushes the changes to the :attr:`remote`.
'''
self.cleanup
remote = self.remote
branch = self.branch
return self.m(
'pushing changes to %s/%s' % (remote, branch),
cmdd=dict(
cmd='git push -u %s %s' % (remote, branch),
cwd=self.local
),
more=dict(remote=remote, branch=branch)
)
def _get_remote(self, cached=True):
'''
Helper function to determine remote
:param cached:
Use cached values or query remotes
'''
return self.m(
'getting current remote',
cmdd=dict(
cmd='git remote show %s' % ('-n' if cached else ''),
cwd=self.local
),
verbose=False
)
def _log(self, num=None, format=None):
'''
Helper function to receive git log
:param num:
Number of entries
:param format:
Use formatted output with specified format string
'''
num = '-n %s' % (num) if num else ''
format = '--format="%s"' % (format) if format else ''
return self.m(
'getting git log',
cmdd=dict(cmd='git log %s %s' % (num, format), cwd=self.local),
verbose=False
)
def _get_branch(self, remotes=False):
'''
Helper function to determine current branch
:param remotes:
List the remote-tracking branches
'''
return self.m(
'getting git branch information',
cmdd=dict(
cmd='git branch %s' % ('-r' if remotes else ''),
cwd=self.local
),
verbose=False
)
def _checkout(self, treeish):
'''
Helper function to checkout something
:param treeish:
        String for '`tag`', '`branch`', or remote tracking '-B `branch`'
'''
return self.m(
'checking out "%s"' % (treeish),
cmdd=dict(cmd='git checkout %s' % (treeish), cwd=self.local),
verbose=False
)
def _pull(self):
'''
Helper function to pull from remote
'''
pull = self.m(
'pulling remote changes',
cmdd=dict(cmd='git pull --tags', cwd=self.local),
critical=False
)
if 'CONFLICT' in pull.get('out'):
self.m(
'Congratulations! You have merge conflicts in the repository!',
state=True,
more=pull
)
return pull
Hacking the Quarantine
If you are reading this, I hope you are safe and in good Health
The quarantine is tough. At the time of writing, I’m nearing 3 months of sitting at home and doing everything except college work. It was easy at first, but now I’m all sloppy and don’t think about college or studies anymore. Still, I’ve gotten a bit better at a few things these days, and I’m proud of that.
At the tip of the Iceberg
I am an avid hackathon enthusiast. Even though I have only won one hackathon, hackathons feel like my weekly/monthly family meetings: I get to see the usual family members, meet new people, and hang out with them for a day or two. The energy radiating through a hackathon is something you won’t get anywhere else, and personally I call this energy “The Vibe” 😁. Then the quarantine happened. I was cut off from the people I meet and locked out from travelling. It was frustrating at first, but I managed to cope.
As a technophile, I’m always in front of my computer, and I enjoy it most of the time. I got an internship at a startup to kick everything off, and by April I joined another one as an intern, which I eventually lost after a month 😁.
Falling Down from the Iceberg
Heading up has been fun, and I was conflicted about whether I should run back from the tip or fall down into the ocean. I embraced the ocean.
Near the brim of the Ocean
In the first week of May, MLH started its Summer League, a series of virtual events, and we simply participated in them. That was a weekend full of fun, and it was where vett.space originated. I worked on the UI in Vue, which I had no idea about before the hackathon. We used Discord for the first time and relied on its voice features to keep us in the loop. That was my first online hackathon where I got “The Vibe”. vett.space went on to become something we’d never imagined: a time-passing game and a breakout game. Fast-forward to the last week of May, when we joined forces for another MLH hackathon, Rookie Hacks, and built a platform for collecting rain-gauge data from Kerala. Subin, Kiran, and I made this, and we eventually won a category prize, which made us really happy. Discord’s voice and video features are top class and we totally loved them. We even had our team breakout sessions in the vett.space game with voice chat, which made it even more fun.
Embracing the Ocean
For me, hackathons are the best way to build community bonds. My teammates and I are not from the same college: the four of us are from four different colleges in various parts of Kerala. The only things that bring us together are the love of technology, learning new stuff, and, obviously, hackathons. We always come up with new ideas that we feel are socially good and fun to hack on. Hackathons have never disappointed me except for one (my first, where I didn’t know anything), and even that turned out to be an eye opener. Hackathons are the best place to be yourself and meet lots of passionate people just like you.
I embrace the community and the Hackathon culture❤️.
This is the second post in a series about what we learned while developing Shortcut. Today, we have a look at usability testing.
When we designed the new user interface for Shortcut, we had several options for a new design. We wanted to test how users respond to the different interfaces, in particular how easy they were to navigate.
How we tested
As with all things in a startup, we prefer to do things simply and efficiently. As it turns out, even experts agree that you need only a few users and simple tests to find most of the problems with the user experience of your prototypes: Jakob Nielsen says 5 users are sufficient.
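Nielsen’s “5 users” figure comes from a simple diminishing-returns model, where each additional tester uncovers a shrinking share of the remaining problems. A quick sketch of that model (the function name is ours, the constants are Nielsen’s published estimates):

```python
def problems_found_fraction(n_users, L=0.31):
    """Nielsen/Landauer model: expected fraction of usability problems
    found by n_users testers, where L is the probability that a single
    user uncovers any given problem (~0.31 in Nielsen's data)."""
    return 1 - (1 - L) ** n_users

# Five users already uncover roughly 84% of the problems
print(round(problems_found_fraction(5), 2))   # → 0.84
```

Past five users, each extra tester mostly re-discovers known problems, which is why cheap, small-sample tests like the ones below pay off.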
In his “Joel Test”, Joel Spolsky mentions the same idea and calls it hallway usability testing: pick a few random people from the hallway and let them test your prototype.
Introducing Starbucks Usability Testing
We decided to build on the same idea, but move the tests from the hallway to the closest Starbucks. Why Starbucks? We think Starbucks is the ideal location for testing mobile apps, because:
- there is WiFi
- people tend to have time
- the sample of users is less biased than the one composed from your average company hallway victims
- the target group matches quite well for mobile apps. Most people sitting in Starbucks seem to be smartphone owners
So two of us went to the closest Starbucks, equipped with our latest App prototypes and some pre-printed evaluation forms. We bought some Starbucks gift cards as rewards. And we were ready for testing.
We organized it so that one of us would pick a potential participant from among the guests and ask whether they were interested in testing a new iPhone app, explaining that it would take 20 minutes and that they would get a gift card as a reward. The same person then guided the test user through the tasks, while the other observed and took notes.
A surprisingly large fraction of the people we asked agreed to take the test, and we got quite a variety of test users.
Honestly, at first it seemed a bit awkward to approach people with the task, but after a while it turned out to be a lot of fun!
What we tested
So, what did we test you may ask? We had two alternative designs for our result screen. A more “classic” version, and the “Path” version we described in our previous post. We knew that the Path version would solve some challenges we faced with the classic version, but we wanted to see how users react. The picture below shows the two alternatives.
We had a simple test protocol which covered the usability of some typical tasks on the result screen, plus some additional general tasks in the app (such as whether the introductory tour is understandable).
Without spending too many words on details: the tests confirmed that the “Path” version of the UI worked better, but that there were some common problems people ran into. These we could fix before our launch.
We believe that Starbucks usability testing is a simple and efficient way to test usability, in particular for mobile apps and for startups on a shoestring budget. Even for larger companies it can be highly efficient, as no time needs to be spent arranging appointments with test persons. The tests could be extended with video recording from a laptop. The Starbucks environment is probably less suited to testing complex or expert applications.
Efficient numerical methods for Radiative Transfer Equation
The PhD defence of Olena Palii will take place (partly) online and can be followed by a live stream.
Olena Palii is a PhD student in the research group Mathematics of Computational Science (MCS). Supervisor is prof.dr.ir. J.J.W. van der Vegt and co-supervisor is dr. M. Schlottbom both from the Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS).
In this thesis we studied approximation methods for the radiative transfer equation, which has numerous important applications, see Chapter 1. For most of these applications the radiative transfer equation cannot be solved analytically and a wide variety of numerical methods has been developed.
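For reference, the mono-energetic radiative transfer equation underlying this work is commonly written in the following standard form (a textbook statement, not quoted from the thesis):

```latex
\Omega \cdot \nabla u(x,\Omega) + \sigma_t(x)\, u(x,\Omega)
  = \sigma_s(x) \int_{S^2} k(\Omega \cdot \Omega')\, u(x,\Omega')\, \mathrm{d}\Omega'
  + f(x,\Omega),
```

where u is the specific intensity, \sigma_t and \sigma_s are the total and scattering cross sections, k is the scattering phase function, and f is an interior source. The angular integral couples all directions at once, which is precisely what the angular discretizations discussed below must approximate.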
As an important introductory step, we gave an overview of classical semidiscretizations in the angular component. The two most frequently used discretizations, the discrete ordinates and spherical harmonics methods, are summarized in Chapter 1. While the spherical harmonics discretization turns the radiative transfer equation into a system of linear equations with tridiagonal structure, approximating the boundary conditions effectively requires extra steps. Furthermore, since the spherical harmonics expansion is a global approximation method, it is not suited to approximating non-smooth or discontinuous solutions, unlike the discrete ordinates method, which is a local approximation in angle. The discrete ordinates method yields a consistent discretization of both the radiative transfer equation and the boundary conditions, though at the cost of a dense scattering matrix.
A number of iterative techniques have been developed to tackle the difficulty that arises from the dense scattering matrix. A summary of some important methods was given in Chapter 2. We discussed two closely related methods, the first collision source method and the standard source iteration method, which is often accompanied by further preconditioning techniques such as the diffusion synthetic acceleration (DSA) technique. In the first collision source method the radiative transfer boundary value problem is split into two equations for the uncollided and collided components, which can be approximated separately by different numerical methods. In general this can, however, introduce consistency errors, which are difficult to analyse. We therefore turned to the source iteration method, which can be discretized consistently and for which convergence results are available. On the basis of these methods we gave in Chapter 2 a description of a splitting technique, which is essentially an extension of the first collision source method. These iterative methods, together with the aforementioned discretization techniques, provided the inspiration for the major part of our research.
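Abstractly, source iteration is a fixed-point (Richardson) iteration for an operator equation u = Ku + f, which converges when the scattering operator K is a contraction. The following toy numeric sketch illustrates that structure on a small dense system; it is our illustration, not the thesis discretization:

```python
def source_iteration(K, f, tol=1e-10, max_iter=1000):
    """Fixed-point (Richardson) iteration u_{k+1} = K u_k + f.

    Converges when K is a contraction, mirroring how source iteration
    treats the scattering operator in transport problems."""
    n = len(f)
    u = [0.0] * n
    for _ in range(max_iter):
        u_new = [sum(K[i][j] * u[j] for j in range(n)) + f[i]
                 for i in range(n)]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            return u_new
        u = u_new
    return u

# Toy 2x2 "scattering" operator with spectral radius < 1
K = [[0.4, 0.1],
     [0.1, 0.4]]
f = [1.0, 1.0]
u = source_iteration(K, f)   # converges to the solution of (I - K) u = f
```

For forward-peaked scattering the contraction factor approaches 1 and this plain iteration stalls, which is exactly why the DSA and subspace-correction preconditioners discussed here are needed.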
In Chapter 3 we presented a discontinuous approximation in angle that allows for arbitrary partitions of the angular domain and arbitrary polynomial degrees on each element of that partition. As such, it can be understood as a generalization of both the classical spherical harmonics approximation, where the angular domain is discretized by a single interval (−1, 1) with polynomials of high degree, and the discrete ordinates method, where the angular domain is partitioned into several intervals with piecewise constant functions. In particular, the approach described in Chapter 3 allows one to account for the natural discontinuity of the solution at . In Chapter 3 an hp-discretization was applied to the even-parity formulation of the radiative transfer equation with isotropic scattering. Moreover, we developed and analysed an iterative solution technique that employs subspace correction as a preconditioner. Our approach was inspired by the DSA-preconditioned source iteration method. It was shown that our iterative method exhibits convergence independent of the resolution of the computational mesh.
In Chapter 4 we then focused on an efficient iterative framework that is capable of accurately solving the system of linear equations arising from the discretization of anisotropic radiative transfer problems. In the case of forward-peaked scattering the convergence of the standard DSA-preconditioned source iteration method is slow, hence acceleration with an appropriate preconditioning technique is necessary. In Chapter 4 we proposed a provably convergent iterative method equipped with two preconditioners, one of which corresponds to an efficient approximate inversion of the transport operator. The second preconditioner was used to improve the standard contraction rate of the source iteration method. The subspace correction is then constructed from low-order spherical harmonics expansions, the eigenfunctions corresponding to the largest eigenvalues of the anisotropic scattering operator. The method is shown to be efficient if the scattering operator is applied properly, for which we used and -matrix compression algorithms.
Finally, in Chapter 5 we considered a non-tensor-product discontinuous Galerkin discretization of the even-parity radiative transfer equations in slab geometry. We proved stability and well-posedness for the symmetric interior penalty discontinuous Galerkin method. We also investigated the numerical convergence of the phase-space discontinuous Galerkin method. For piecewise smooth solutions the phase-space discontinuous Galerkin method with low-order polynomials displays a linear rate of convergence. We showed numerically that for non-smooth solutions the use of adaptive mesh refinement allows for efficient approximation. The appropriate choice of the error estimator remains an open question. Despite the similarity of the even-parity form of the RTE to standard elliptic problems, standard elliptic residual-based error estimators cannot be generalized directly.
There are several problems, which are open for future research.
- Developing and analyzing proper a-posteriori error estimators for our discontinuous Galerkin discretization of the radiative transfer equations in phase-space, allowing for hp-adaptivity.
- Improving the error analysis of the preconditioned iterative schemes presented in Chapters 3 and 4, by proving precise quantitative rates of convergence.
- Developing multigrid methods as an alternative to the preconditioned source iteration method.
Judy (Shuxuan) Nie is a Senior SOA Consultant specializing in SOA and Java technologies. He has 14 years of experience in the IT industry, spanning SOA technologies such as BPEL, ESB, SOAP, and XML; Enterprise Java technologies; Eclipse plug-ins; and other areas such as C++ cross-platform development.
Since 2010, he has been working at Rubicon Red, helping customers resolve integration issues, and design and implement highly available infrastructure platforms on Oracle VM and Exalogic.
From 2007 to 2010, he worked in the Oracle Global Customer Support Team, focused on helping customers solve their middleware/SOA integration problems.
Before joining Oracle, he worked at the IBM China Software Development Lab for four years as a staff software engineer, where he participated in several complex products on IBM Lotus Workplace, WebSphere, and the Eclipse platform; he then joined the Australian Bureau of Meteorology Research Centre, responsible for the implementation of the Automated Thunderstorm Interactive Forecast System for Aviation and Defense.
He holds an MS in Computer Science from Beijing University of Aeronautics and Astronautics.
Jos van den Oord is an Oracle Consultant/DBA for Transfer-Solutions in the Netherlands. He has specialized in Oracle Database Management Systems since 1998, with his main interest being in Oracle RDBMS Maximum Availability Manageable Architecture Environments (Real Application Cluster, DataGuard, MAA, and Automatic Storage Management). He is a proud member of the Oracle Certified Master community, having successfully passed the exam for Database 11g. He prefers to work in the field of advising, implementing, and problem-solving with regards to the more difficult issues and HA topics.
Gavin Soorma is an Oracle Certified Master with over 17 years of experience. He is also an Oracle Certified Professional (versions 7.3, 8i, 9i, 10g, and 11g) as well as an Oracle Certified Expert in 10g RAC.
He is a regular presenter at various Oracle conferences and seminars, having presented several papers at the IOUG, South African Oracle User's Group, Oracle Open World, and the Australian Oracle User Group. Recently, at the 2013 AUSOUG held in Melbourne and Perth, he presented a paper on Oracle GoldenGate titled "Real Time Access to Real Time Information".
He is currently employed as a Senior Principal Consultant for an Oracle solution provider, OnCall DBA based in Perth, Western Australia. Prior to this, he held the position of Senior Oracle DBA and Team Lead with Bank West in Perth. Before migrating to Australia, he worked for the Emirates Airline Group IT in Dubai for over 15 years where he held the position of Technical Team Manager, Databases.
He has also written a number of tutorials and notes on Oracle GoldenGate which can be accessed via his personal blog website http://gavinsoorma.com.
Michael Verzijl is a Business Intelligence Consultant, specializing in Oracle Business Intelligence, Oracle Data Warehousing and Oracle GoldenGate.
He has a wide range of experience in the financial, utilities, telecom, and government industries that include BI technologies such as Oracle, Informatica, IBM Cognos, and SAP Business Objects.
Currently he is employed as a BI Consultant for Accenture in the Netherlands, specializing in Business Intelligence and Data Warehousing.
In this tutorial we will be going over how to configure Out-of-Band management for APICs and fabric switches. Functions such as querying the APIC and fabric switches via SNMP will not work unless Out-of-Band management IP addresses are configured.
Prerequisites:
* Fabric discovery completed
* Physical Out-of-Band connectivity to your APIC(s) and fabric switches
* Understanding of ACI contracts
* Running ACI version 3.1(2m)
What we will cover:
* Configuring Out-of-Band static IP addresses
* Configuring Out-of-Band contracts, contract filters, and contract subjects
Configuring Out-of-Band Static IP Addresses:
Navigate in your APIC web GUI to the following path:
Tenants -> mgmt -> Node Management Addresses -> Static Node Management Addresses
Right click Static Node Management Addresses and select Create Static Node Management Addresses
This is where we add the APIC and fabric switch Out-of-Band IP addresses:
You have the ability to add a range of node IDs; however, I always like to configure my node Out-of-Band IPs one by one. Though this can be more tedious and time-consuming, it guarantees that a specific device receives the exact IP address I want to give it. To configure an IP for a specific node, just put the same node ID in the From and To fields for Node Range. Select or create your Out-of-Band Management EPG; in this tutorial we will be using the default policy. The Out-of-Band Management EPG will come into play later, when applying contracts to allow specific Out-of-Band communications which are not implicitly allowed by default.
The below configuration will apply the Out-of-Band IP to APIC-1:
When prompted to proceed select Yes.
After configuring your APICs and nodes you will see them listed:
To verify the IP configuration on an APIC, console or SSH to the APIC and run the ifconfig oobmgmt command. In the command output you will see the oobmgmt interface with an IP address and configured netmask:
apic1# ifconfig oobmgmt
oobmgmt: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.122.143.30 netmask 255.255.255.192 broadcast 10.122.143.63
inet6 fe80::eebd:1dff:fe69:9946 prefixlen 64 scopeid 0x20<link>
ether ec:bd:1d:69:99:46 txqueuelen 1000 (Ethernet)
RX packets 673354 bytes 249312978 (237.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 440480 bytes 411215249 (392.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
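As a cross-check on the addressing above: the reported netmask 255.255.255.192 is a /26, which is why the broadcast address ends in .63. Python’s standard ipaddress module confirms this (a quick sanity-check sketch, not part of the APIC tooling):

```python
import ipaddress

# Address and netmask as reported by `ifconfig oobmgmt` above
iface = ipaddress.ip_interface('10.122.143.30/255.255.255.192')
print(iface.network)                    # → 10.122.143.0/26
print(iface.network.broadcast_address)  # → 10.122.143.63
```

This is handy when planning the static node ranges, since all OOB addresses for a pod must fall inside the same management subnet.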
To verify the IP address configuration on a fabric switch, run ifconfig eth0 (for leaves and non-modular spines) or ifconfig eth6 (for modular spines):
leaf1# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 04:62:73:57:a8:2e
inet addr:10.122.143.33 Bcast:10.122.143.63 Mask:255.255.255.192
inet6 addr: fe80::662:73ff:fe57:a82e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:546369 errors:0 dropped:0 overruns:0 frame:0
TX packets:127117 errors:0 dropped:0 overruns:0 carrier:0
RX bytes:68157736 (65.0 MiB) TX bytes:19588687 (18.6 MiB)
If a configured Out-of-Band IP address does not show up under the physical interface of the APIC or fabric switch, verify that the physical NIC is connected; an IP address will not show up if the physical interface is down. Run the ethtool command to verify the status of the physical Out-of-Band interface on the APIC or fabric switch. If the interface is up you should see output similar to the following:
apic1# ethtool oobmgmt
Link detected: yes
leaf1# ethtool eth0
Link detected: yes
In the event Link detected is not set to yes, you will need to verify the physical connectivity of the Out-of-Band interface. This concludes the necessary IP address configuration for the APICs and fabric switches.
Configuring Out-of-Band Contracts:
By default, services such as HTTPS and SSH access to APICs and fabric switches are implicitly allowed when no contracts are configured; however, services such as SNMP and NTP will not work without configuring Out-of-Band contracts.
The first step in creating an Out-of-Band contract is to create the contract filter. The contract filter defines the specific protocols and ports we want to explicitly allow, e.g. UDP ports 161 and 162 for SNMP.
To configure an Out-of-Band contract filter navigate in your APIC web GUI to the following path:
Tenants -> mgmt -> Contracts -> Filters
Right click Filters and select Create Filter
You will then be prompted with the screen to create the contract filter:
You will be prompted to enter the name of your filter. For our purposes we will name our contract filter My-Test-Contract-Filter.
You can have more than one filter entry per contract filter. For our purposes we will allow all traffic without needing to specify a protocol or port number. To do this give the filter a name such as allow_all and leave the EtherType as Unspecified.
Note: If you choose not to allow all traffic in the contract filter, make sure you add a contract filter entry for SSH (TCP port 22), HTTP (TCP port 80), and HTTPS (TCP port 443). If you do not allow these filter entries, all current SSH and HTTP(S) sessions will keep working, but any new SSH or HTTP(S) connections you try to establish from networks outside the defined Out-of-Band IP subnet will fail. This could lock you out of your APICs if you are not on the same subnet as the APICs and fabric switches.
Submit your contract filter configuration.
To configure an Out-of-Band contract navigate in your APIC web GUI to the following path:
Tenants -> mgmt -> Contracts -> Out-Of-Band Contracts
Right click Out-Of-Band Contracts and select Create Out-Of-Band Contract
You will then be prompted with the screen to create the contract structure:
You will need to provide a Name and Subject for your contract. For our purposes we will name our contract My-Test-Contract. We can leave the default Scope, QoS Class, and Description fields. However, we will need to add a contract Subject. Click the + icon to add the contract subject. The contract subject will contain our previously created contract filter, My-Test-Contract-Filter. For our purposes we will call our contract subject My-Test-Contract-Subject.
When selecting the + icon to add a contract filter, we will see our previously created contract filter My-Test-Contract-Filter:
Select the filter and submit all the changes.
The next step is to link our newly created contract, My-Test-Contract, to an Out-of-Band EPG. The Out-of-Band EPGs are listed under the following APIC web GUI path and are prepended with Out-of-Band EPG:
Tenants -> mgmt -> Node Management EPGs
By default, a default Out-of-Band EPG is already provided. If you remember, when assigning static IP addresses to our devices earlier we selected an Out-of-Band EPG; the provided default Out-of-Band EPG is what we linked the devices to, so for our purposes we will apply the contract to it. In the default Out-of-Band EPG we will need to provide the My-Test-Contract which we created earlier:
Once the contract is added as a provider, submit the changes. Because ACI contracts must be both provided and consumed, the next step is to consume the same My-Test-Contract. To consume the contract you will need to create an External Management Network Instance Profile. This profile allows us both to consume the contract we are providing and to define the outside subnets we want to allow in. To create an External Management Network Instance Profile, navigate in your APIC web GUI to the following path:
Tenants -> mgmt -> External Management Network Instance Profiles
Right click External Management Network Instance Profiles and click Create External Management Network Instance Profile
You will be prompted with a screen to create an External Management Network Instance Profile, which requires a Name, a Consumed Out-of-Band Contract, and the external Subnets you want to allow to access the Out-of-Band resources. For our purposes we will call our External Management Network Instance Profile My-EMNIP, consume the contract My-Test-Contract, and allow all external subnets by defining a 0.0.0.0/0 subnet.
Submit your configuration to apply the changes.
This concludes all the necessary configuration for setting up Out-of-Band management for your APICs and fabric switches.
IBM Rational Rose Technical Developer (ROSE TD) renaming/migration to IBM Rational Software Architect RealTime Edition (RSARTE)
Rational Rose was a set of visual modeling tools for development of object-oriented software. Object oriented modeling is the process of graphically depicting the software system. This is often done in the design phase of the software development lifecycle. The graphical models are designed using a UML diagram which is widely used in the industry. The models are useful for requirements analysis and definition, communicating clearly your design, and much more.
The latest version of Rose TD was published in 2006, with a small update (Rose RT for Linux) in 2007. This version runs only on Windows 2000 and XP and is 32-bit only; the Linux support actually relies on a Windows-to-Linux porting technology (which again prevents using later or other Linux platforms). It supports only UML 1.x, and its C++ support is also dated.
Announcing IBM Rational Software Architect RealTime Edition
We are proud to announce that IBM Rational Rose Technical Developer will now be renamed/migrated to IBM Rational Software Architect RealTime Edition (RSARTE).
RSARTE is a modern, fully featured development tool for creating complex, event-driven, real-time applications in C++. It provides software engineers with feature-rich tools for designing, analyzing, building, debugging, and deploying real-time applications.
RSARTE Features and Capabilities Overview
- Designing at a higher abstraction level than code
Provides UML real-time models, state charts, composite structure and other diagrams, along with a powerful code editor built on Eclipse CDT.
- Building executables your way
Applications can be built interactively or from batch builds, enabling easy setup of build configurations, with a highly customizable run-time environment.
- Application analysis features
Navigation and search with diagram highlighting, supports refactoring of models and code and synchronizes code changes back to the model.
- Debugging at a high-level verifies design + defect failures
Provides interactive model debugging with trace management and visualization, run-time structure monitoring and behavior animation, and combined model and code debugging.
With support for Git and other SCM systems, RSARTE offers an interactive and intuitive capability to compare and merge both models and code, and provides a powerful command-line interface. Furthermore, it allows models to be accessed via web browsers and linked to requirements.
Migration from Rose RT to RSARTE
But how can you migrate all your existing Rose RT models to RSARTE?
IBM, together with our partner HCL, offers services to help you with migration planning and execution, as each migration situation is unique.
Types of Service Offerings
ROSE RT to RSARTE Migration
This service is based on a long history of successful migrations and can include related work, such as platform and compiler updates.
It starts with a project assessment, and the aim is to preserve the Rose RT model behavior.
RSARTE Training
To get the best start with RSARTE, this offering covers all areas of tool usage from scratch. It combines theory with practical, hands-on exercises and usually takes 3 to 5 days, which can be tailored to your needs.
To learn more about the services, please feel free to contact us, reach out to email@example.com or alternatively you can reach out to
Further reading resources to help you can be found here:
|
OPCFW_CODE
|
Updating UI in Cocoa from another .m file
I have a GUI app with a main thread, and I use NSOperation to run two other threads once the user clicks the Start button. One thread calculates a certain value and keeps it updated. I want the second thread to pick this value up and update the UI.
How do I get an IBOutlet text field on the UI updated from this second thread?
eg:
main.m --- handles the UI and has code to start the 2 threads when the user hits the Start Button.
thread1.m -- calculates a particular value and keeps doing it until the user hits stop.
thread2.m - Need to use this thread to update the UI in main.m with the the value that thread1.m calculates.
I am unable to accomplish the thread2.m task and update the UI. My issue is: how do I define an IBOutlet and update it with a value from thread2/thread1 so that main.m has access to this value and updates the UI? I have access to the actual variable in main.m and can print it out using NSLog. I am just getting stuck on how to update the UI with this value, as I need to have the IBOutlet in main.m to tie it to the UILabel in the app. Any ideas, guys? Thanks.
Could you add pointers to your thread1.m and thread2.m files? Then set them with either a constructor method or some accessor methods?
If I understand the situation you described in your example, and assuming what you are calculating is an int (you can modify as you need):
Add an accessor to thread1.m
-(int)showCurrentCalcValue
{
//Assume that you get calculatedValue from wherever else in your thread.
return calculatedValue;
}
Then add to thread2.m
NSTextField *guiTextField;
Thread1 *thread1;
-(void)setThread:(Thread1 *)aThread
{
self.thread1 = aThread;
}
-(void)setGuiTextField:(NSTextField *)aTextField
{
self.guiTextField = aTextField;
}
-(void)updateGUI
{
//showCurrentCalcValue returns an int, so use setIntValue: rather than setStringValue:
[guiTextField setIntValue:[thread1 showCurrentCalcValue]];
}
Presuming your main.m is something like the following:
IBOutlet NSTextField *outputDisplay;
-(void)setUpThreads
{
Thread1 *thread1 = [[Thread1 alloc] init];
Thread2 *thread2 = [[Thread2 alloc] init];
[thread2 setGuiTextField:outputDisplay];
[thread2 setThread:thread1];
//Whatever else you need to do
}
Then just take care of setting everything and calling the methods in your threads.
Thanks Ryan. How would I access guiTextField in my main.m though? I can get the value (exactly like you mentioned) in thread1.m. But the problem I am running into is how do I get this value stored in an NSTextField that can be accessed by main.m? If I declare it in thread2.m, main.m won't see it.
I presume you have an IBOutlet that is a NSTextField * in your main.m? If so, you can just pass it to the thread. I updated the code above to try and reflect that.
Source code files don't matter. You could have all of this stuff in one file (not that that would be a good idea) and the problem would be unchanged. What matters are the classes.
Classes are not simply bags of code; you design them, you name them, and you define each class's area of responsibility. A class and/or instances of it do certain things; you define what those things are and aren't.
When writing NSOperation subclasses, don't worry about the threads. There's no guarantee they even will run on separate threads. Each operation is simply a unit of work; you write an operation to do one thing, whatever that may be.
eg: main.m --- handles the UI and has code to start the 2 threads —
operations
— when the user hits the Start Button.
thread1.m -- calculates a particular value and keeps doing it until the user hits stop.
That's not one thing; that's an indefinite sequence of things.
thread2.m - Need to use this thread to update the UI in main.m with the value that thread1.m calculates.
You should not touch the UI from (what may be) a secondary thread. See the Threading Programming Guide, especially the Thread Safety Summary.
I don't see why this should even be threaded at all. You can do all of this much more easily with an NSTimer running on the main thread.
If it would be inappropriate to “calculate… a particular value” on the main thread, you could make that an operation. Your response to the timer message will create an operation and add it to your computation queue. When the user hits stop, that action will go through on the main thread; invalidate the timer and wait for the queue to finish all of its remaining operations.
With either solution, “thread2.m” goes away entirely. Your update(s) to the UI will (and must) happen entirely on the main thread. With the latter solution, you don't even have to wait until you're done; you can update the UI with current progress information every time you receive the timer message.
|
STACK_EXCHANGE
|
GPU rendering vs CPU rendering: What’s the difference?
GPU rendering and CPU rendering are two types of rendering, named after the hardware on which you render your creative projects.
CPU rendering is the traditional way to render images. However, since the advent of GPUs, GPU rendering has gained a lot of popularity over time.
In this article, VFXRendering will look at the differences between GPU rendering vs CPU rendering, and see which is better for your needs.
What is rendering?
Rendering is the process of utilizing a computer program to generate a final image from a 2D or 3D model. During this process, a raw model is given all of the small details, including textures, lighting, and camera angles, until we get the final image.
Rendering is used widely in various industries. For example, it is mostly used in architectural visualizations, video games, animation, simulation, visual effects, motion graphics, and many more.
Typically, you render your projects using either the GPU or the CPU in the system. However, there are some render engines allowing hybrid (GPU+CPU) rendering such as Chaos V-Ray or Blender Cycles. In that case, both the GPU and the CPU in the computer will work together to create the final output.
GPU rendering vs CPU rendering
What is CPU rendering?
CPU stands for “Central Processing Unit”, which is also known as a processor or microprocessor. It is one of the most important components of any modern computer. A CPU is responsible for processing logical and mathematical operations as well as executing the instructions that are sent to it. It can carry out millions of instructions per second, but it processes them largely sequentially, one at a time per core.
That’s why the CPU is often referred to as the brain of the computer.
So, CPU rendering is a technique that renders images using the power of the CPU.
A CPU usually has multiple cores; a modern CPU can have up to 64 cores. The more cores it has, the better the rendering performance. Moreover, these cores have a high clock speed, allowing them to perform tasks at a very fast rate. Besides, the CPU has access to system RAM (random-access memory), enabling artists to render scenes with massive amounts of data with ease.
Image Source: HACOM
What is GPU rendering?
Meanwhile, GPU stands for “Graphics Processing Unit”, which is also known as graphics card or video card. It is also an important piece of hardware in modern computer systems. A GPU is responsible for handling mathematical calculations for 3D graphics, video editing, gaming applications, machine learning/deep learning, and many other tasks. It can run thousands of operations at once thanks to its parallel processing.
That’s why the GPU is considered the soul of the computer.
So, GPU rendering is a technique that renders images using the power of the GPU.
A GPU generally has thousands of small cores that run at a relatively low frequency. The GPU relies on parallel processing, where its thousands of cores handle separate parts of the same task. Therefore, it delivers strong rendering performance with faster render times. Also, a GPU has its own dedicated memory to store the image data it processes: Video RAM, or VRAM.
GPU rendering vs CPU rendering: What’s the difference?
GPU and CPU are both important in their own ways, and they are both needed to make the rendering process as fast as possible. However, there are differences between GPU rendering vs CPU rendering. Each type of rendering should be better for different use cases.
In general, GPU rendering is much faster than CPU rendering. A GPU contains thousands of cores (for instance, the Nvidia RTX 4090 has 16,384 CUDA cores), whereas a CPU has at most around 64 cores. Although the GPU’s clock speed is somewhat lower, the enormous number of GPU cores compensates for it and delivers strong rendering performance. Another factor is that the GPU is designed to carry out operations in parallel. As a result, it can render numerous elements of a scene at the same time. This gives the GPU an advantage over the CPU in terms of rendering speed.
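As a rough sanity check on these numbers, a back-of-envelope comparison of raw parallel throughput (core count × clock) can be sketched in a few lines of Python. The clock speeds below are illustrative assumptions, not benchmarks, and real-world rendering speedups are far smaller because of memory bandwidth, divergence and algorithmic limits:

```python
# Illustrative, assumed numbers -- not measured benchmarks.
gpu_cores, gpu_clock_ghz = 16384, 2.2   # e.g. an RTX 4090-class GPU
cpu_cores, cpu_clock_ghz = 64, 3.5      # e.g. a 64-core workstation CPU

gpu_throughput = gpu_cores * gpu_clock_ghz  # "core-GHz", a crude proxy
cpu_throughput = cpu_cores * cpu_clock_ghz

print(f"raw parallel advantage: {gpu_throughput / cpu_throughput:.0f}x")
```

Even this crude proxy shows a two-orders-of-magnitude gap in raw parallelism, which is why embarrassingly parallel workloads like ray tracing map so well to GPUs.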
Image Source: Nvidia
Quality and Accuracy
High quality takes time. While it may take longer to render a final image (hours or even days), CPU rendering often produces higher-quality, clearer, noise-free final images than GPU rendering.
Although the CPU has fewer cores than the GPU, each core runs faster due to its higher clock speed. The CPU is also significantly more adaptable and built to execute complex instruction sets, allowing it to run practically any algorithm with minimal effort, and thus produce a better-quality result. The CPU is the industry standard for creating high-quality frames and visuals in films because its rendering has no hard limits.
In terms of complex tasks, the CPU outperforms its GPU counterpart. Because the CPU processes in serial, it can carry out a variety of flexible jobs while following complex instructions to ensure high quality. More importantly, the CPU has far looser RAM constraints and is ideal for photorealistic 3D rendering as well as demanding tasks that do not consist of uniform, repetitive work.
On the other hand, the GPU focuses its computational power on performing regular, uniform operations in parallel. Therefore, it is ideal for less complex, highly consistent workflows, and its memory limits can cause problems when rendering complex scenes with numerous elements.
Flexibility & Scalability
It is possible to upgrade your CPU, but it will also involve considerable changes for other components. The GPU is more flexible in such cases.
The GPU typically scales almost linearly with more cores, as it performs the renders in parallel. This also means you can easily add more GPUs to your PC to boost performance. However, one factor that determines how many GPUs you can use is your render engine: while some applications only use one or two GPUs, most renderers allow multiple-GPU rendering.
GPUs in the mid- to high-end range can cost between $300 and $2000, while CPUs can cost between $150 and $1500.
However, the best GPUs are usually less expensive than top-of-the-line CPUs. AMD Ryzen Threadripper PRO 5995WX, for example, costs around $5000 to $6000, whilst Nvidia RTX 4090 costs around $1500.
In addition, the GPU gives you an advantage when it comes to upscaling. Simply add additional GPUs to your current system and you are good to go. Aside from the cost of the CPU, you will most likely need to invest in additional compatible hardware when upgrading the CPU.
GPU rendering vs CPU rendering: Render engines
Most render engines render images using solely GPU rendering or solely CPU rendering. For instance, Redshift and Octane were among the world’s first GPU-based render engines, while Corona only works on the CPU. However, some render engines can work with both GPU and CPU for hybrid rendering, such as Chaos V-Ray and Blender Cycles.
Further, many software companies tend to develop their render engines to support both GPU rendering and CPU rendering. We have seen many CPU-based render engines expand to render on the GPU, like Arnold, V-Ray, or RenderMan. Conversely, some GPU-based render engines such as Redshift have also extended to support CPU and hybrid rendering.
Some examples of GPU rendering
Some examples of CPU rendering
To sum up:
- You should go with GPU rendering if you want to render fast with good final output (though it might not be the most accurate), and your project is not so complex that it needs a huge amount of memory. Moreover, the GPU is the best choice if you want to easily scale rendering performance: you just need to add a few more GPUs to your pre-existing PC.
- You should go with CPU rendering if quality is your priority. You care more about the quality and accuracy of your images than the render speed. Also, the CPU can handle insanely complex scenes that are hungry for memory. Besides, the feature set in CPU render engines is usually broader as they have been around longer.
|
OPCFW_CODE
|
Graph Algorithms and Distributed memory Parallel Systems
Processing graph algorithms on distributed memory parallel systems is tricky and requires many optimizations to reach an optimal processing time. Graph algorithms usually require information to propagate between the graph vertices in a series of steps or iterations, as in the PageRank algorithm, and thus the more vertices and edges we have, the more messages propagate through the network. In a vertex-centric bulk synchronous parallel (BSP) graph processor, such as Pregel, the gap between the cost of sending messages to other graph nodes and the actual processing time for a graph node can be so large that the main bottleneck for such a processing scheme becomes the communication between different machines. This limitation was discussed by Andrew Lumsdaine et al. in “Challenges in Parallel Graph Processing“. Shared memory systems may be more suitable, but that is out of scope for our current discussion. As a result, any optimization that reduces the network messages between the distributed machines reduces the computation cost significantly.
Static Graph Partitioning
Using the most common graph partitioning algorithms to generate 3-way subgraphs on the sample graph in Figure 1, range-based and hash-based partitioning generate 10 and 8 edge cuts respectively, as shown in Figure 2 and Figure 3. In other words, the network cost per graph iteration, assuming a PageRank algorithm, is 10 messages for range-based partitioning and 8 messages for hash-based partitioning. One way to reduce the physical network overhead is to reduce the “external” graph edges between two subgraphs that reside in different machines. This is done by finding the minimum cuts for the input graph, which results in localizing highly connected graph vertices in a single subgraph when possible. Figure 4 shows the result of applying the minimum cuts algorithm on the graph in Figure 1, which results in only 5 edge cuts. It is worth mentioning that finding the minimum cuts of a graph is a costly operation and sometimes cannot be done for large graphs with limited hardware. So smart partitioning should reduce the cost of the end-to-end computation significantly enough to cover the partitioning costs; otherwise it wouldn’t make sense to use it. We used METIS to generate the 3-way partitioning on the sample graph, but you can use any other tools or algorithms for finding minimal cuts in a graph. Please refer to “Algorithms and Software for Partitioning Graphs” for more details. In our technical report “Mizan: Optimizing Graph Mining in Large Parallel Systems” we discuss in detail the costs of partitioning with respect to the graph structure.
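To make the edge-cut metric concrete, here is a small Python sketch (on a made-up six-vertex graph, not the graph in the figures) that counts how many edges cross partition boundaries — which is exactly the per-iteration message cost discussed above:

```python
def edge_cuts(edges, partition):
    """Count edges whose endpoints are assigned to different machines."""
    return sum(1 for u, v in edges if partition[u] != partition[v])

# Two triangle-like clusters joined by the single edge (2, 3).
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5)]

# Hash-style partitioning: vertex id modulo the number of machines.
hash_part = {v: v % 3 for v in range(6)}
# A min-cut-style partitioning that keeps each cluster on one machine.
smart_part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}

print(edge_cuts(edges, hash_part))   # 6 cross-machine edges
print(edge_cuts(edges, smart_part))  # 1 cross-machine edge
```

On this toy graph, hash partitioning scatters each cluster across machines and cuts every edge, while a locality-aware partitioning leaves only the single bridging edge as network traffic.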
Static and Dynamic Graph Algorithms
Static graph algorithms are usually algorithms that can be represented mathematically by a matrix-vector multiplication, such as PageRank, Random Walks and Diameter Estimation. Such algorithms always have fixed behavior across the iterations of the algorithm, without any surprises to the graph processor. For example, at each iteration of the PageRank algorithm, each vertex receives ranks from its incoming edges and sends the newly calculated rank along all of its outgoing edges. The complexity of static algorithms is directly related to message propagation through the physical network, which means that minimizing the graph cuts using static partitioning improves the performance of the algorithm. In other words, static algorithms benefit from smart partitioning of the graph and localizing its edges. However, this is not the case for dynamic graph algorithms.
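As a sketch of this fixed per-iteration behavior, a minimal vertex-centric PageRank superstep loop (plain single-machine Python, with a damping factor of 0.85 assumed) might look like:

```python
def pagerank_bsp(adj, iterations=20, d=0.85):
    """Vertex-centric PageRank: in every superstep each vertex sends
    rank/out_degree along all of its outgoing edges, then combines the
    incoming messages -- the same work pattern in every iteration."""
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    for _ in range(iterations):
        msgs = {v: 0.0 for v in adj}          # message-combining phase
        for v, outs in adj.items():
            for w in outs:
                msgs[w] += rank[v] / len(outs)
        rank = {v: (1 - d) / n + d * msgs[v] for v in adj}
    return rank

# A 3-cycle: by symmetry every vertex ends up with rank 1/3.
ranks = pagerank_bsp({0: [1], 1: [2], 2: [0]})
```

Note that this simple sketch assumes every vertex has at least one outgoing edge (no dangling vertices); the inner message loop is exactly the traffic that crosses the network in a distributed BSP system.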
Dynamic graph algorithms have variable behavior across the algorithm’s iterations, where the number of incoming messages, the number of outgoing messages, or even the message size differs depending on the state of the vertex. Example algorithms are: Distributed Minimal Spanning Tree, Advertisement Propagation Simulation on graphs, and Finding the Maximal Vertex Value. Such algorithms lead to the problem that the workload of each vertex does not depend directly on the graph edges, or any other static feature of the graph, and is only determined during the algorithm’s runtime. Moreover, the dynamic behavior of a graph algorithm leads to an unbalanced graph processing system, which degrades end-to-end performance; for example, some workers can be overloaded with incoming/outgoing messages while others are idle. In this case, starting the computation with a static smart partitioning does not necessarily lead to a balanced or optimized computation; the system should be able to adapt to the behavioral changes of the graph algorithm to avoid system imbalance and improve its response time.
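To illustrate how message volume can vary per iteration, here is a toy Python sketch of the maximal-vertex-value algorithm on a small hypothetical chain graph: a vertex only sends messages in the supersteps where its own value changed, so the per-iteration message counts shrink as the computation converges:

```python
def max_value_propagation(adj, values):
    """Each vertex adopts the largest value seen from its neighbors and
    only sends messages in supersteps where its own value changed, so
    the message volume varies across iterations (dynamic behavior)."""
    active = set(adj)
    messages_per_iteration = []
    while active:
        msgs = {}
        for v in active:                      # only changed vertices send
            for w in adj[v]:
                msgs.setdefault(w, []).append(values[v])
        messages_per_iteration.append(sum(len(m) for m in msgs.values()))
        active = set()
        for w, incoming in msgs.items():
            best = max(incoming)
            if best > values[w]:              # value changed: reactivate
                values[w] = best
                active.add(w)
    return values, messages_per_iteration

# A 4-vertex chain 0-1-2-3 with the maximum value (9) at one end.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
final, msg_counts = max_value_propagation(adj, {0: 5, 1: 1, 2: 1, 3: 9})
print(final)       # every vertex converges to 9
print(msg_counts)  # shrinking per-iteration message volume
```

Unlike PageRank, the work per superstep here depends entirely on runtime state, which is exactly why a static partitioning cannot guarantee balance for this class of algorithms.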
|
OPCFW_CODE
|
On 07/30/2011 05:51 PM, Chris Fordham wrote:
> Recently I published 20 AMIs of Debian 6.0.1 for public use under
> the RightScale OSS project.
I'm happy to see continued progress in this area for my Debian friends.
>> 3. AMIs published for EBS boot (8GB root) and instance-store (10GB
> Done, instance-store was retained as 8GB for consistency
Though the EBS volume is 8GB, the root file system uses only 5GB. It
looks like the EBS volume is partitioned and 3GB of it is devoted to swap.
Having swap on the EBS volume means:
- users are paying for swap storage. Only $0.30/month, but that could
be noticeable on a t1.micro
- users are paying for swap IO transactions
- swap is saved in EBS snapshots, increasing cost
- when users take snapshots of the instance to create a public AMI
(already not recommended) there is a risk that confidential information
could leak through swap into the public AMI like passwords or AWS
- with a partitioned EBS boot volume, it is difficult for users to run
instances of the AMI with a larger root file system
I also noticed that the instance did not have ephemeral storage attached
or mounted. It can be convenient to have easy access to a large local
disk for temporary data storage, even if it is not persistent. This is
also a useful place to drop secret files that you don't want stored with
It looks like people need a RightScale account to use this or even to
read the code.
>> 6. Some startup hooks. At a minimum, the AMIs should support user-
>> data scripts ("#!" runs on first boot of instance)
> The Alestic, ec2-run-user-data service was included and tested.
The cloud-init package is taking off across multiple distributions. I'd
recommend using it so users can take advantage of the growing pool of
software and documentation for running things on EC2. It has more hooks
than just user-data scripts and can be both more powerful and simpler
depending on your needs.
I recognize that RightScale has its own instance setup hooks, but those
should integrate seamlessly, and RightScale has so much more
infrastructure support to offer above and beyond startup hooks that it
shouldn't be a competitive thing.
>> 7. Creates random ssh host key on first boot of each instance for
> This is performed by the RightScale RightLink agent upon start.
I assume that requires the AMI to be run using a RightScale account.
I just ran two instances of ami-1212ef7b and both have the same ssh host
key. This means that ssh to any instance of these AMIs is unsafe and
vulnerable to man-in-the-middle attacks.
It is also important to output the new ssh host key fingerprint to the
console following the output format standard started by Amazon, so that
people can check the fingerprint on first ssh. Use
"ec2-get-console-output" to see what it looks like on any Amazon or
>> 8. Uses standard EC2 ssh key installation from instance meta-data
> An LSB compliant getsshkey service was included.
I'm curious: Why is there a /root/.ssh/KEYPAIRNAME.pem file in addition
to having the public key in /root/.ssh/authorized_keys ? Is this file
used by the system?
>> 9. No default public or private passwords pre-set for any service.
Under what circumstances do instances of the AMIs dial home to RightScale?
> I can export the build db somewhere else, but there probably isn't a point.
The point would be for people to be able to find the correct AMI id with
automated software. For example, Alestic.com uses the Canonical API to
query the latest official Ubuntu AMI ids to list in the table at the top
of the home page. Having to parse an HTML page is error prone and
likely to break as the UI changes.
At this point, I am so little involved with Debian on EC2, it probably
doesn't make sense for me to be any sort of gatekeeper for what the best
Debian AMIs are. There is so little traffic on this group that I don't
even have an idea of what people are using or if existing public Debian
AMIs are being well vetted.
I recognize that http://Alestic.com is considered an authority, if only
by Google, for search phrases including "debian" and "ec2" / "ami", so I
feel an obligation to point people in a good direction when they land there.
I was already planning to stop listing the Debian AMI ids that I built
years ago as they are old Debian versions and I no longer release
updates. I think the debian.org page you found would be a reasonably
official place to send folks:
Disclosure: RightScale is a long time sponsor and supporter of
Alestic.com, my personal tech blog about AWS/EC2. I am a fan of and
support RightScale, but think that a good community AMI should still be
high quality and safe when run outside of RightScale.
|
OPCFW_CODE
|
added history diff
Description
Related Issue
Motivation and Context
How Has This Been Tested?
Screenshots (if appropriate):
Types of changes
[ ] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
Checklist:
[ ] I have run the pre-commit run command to format and lint.
[ ] My change requires a change to the documentation.
[ ] I have updated the documentation accordingly.
[ ] I have read the CONTRIBUTING document.
[ ] I have added tests to cover my changes.
[ ] I have added my name and/or github handle to AUTHORS.rst
[ ] I have added my change to CHANGES.rst
[ ] All new and existing tests passed.
I tested this PR and it seems to me, that it works correctly and is very helpful. Here is a preview of what it does:
I checked, that it does show the diff values for my models and also that it doesn't make unnecessary DB queries (the number of queries is constant and not dependent on number of history records).
When I look at the code, I think there is an unnecessary comment line added:
<!-- {{ action.history_diff }}-->
which is probably some relic from development.
And obviously the failing test would need to be corrected.
@raunaq-sailo Could you please remove the comment and fix the failing tests?
@raunaq-sailo Just as a heads-up, I'm considering implementing the changes I suggested myself, if no activity has happened within a couple weeks :)
Oh, I thought history_list_display was already a thing. I don't want to have perfect get in the way of good though, so let's make this improvement.
@tim-schilling I realized some additional changes needed to be made to properly display foreign keys and M2M objects, and so most of the recent commits were made to facilitate that - incl. making some optimizations and usage improvements along the way. It's absolutely possible, however, to display them without some of those changes, so let me know if I should split it into multiple PRs 🙂
I also realized that making history_list_display contain the current default column layout - like we discussed - would require several extra changes, which I think is better to include in a separate PR that I can open after this one. Is that okay with you?
@tim-schilling I mainly added some (presumably final) polish commits, including fd72e7a5f9943a3d93be4066a97ed6d946b0c4d8 and 38ec0c2049bc2388408d0701a69cefed1bbad45d, which improve how really long strings are displayed, and add tests for how safe strings are handled. Again, please let me know if I should split it into multiple PRs 🙂
(The failing tests are unrelated to this PR; I'm looking into them.)
@ddabble I think it looks good. There was a lot in those changes and admittedly I didn't do a great job of reviewing. It looks like the code is well written, there are tests and they are passing. Not having code coverage hurts us here because that would confirm that the code is actually running in the tests.
Let's make this the last of the large, multi-commit PRs though. They're really tough to do a good job when I'm not super-duper familiar with the codebase.
@ddabble when you think it's ready, I'll integrate it with our application to see how things perform.
Not having code coverage hurts us here because that would confirm that the code is actually running in the tests.
True.. From the workflow logs, it looks like we're being rate-limited by Codecov, since the repo doesn't have any upload token 😕 (see #1305)
Let's make this the last of the large, multi-commit PRs though. They're really tough to do a good job when I'm not super-duper familiar with the codebase.
Alright, that's fair 😅 Thanks a bunch for reviewing regardless! I'll clean up the commits for merging, as mentioned above (followed by re-requesting a review) 🙂
when you think it's ready, I'll integrate it with our application to see how things perform.
I'm assuming you mean as a final system test before merging..?
I'm assuming you mean as a final system test before merging..?
Yeah, that's what I was thinking. Let me do that now.
Also, I didn't run into any issues. All good.
Finished cleaning up the commits, so the PR is ready for merging :)
(A few minor changes done while rebasing)
Not to spam, but I think I love you 😍 this feature is awesome
|
GITHUB_ARCHIVE
|
I like modal web dialogs, especially when they dim the background. Here is an example:
The background is dimmed and not clickable, and the dialog box is in the foreground. The work area is visible at first sight, but you must focus on the current content.
I wanted something similar in WPF. Here is my example application:
How does it work?
There are two parts of the window. There is a DataGrid in the left pane and a ListBox in the right pane. The same data is loaded into both controls. When you select an item in the grid, the popup will appear over the grid, displaying the selected name. Clicking on the popup will close that. You can’t click through the popup. The right pane demonstrates that you can do anything with other controls. The popup resizes itself (drag the grid splitter to test it).
The key component to the popup control is the adorner class of WPF. The definition of adorners from MSDN:
An Adorner is a custom FrameworkElement that is bound to a UIElement. Adorners are rendered in an AdornerLayer, which is a rendering surface that is always on top of the adorned element or a collection of adorned elements. Rendering of an adorner is independent from rendering of the UIElement that the adorner is bound to. An adorner is typically positioned relative to the element to which it is bound, using the standard 2-D coordinate origin located at the upper-left of the adorned element.
There are three types of adorners:
Adorner: An abstract base class from which all concrete adorner implementations inherit.
AdornerLayer: A class representing a rendering layer for the adorner(s) of one or more adorned elements.
AdornerDecorator: A class that enables an adorner layer to be associated with a collection of elements.
The code of the popup adorner is the following:
You can set a content for the popup adorner, and that is all I want. The VisualCollection is responsible for rendering the content on screen, and the ContentPresenter is responsible for storing the popup’s content.
There is a BackgroundShade user control to represent the dimmed background. The code is very short and simple:
I set the background to black at line 3 and I set the opacity of the control to 70% at line 4.
The Main window layout:
The Main window code:
The implementation details are in the comments. One interesting thing: I couldn’t simply bind the AdornedElement’s RenderSize to the shade’s width property, so it seemed simpler to cast the AdornedElement to FrameworkElement and attach an event handler to its SizeChanged event.
You can download the source and the binaries from Google code, or you can simply browse source at the same place. Links are below.
|
OPCFW_CODE
|
module Motor
class Motor
attr_reader :stall_current
attr_reader :stall_torque
attr_reader :free_speed
attr_reader :max_power
attr_reader :quantity
def initialize
@quantity = 1
end
def current(speed: nil, torque: nil)
unless speed.nil?
return @stall_current * (@free_speed - speed) / @free_speed
end
unless torque.nil?
return @stall_current * torque / @stall_torque
end
end
def speed(current: nil, torque: nil)
unless current.nil?
return @free_speed * (@stall_current - current) / @stall_current
end
unless torque.nil?
return @free_speed * (@stall_torque - torque) / @stall_torque
end
end
def torque(speed: nil, current: nil)
unless speed.nil?
return @stall_torque * (@free_speed - speed) / @free_speed
end
unless current.nil?
return @stall_torque * current / @stall_current
end
end
##
# Sets the # of motors being used.
#
# Note: because this may be called in initialize, we rely on
# the @quantity value to default to one (and not be nil) - this
# is set in the super initialize in Motor.
#
def set_quantity(qty) #should be quantity=(), but then calling from child doesn't work?
ratio = qty.to_f / @quantity # float division, so rescaling never truncates
@stall_current *= ratio
@stall_torque *= ratio
@max_power *= ratio
@quantity = qty
end
end
class CIM < Motor
def initialize(qty = 1)
super()
@stall_current = 131
@stall_torque = 2.41
@free_speed = 5330
@max_power = 337
set_quantity qty
end
end
class MiniCIM < Motor
def initialize(qty = 1)
super()
@stall_current = 89
@stall_torque = 1.41
@free_speed = 5840
@max_power = 215
set_quantity qty
end
end
class Pro775 < Motor
def initialize(qty = 1)
super()
@stall_current = 134
@stall_torque = 0.71
@free_speed = 18730
@max_power = 347
set_quantity qty
end
end
end
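A condensed, self-contained sketch (redeclaring just the single-motor CIM figures above) shows how the linear torque-speed line behaves, e.g. that available torque falls to half the stall torque at half the free speed:

```ruby
# Minimal re-statement of the linear motor model used by the classes above.
class SketchCIM
  STALL_TORQUE = 2.41   # N*m, single-motor stall torque
  FREE_SPEED   = 5330.0 # rpm, single-motor free speed

  # Torque available at a given speed on the linear torque-speed line.
  def self.torque_at(speed)
    STALL_TORQUE * (FREE_SPEED - speed) / FREE_SPEED
  end
end

puts SketchCIM.torque_at(2665) # half the free speed => half the stall torque
```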
|
STACK_EDU
|
Bottom-Tested Canonical Loops in LLVM
Survey on impact of using bottom-tested loops (i.e. LoopRotation loops) as canonical loops in LLVM
What is the proper canonical loop form for loop Passes in LLVM?
The question above is one of the earliest discussion topics in the newly established LLVM Loop Optimization Working Group. The rationale behind it is that most loop optimization or analysis Passes only accept a certain shape of loops. For example, a Pass may require the loop to have a pre-header, a header, two eyes and four limbs. In the ideal case, every loop Pass would agree on one type, or one shape, of loops, which is called the canonical loop form.
Unfortunately, that’s not the case now. From the survey I made here, we can find that although most of the loop Passes agree on LoopSimplify, which is kind of THE canonical loop written into some of the LLVM docs, there are still some Passes that only accept the LoopRotation form, sometimes called bottom-tested loops, since LoopRotation basically turns conventional for-loop-style loops into do-while-style loops (along with some sanity checks). Furthermore, the LoopRotation pass itself also uses LoopSimplify loops as input. So we’re of course wondering:
Can we use bottom-tested loops as the only canonical loop form?
Given that it might satisfy both LoopSimplify-using and LoopRotation-using users. In this survey, I will try to prove (or disprove) this point by comparing their high-level control flow structures and by providing some quantified experiment results.
Control Flow Structure: LoopSimplify v.s. LoopRotation
Control flow structure is the first thing a LoopSimplify-using Pass will check, since that’s usually the number-one reason why a Pass relies on LoopSimplify. So our goal here is to examine whether LoopRotation loops will pass these sanity checks designed for LoopSimplify loops.
As mentioned in my previous survey enclosed in the link above, a loop conforms to LoopSimplify form if:
- It has a pre-header.
- There is only one back-edge.
- Predecessors of exit blocks are guaranteed to be in the loop.
We’re particularly interested in the first property, since it is demanded by many Passes and most loops won’t have a pre-header in the first place.
On the other hand, a loop will have the following control flow structure after going through LoopRotation:
As seen in the diagram, the most notable thing LoopRotation does is merge the loop header, body, and range test into a single basic block. This might create one large, bulky basic block, but it guarantees that instructions within this block have the same execution count, which can make life easier for tricky transformations like vectorization.
Another important characteristic is the loop guard block. It makes sure the induction variable's value is not out of range when first entering the loop, and in fact it becomes the pre-header for our LoopRotation loop. Furthermore, the guard block is always generated, since we need to follow the default for-loop-style range-testing mechanism. So now we're pretty confident that LoopRotation will not break the pre-header requirement.
Regarding the other two requirements: there is no specific step in LoopRotation that breaks the single-back-edge requirement, and LoopRotation also tries to maintain the exit block predecessor property by splitting critical edges at the end of the transformation (thanks Bardia for pointing this out).
To give some numbers and provide preliminary testing of the acceptance rate of using LoopRotation as the canonical loop form, I ran a simple experiment on the existing LLVM regression tests.
The methodology is simple: for each Pass that requires LoopSimplify loops as input, I inserted a LoopRotation Pass in front of it, ran the tests, and counted how many of them still pass. For example:
RUN: opt -loop-fusion -S -o - < %s | FileCheck %s
RUN: opt -loop-rotate -loop-fusion -S -o - < %s | FileCheck %s
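This RUN-line rewrite is easy to automate across the test suite. Here is a hypothetical sketch (not the script actually used for the survey) that inserts -loop-rotate right after the opt invocation in a lit RUN line:

```python
import re

def insert_loop_rotate(run_line):
    """Insert -loop-rotate right after the first `opt` invocation in a
    lit RUN line, so LoopRotation runs before the Pass under test."""
    return re.sub(r"\bopt\b", "opt -loop-rotate", run_line, count=1)

line = "RUN: opt -loop-fusion -S -o - < %s | FileCheck %s"
print(insert_loop_rotate(line))
# RUN: opt -loop-rotate -loop-fusion -S -o - < %s | FileCheck %s
```

Applying this to every test file under the chosen Passes' test directories reproduces the experimental setup described above.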
Note that although many of these loop Passes might also be used in tests for other components, I only extracted tests placed specifically under these loop Passes' testing folders.
This approach is definitely not the best way, since the current LLVM regression tests are built on string matching and can barely tell whether a generated loop is "effectively equivalent" to the golden output. But the idea is to give us a quick impression of how LoopRotation interacts with these LoopSimplify users while leveraging the existing testing codebase, which acts as the de facto specification of a Pass's behavior. Here is the table of failure rates:
There are a total of 54 failures, and it's impossible for me to go through them all by myself. So let's pick only part of them and see what we've got.
The first thing that catches our eye is the high failure count for LoopStrengthReduction (LSR for short) and the high failure rate for LoopFusion. I quickly picked two of the LSR failures at random, 2012–03–15-nopreheader.ll and lsr-overflow.ll. These two failed because an extra LCSSA instruction generated by LoopRotation caused a string mismatch; by default, LSR won't actively maintain LCSSA form. Other than that, the outcome looks good, and the extra LCSSA is of course not a problem.
We're also interested in whether there are other Passes in our list that do not actively maintain LCSSA form, since that might help us cross out some of the failure reasons. Here is the list:
Although LCSSA might be the root cause for LoopFusion, its five-out-of-six failure rate just doesn't look right (I'd even highlighted it in red!). So let's look at it, picking inner_loops.ll and four_loops.ll, which represent two of the more complex loop forms. It turns out that these two test cases did not fail because of LCSSA. To see whether they really failed, I think the fastest way is to dump the CFG.
LoopFusion tries to fuse multiple similar loops together. Since inner_loops.ll contains nested loops, we expect our LoopRotation + LoopFusion output to show a nested structure. Here is the graph:
Great! It looks like we do get nested loops, where bb11 is the header of the outer loop and bb14 is that of the inner loop.
four_loops.ll tests whether LoopFusion can fuse four consecutive loops together, so we expect to see, well, only one loop. Here is the graph:
The loop definitely needs some workout to lose some weight, but I think we can call it a success.
In this survey I presented the possibility of using LoopRotation as the canonical loop form. The key question behind this is whether LoopRotation breaks any of the characteristics of LoopSimplify loops, which is more or less the canonical loop form currently in use.
Comparing the control flow structures of LoopSimplify and LoopRotation loops showed no outstanding drawbacks suggesting that LoopRotation loops would be rejected by users of LoopSimplify loops.
Finally, I ran a simple experiment to see what the failure rate would be if we ran LoopRotation before every LoopSimplify-using Pass in the existing LLVM regression test suite. The result showed around a 10% failure rate in total. I hand-picked some of these failures and found that the resulting loops are still equivalent to the expected ones. This is just a preliminary experiment, and not all of the failures have been investigated either, but I hope it can pinpoint places that might not agree with LoopRotation loops.
As a personal verdict, I think these experiments suggest that migrating the canonical loop form to LoopRotation loops would require little work on existing loop Passes, so I'm pretty positive about this migration. And of course, we definitely need feedback from the community and real-field test results.
Mail or create a merge-directive for submitting changes.
brz send [SUBMIT_BRANCH] [PUBLIC_BRANCH]
- --body=ARG
Body for the email.
- -F ARG, --format=ARG
Use the specified output format.
- -f ARG, --from=ARG
Branch to generate the submission from, rather than the one containing the working directory.
- -h, --help
Show help message.
- --mail-to=ARG
Mail the request to this address.
- -m ARG, --message=ARG
Message to use when committing this merge.
- --no-bundle
Do not include a bundle in the merge directive.
- --no-patch
Do not include a preview patch in the merge directive.
- -o ARG, --output=ARG
Write merge directive to this file or directory; use - for stdout.
- -q, --quiet
Only display errors and warnings.
- --remember
Remember submit and public branch.
- -r ARG, --revision=ARG
See “help revisionspec” for details.
- --strict
Refuse to send if there are uncommitted changes in the working tree, --no-strict disables the check.
- --usage
Show usage message and options.
- -v, --verbose
Display more information.
A merge directive provides many things needed for requesting merges:
A machine-readable description of the merge to perform
An optional patch that is a preview of the changes requested
An optional bundle of revision data, so that the changes can be applied directly from the merge directive, without retrieving data from a branch.
brz send creates a compact data set that, when applied using brz merge, has the same effect as merging from the source branch.
By default the merge directive is self-contained and can be applied to any branch containing submit_branch in its ancestry without needing access to the source branch.
If --no-bundle is specified, then Breezy doesn’t send the contents of the revisions, but only a structured request to merge from the public_location. In that case the public_branch is needed and it must be up-to-date and accessible to the recipient. The public_branch is always included if known, so that people can check it later.
The submit branch defaults to the parent of the source branch, but can be overridden. Both submit branch and public branch will be remembered in branch.conf the first time they are used for a particular branch. The source branch defaults to that containing the working directory, but can be changed using --from.
Both the submit branch and the public branch follow the usual behavior with respect to --remember: If there is no default location set, the first send will set it (use --no-remember to avoid setting it). After that, you can omit the location to use the default. To change the default, use --remember. The value will only be saved if the location can be accessed.
In order to calculate those changes, brz must analyse the submit branch. Therefore it is most efficient for the submit branch to be a local mirror. If a public location is known for the submit_branch, that location is used in the merge directive.
The default behaviour is to send the merge directive by mail, unless -o is given, in which case it is sent to a file.
Mail is sent using your preferred mail program. This should be transparent on Windows (it uses MAPI). On Unix, it requires the xdg-email utility. If the preferred client can’t be found (or used), your editor will be used.
To use a specific mail program, set the mail_client configuration option. Supported values for specific clients are “claws”, “evolution”, “kmail”, “mail.app” (MacOS X’s Mail.app), “mutt”, and “thunderbird”; generic options are “default”, “editor”, “emacsclient”, “mapi”, and “xdg-email”. Plugins may also add supported clients.
If mail is being sent, a to address is required. This can be supplied either on the command line, by setting the submit_to configuration option in the branch itself, or by setting the child_submit_to configuration option in the submit branch.
The merge directives created by brz send may be applied using brz merge or brz pull by specifying a file containing a merge directive as the location.
brz send makes extensive use of public locations to map local locations into URLs that can be used by other people. See brz help configuration to set them, and use brz info to display them.
If this is your case, follow these steps to install the Windows 7 driver. We do not guarantee its workability and compatibility. After a short period the problem appeared again. I reinstalled the drivers and after rebooting it seemed that the problem solved. There was a pre-requisite framework update needed before the system was able to identify the drivers not sure why though. It was not because of the driver. I re-installed the driver from that link you provided before posting this one but it did not work.
Reset of system at factory settings. Attention: Some software were taken from unsecure sources. I don't really see this as a solution. If a message box titled Program Compatibility Assistant is displayed after double-clicking the downloaded file, click This program installed correctly. This message is sent out by the protection mechanism of Microsoft Windows.
The downloaded driver is always in self-installer format. This laptop was just bought recently and I expect premium quality from this business line. This message is sent out by the protection mechanism of Microsoft Windows. Run the setup program from the directory that contains the unpacked softpaq files. Select Let me pick from a list of device drivers on my computer. Select Browse my computer for driver software.
Run the setup program from the directory that contains the unpacked SoftPaq files. Their tech support provided me with an update patch that allowed the card to function as it should. But with the Pro version it just takes 2 clicks and you get full support and 30-day money back guarantee. Unzip the downloaded driver file to a specific location. Just choose an easier way on your case. Is this a software or hardware failure? Also I had another weird error that my computer won't shutdown.
After I updated the .NET Framework, I was able to complete the driver update as usual and the device works fine. Click the Browse… button to navigate to the folder where you saved the unzipped downloaded driver file. Download the file by clicking the Download or Obtain Software button and saving the file to a folder on your hard drive (make a note of the folder where the downloaded file is saved). Always check downloaded files with antivirus software. This message is sent out by the protection mechanism of Microsoft Windows 7.
If you fail to install the Windows 7 driver in Windows 10 using the setup file. We all know that this is most of the time an impossible and time-consuming process. I Googled high and low and found various fixes but they did not work. Or click the Update All button if you go Pro to update all drivers automatically. I tried again with the drivers and after a second installation it seems that they installed properly; at least their date changed.
I hope my explanation is clear enough so someone with the same problem can fix their computers. Run the setup program from the directory that contains the unpacked files. We do not cover any losses spend by its installation. Can you please help : Cheers Hi, I don't think it will work. This message is displayed by the protection mechanism of Microsoft Windows.
Then I find out that all of my 3. Driver Easy will scan your computer and detect all problem drivers instantly. Note: if the driver is missing or corrupted, you will see a yellow mark next to the device. Way 1: Download and Install the Drivers from Manufacturers Manually. When you download drivers manually, ensure that you download the drivers from official manufacturers, which are definitely safe for your computer. I didn't have any issue with getting it to work on a Lenovo Thinkpad. If I plug in my external hard drive, I can see a light (which means power is there), but the drive will not show up in Windows Explorer.
defining some terms
So, the notation I use is motivated by the bra-ket notation as used in quantum mechanics, and invented by the famous physicist Paul Dirac. Note though that the mathematics of my scheme and that of QM are vastly different.
Let's define the terms:
<x| is called a bra
|x> is called a ket
Essentially any text (currently ASCII only) can be inside bra/kets except <, |, > and \r and \n. Though we do have some conventions, more on that later.
Next, we have operators, that are again text.
The python defining valid operators is:
from string import ascii_letters

def valid_op(op):
    if not op.isalpha() and not op == '!':
        return all(c in ascii_letters + '0123456789-+!?' for c in op)
    return True  # assumed fall-through: purely alphabetic operators and '!' are valid
Next, we have what I call "superpositions" (again borrowed from QM). A superposition is just the sum of one or more kets.
|a> + |b> + |c>
But a full superposition can also have coeffs (I almost always write coefficients as coeffs).
3|a> + 7.25|b> + 21|e> + 9|d>
The name superpositions is partly motivated by Schrodinger's poor cat:
is-alive |cat> => 0.5 |yes> + 0.5 |no>
This BTW, is what we call a "learn rule" (though there are a couple of other variants).
They have the general form:
OP KET => SUPERPOSITION
Next, we have some math rules for all this, though for now it will suffice to mention only these:
1) <x||y> == 0 if x != y.
2) <x||y> == 1 if x == y.
7) applying bra's is linear. <x|(|a> + |b> + |c>) == <x||a> + <x||b> + <x||c>
8) if a coeff is not given, then it is 1. eg, <x| == <x|1 and 1|x> == |x>
9) bra's and ket's commute with the coefficients. eg, <x|7 == 7 <x| and 13|x> == |x>13
13) kets in superpositions commute. |a> + |b> == |b> + |a>
18) |> is the identity element for superpositions. sp + |> == |> + sp == sp.
19) the + sign in superpositions is literal. ie, kets add.
|a> + |a> + |a> = 3|a>
|a> + |b> + |c> + 6|b> = |a> + 7|b> + |c>
And that is it for now. Heaps more to come!
Update: I guess you could call superpositions labelled, sparse vectors. By giving each element in a vector a name, it gives us the freedom to drop elements with coeff = 0. For large sparse vectors, this is a big win. For large dense vectors, there is of course a price to pay for all those labels. And since we have labels we can change the order of the elements without changing the meaning of the superposition. Say if we want to sort by coeff size. This is harder if you use standard unlabeled vectors. I guess the other thing to note about superpositions is that they allow you to define operators with respect to vector labels, and not vector positions.
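The labelled-sparse-vector view is easy to sketch in code. The following is a hypothetical toy model (not the semantic-db implementation), representing a superposition as a dict from ket labels to coeffs:

```python
def add_kets(*kets):
    """Sum (label, coeff) pairs into a superposition. Kets with the same
    label add their coeffs (rule 19), and since the result is keyed by
    label, element order doesn't matter (rule 13)."""
    sp = {}
    for label, coeff in kets:
        sp[label] = sp.get(label, 0) + coeff
    # dropping zero-coeff kets makes the empty superposition |> the identity (rule 18)
    return {label: c for label, c in sp.items() if c != 0}

def apply_bra(x, sp):
    """Apply <x| linearly across a superposition (rule 7), using
    <x||y> == 1 if x == y else 0 (rules 1 and 2)."""
    return sp.get(x, 0)

sp = add_kets(("a", 1), ("b", 1), ("c", 1), ("b", 6))
print(sp)                  # {'a': 1, 'b': 7, 'c': 1}
print(apply_bra("b", sp))  # 7
```

The last two lines mirror the |a> + |b> + |c> + 6|b> example above: the two |b> kets add into 7|b>, and <b| then reads off that coeff.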
previous: announcing the semantic db project
next: context learn and recall
by Garry Morrison
email: garry -at- semantic-db.org
this.$el and this.$refs not populated in mounted() hook on first client-side page render
When navigating to a page on the client side, if the page is being navigated to for the first time from the client side and if the root element of the page being navigated to is another component, then this.$el in the page component is a comment instead of the other component and this.$refs is an empty object when accessed in the mounted() lifecycle hook.
Versions
nuxt: v2.15.3
node: Not sure how to check this in Code Sandbox...
Reproduction
Link to Code Sandbox where you can see the issue.
Load the /about page first (refresh if needed to ensure the /index page has not been loaded from the client side) and then navigate to the /index page so that it is being rendered on the client for the first time.
Steps to reproduce
Create a pages/index.vue file with the following code:
<template>
<Dummy ref="dummy" />
</template>
<script>
export default {
mounted() {
console.log("index", this.$el);
console.log("index", this.$refs);
},
};
</script>
Create a pages/about.vue file with the following code:
<template>
<NuxtLink to="/">To Index</NuxtLink>
</template>
Create a components/Dummy.vue file with the following code:
<template>
<NuxtLink to="/about">To About</NuxtLink>
</template>
<script>
export default {
mounted() {
console.log("dummy", this.$el);
},
};
</script>
Run the project, and from the client start on the /about page first.
Navigate to the /index page. Note in the console that in the mounted() lifecycle hook in index.vue, this.$el is a comment and this.$refs is an empty object.
What is Expected?
As far as all Vue documentation seems to indicate, by the time mounted() is called this.$el should be the DOM element at the root of the template or the Vue component at the root of the template, and this.$refs should be populated. In a project I'm working on, I need to access this.$el in the mounted() hook, but since it is a comment I am not able to.
What is actually happening?
When navigating to a page on the client side, if the page is being navigated to for the first time from the client side and if the root element of the page being navigated to is another component, then this.$el in the page component is a comment instead of the other component and this.$refs is an empty object when accessed in the mounted() lifecycle hook.
I did some more digging on this and discovered that if you remove components: true from nuxt.config.js and then manually import the Dummy component, everything works fine.
After reading this StackOverflow answer, I suspect this may be because the components are imported differently in v2 of @nuxt/components. I recently upgraded from Nuxt v2.14 to v2.15 and only then did I start seeing this issue. Can anyone confirm that this is indeed the problem?
I love Nuxt, and it's a shame I can't take advantage of the components: true feature. For now it seems I'll have to manually import all components across my application to ensure reliable mounting behavior.
I found this https://github.com/nuxt/nuxt.js/issues/8879. FYI.
#8879
This solved my problem.
What do you think about update docs?
@liqueflies It's not guaranteed to work - and we're currently waiting on vuejs/vue#11963 which will solve the issue properly.
Closing since is duplicate of #8879. Explanation and workarounds: https://github.com/nuxt/nuxt.js/issues/8879#issuecomment-784172819
Also for clarification vue/11963 won't solve this issue.
@pi0 I don't think that other issue exactly matches this one. As you shared, it does make sense for $refs to be empty until the component is rendered, but I'm seeing $el is also not the correct value. Per the Vue docs, the mounted() hook is called "after the instance has been mounted, where el is replaced by the newly created vm.$el." This means that either Vue's or Nuxt's behavior is incorrectly documented, because $el should not be an empty comment in the mounted() hook, correct?
Actually, for async components, it is a different story. Until the actual component is being resolved, a placeholder is used in place. Please see this codepen using async components and ref with vue 2.x: https://codepen.io/pi0/pen/BaQEqbL
Options to resolve this issue are:
Not using async components (by using loader: true or directly importing component)
Using a timeout that gives a big enough window for the render to happen
Use updated instead of mounted
Also related issue: https://github.com/vuejs/vue/issues/2247
Sorry @pi0 and thank you for reply,
I can't understand why, if I roll back to Nuxt 2.14.x, it works with components: true and none of those workarounds.
It's something that should maybe be solved at $nextTick time, not with timeouts.
This is also breaking functional components, I guess.
Thank you!
I would like to propose that this new behavior is potentially very confusing in a number of different circumstances. I just ran into another situation where the usage of async components led to a bug which was very hard to find.
See the Code Sandbox link below for an example:
https://codesandbox.io/s/boring-bassi-et0fb?file=/pages/index.vue
In the example, when the root <Test /> component of the index.vue page is being loaded async, the id attribute on the <Nuxt /> component in the parent default.vue layout does not get added. So any styles that rely on the existence of that particular id do not get applied, and the styling of the page is broken.
This tutorial will make web UI testing easy. We will build a simple yet robust web UI test solution using Python, pytest, and Selenium WebDriver. We will learn strategies for good test design as well as patterns for good automation code. By the end of the tutorial, you’ll be a web test automation champ! Your Python test project can be the foundation for your own test cases, too.
- Web UI Testing Made Easy with Python, Pytest and Selenium WebDriver (Overview)
- Set Your Test Automation Goals (Chapter 1)
- Create A Python Test Automation Project Using Pytest (Chapter 2)
- You’re here → Installing Selenium WebDriver Using Python and Chrome (Chapter 3)
- Write Your First Web Test Using Selenium WebDriver, Python and Chrome (Chapter 4)
- Develop Page Object Selenium Tests Using Python (Chapter 5)
- How to Read Config Files in Python Selenium Tests (Chapter 6)
- Take Your Python Test Automation To The Next Level (Chapter 7)
- Create Pytest HTML Test Reports (Chapter 7.1)
- Parallel Test Execution with Pytest (Chapter 7.2)
- Scale Your Test Automation using Selenium Grid and Remote WebDrivers (Chapter 7.3)
- Test Automation for Mobile Apps using Appium and Python (Chapter 7.4)
- Create Behavior-Driven Python Tests using Pytest-BDD (Chapter 7.5)
With our new test project in place, let’s write some web UI tests with Selenium WebDriver!
WebDriver is a programmable interface for interacting with live web browsers. It enables test automation to open a browser, send clicks, type keys, scrape text, and ultimately exit the browser cleanly. The WebDriver interface is a W3C Recommendation. The most popular implementation of the WebDriver standard is Selenium WebDriver, which is free and open source.
WebDriver has multiple components:
- Automation Code. Programmers use language bindings to automate browser interactions. Common interactions include finding elements, clicking them, and scraping text. Typically, this is written with a test automation framework.
- JSON Wire Protocol. Language bindings encode every interaction using JSON and send them as REST API requests to the browser’s driver. The JSON wire protocol is platform- and language- independent.
- Browser Driver. The driver is a standalone executable on the test machine. It acts as a proxy between the interaction’s caller and the browser itself. It receives JSON requests for interactions and sends them to the browser using HTTP.
- Browser. The browser renders the web pages under test. It is essentially controlled by the driver. All major browsers support WebDriver. Each browser also needs its own driver type installed on the same machine as the browser and accessible from the system path. For example, Google Chrome requires ChromeDriver.
For our test project, we will use Selenium WebDriver’s Python bindings with Google Chrome and ChromeDriver. We could use any browser, but let’s use Chrome because (a) it has a very high market share and (b) its Developer Tools will come in handy later.
Make sure that the most recent version of Chrome is installed on your machine (To check/update Chrome, go to the menu and select Help > About Google Chrome. Or, download and install it here.) Then, download the matching version of ChromeDriver here and add it to your system path.
Verify that ChromeDriver works from the command line:
$ chromedriver
Starting ChromeDriver 73.0.3683.68 (47787ec04b6e38e22703e856e101e840b65afe72) on port 9515
Only local connections are allowed.
Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
Then, install Python’s selenium package into our environment:
$ pipenv install selenium --dev
Now, the machine should be ready for web testing!
Create a new Python module under the tests/ directory named test_web.py. This new module will hold our web UI tests. Then, add the following import statements:
import pytest
from selenium.webdriver import Chrome
from selenium.webdriver.common.keys import Keys
Why do we need these imports?
pytest will be used for fixtures
Chrome provides the ChromeDriver binding
Keys contains special keystrokes for browser interactions
As a best practice, each test case should use its own WebDriver instance. Although the setup and cleanup adds a few seconds to each test, using one WebDriver instance per test keeps tests simple, safe, and independent. If one test hits a problem, then other tests won’t be affected. Plus, using a separate WebDriver instance for each test enables tests to be run in parallel.
WebDriver setup is best handled using a pytest fixture. Fixtures are pytest’s spiffy setup and cleanup functions that can also do dependency injection. Any test requiring a WebDriver instance can simply call the fixture to get it.
Add the following code to test_web.py:

@pytest.fixture
def browser():
    driver = Chrome()
    driver.implicitly_wait(10)
    yield driver
    driver.quit()
browser is a pytest fixture function, as denoted by the @pytest.fixture decorator. Let’s step through each line to understand what this new fixture does.
driver = Chrome()
Chrome() initializes the ChromeDriver instance on the local machine using default options. The driver object it returns is bound to the ChromeDriver instance. All WebDriver calls will be made through it.
The most painful part of web UI test automation is waiting for the page to load/change after firing an interaction. The page needs time to render new elements. If the automation attempts to access new elements before they exist, then WebDriver will raise a NoSuchElementException. Improper waiting is one major source of web UI test “flakiness.”
The implicitly_wait method above tells the driver to wait up to 10 seconds for elements to exist whenever attempting to find them. The waiting mechanism is smart: instead of sleeping for a hard 10 seconds, it will stop waiting as soon as the element appears. Implicit waits are declared once and then automatically used for all elements. Explicit waits, on the other hand, can provide custom waiting for each interaction, at the cost of requiring explicit waiting calls. As a best practice, use one style of waiting exclusively for test automation. Mixing explicit and implicit waits can have nasty, unexpected side effects. For our test project, an implicit wait of 10 seconds should be reasonable (if your Internet connection is slow, please increase this timeout to compensate).
A pytest fixture should return a value representing whatever was set up. Our fixture returns a reference to the initialized WebDriver. However, instead of using a return statement, it uses yield, meaning that the fixture is a generator. The first iteration of the fixture, in our case the WebDriver initialization, is the “setup” phase to be called before a test begins. The second iteration, which will be the quit call, is the “cleanup” phase to be called after a test completes. Writing fixtures as generators keeps related setup and cleanup operations together as one concern.
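The generator mechanics behind a yield-style fixture can be seen without Selenium or pytest at all. This hypothetical stand-in drives a generator by hand the way pytest does under the hood: code before yield is setup, code after yield is cleanup.

```python
# Stand-in demo (no Selenium needed): pytest advances the generator once
# for setup, hands the yielded value to the test, then resumes it for cleanup.
events = []

def browser_fixture():
    events.append("setup")     # e.g. driver = Chrome()
    yield "driver"             # the value handed to the test
    events.append("cleanup")   # e.g. driver.quit()

gen = browser_fixture()
resource = next(gen)           # runs the setup phase, captures the yielded value
events.append(f"test uses {resource}")
next(gen, None)                # after the test, resumes past the yield for cleanup
print(events)  # ['setup', 'test uses driver', 'cleanup']
```

The cleanup step runs even though the generator is exhausted afterward, which is exactly the ordering pytest guarantees around each test.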
Always quit the WebDriver instance at the end of a test, no matter what happens. Driver processes on the test machine won’t always die when test automation ends. Failing to explicitly quit a driver instance could leave it running as a zombie process, which could consume and even lock system resources.
Now that we have the WebDriver ready to go, we can write our first web UI test! Check it out here 😎
Top 8 Python Cloud Management Projects
Software to automate the management and configuration of any infrastructure or application at scale. Get access to the Salt software package repository here. Project mention: Question On Salt (command line) | reddit.com/r/saltstack | 2021-12-16
Universal Command Line Interface for Amazon Web Services. Project mention: How to use AWS SSM Session Manager Plugin | dev.to | 2022-01-13
It turned out that this plugin is actually an open source project on GitHub, and this tool is used to power the start-session AWS CLI command to establish a shell session. The exact way to use it is undocumented, but one can check the AWS CLI's source code to see an example of how to use it.
A curated list of awesome Amazon Web Services (AWS) libraries, open source repos, guides, blogs, and other resources. Featuring the Fiery Meter of AWSome. Project mention: There are 40,000+ quality AWS open source repositories on GitHub but are completely unorganized. I made a search engine and browser for all of them, all curated carefully with 1000+ filters. | reddit.com/r/sysadmin | 2021-06-06
There is also https://github.com/donnemartin/awesome-aws
Infrastructure resource modeling for network automation. Open source under Apache 2. Public demo: https://demo.netbox.dev
Project mention: Netbox sites permissions | reddit.com/r/networking | 2022-01-19
AWS SDK for Python. Project mention: Which video course or book would you recommend for R on AWS? | reddit.com/r/Rlanguage | 2021-12-20
An integrated shell for working with the AWS CLI. Project mention: Starting to use AWS CLI at work. Need beginner tips. | reddit.com/r/aws | 2022-01-16
aws-shell will improve your life :) https://github.com/awslabs/aws-shell
A supercharged AWS command line interface (CLI). Project mention: What is the best program for making JSON CLI output more readable and manageable? | reddit.com/r/aws | 2021-11-01
I'd recommend giving https://github.com/donnemartin/saws a shot.
pyinfra automates infrastructure super fast at massive scale. It can be used for ad-hoc command execution, service deployment, configuration management and more. Project mention: What would you use to configure VMs? | reddit.com/r/devops | 2021-09-16
I personally like pyinfra a lot.
Python Cloud Management related posts
Netbox sites permissions
1 project | reddit.com/r/networking | 19 Jan 2022
Django Project on Github to Learn From - Django, Bootstrap
4 projects | reddit.com/r/django | 18 Jan 2022
Advice for managing locations/data centers/carriers
1 project | reddit.com/r/networking | 31 Dec 2021
My Most Loved AWS Developer Tools & Resources
4 projects | reddit.com/r/aws | 20 Dec 2021
Which video course or book would you recommend for R on AWS?
1 project | reddit.com/r/Rlanguage | 20 Dec 2021
MariaDB 10.7 General Availability: any idea when?
2 projects | reddit.com/r/mariadb | 15 Dec 2021
Network cable color standards
1 project | reddit.com/r/sysadmin | 2 Dec 2021
What are some of the best open-source Cloud Management projects in Python? This list will help you:
New Perspectives HTML5 and CSS3 7th Edition Tutorial 5 Case 3 Cauli-Wood Gallery
Cauli-Wood Gallery Sofia Fonte is the manager of the Cauli-Wood Gallery, an art gallery and coffee shop located in Sedona, Arizona. She has approached you for help in redesigning the gallery’s website to include support for mobile devices and tablets. Your first project will be to redesign the site’s home page following the principles of responsive design. A preview of the mobile and desktop versions of the website’s home page is shown in Figure 5-61.
Sofia has already written much of the HTML code and some of the styles to be used in this project. Your job will be to finish the redesign and present her with the final version of the page.
Complete the following:
1. Using your editor, open the cw_home_txt.html and cw_styles_txt.css files from the html05 case3 folder. Enter your name and the date in the comment section of each file, and save them as cw_home.html and cw_styles.css respectively.
2. Go to the cw_home.html file in your editor. Within the document head, insert a meta element that sets the browser viewport for use with mobile devices. Also, create links to cw_reset.css and cw_styles.css style sheets. Take some time to study the contents and structure of the document and then close the file saving your changes.
3. Return to the cw_styles.css file in your editor. At the top of the file, use the @import rule to import the contents of the cw_designs.css file, which contains several style rules that format the appearance of different page elements.
Explore 4. At the bottom of the home page is a navigation list with the id bottom containing several ul elements. Sofia wants these ul elements laid out side-by-side. Create a style rule for the bottom navigation list displaying it as a flexbox row with no wrapping. Set the justify-content property so that the flex items are centered along the main axis. Be sure to include the WebKit browser extension in all of your flex styles.
5. Define flex values for ul elements within the bottom navigation list so that the width of those elements never exceeds 150 pixels but can shrink below that value.
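Steps 4 and 5 might be written as follows; the selector `nav#bottom` is an assumption, so match it against the actual markup in cw_home.html:

```css
/* Steps 4-5 sketch: bottom navigation as a centered, non-wrapping flexbox row */
nav#bottom {
  display: -webkit-flex;
  display: flex;
  -webkit-flex-flow: row nowrap;
  flex-flow: row nowrap;
  -webkit-justify-content: center;
  justify-content: center;
}

nav#bottom ul {
  /* basis of 150px with no growth, but shrinking allowed */
  -webkit-flex: 0 1 150px;
  flex: 0 1 150px;
}
```

Setting the growth rate to 0 keeps each ul at or below 150 pixels, while the shrink rate of 1 lets it contract when space runs out.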
6. Sofia wants more highly contrasting colors when the page is displayed in a mobile device. Create a media query for mobile screen devices with maximum widths of 480 pixels. Within that media query, insert a style rule that sets the font color of all body text to rgb(211,211,211) and sets the body background color to rgb(51, 51, 51).
7. Sofia also wants to reduce the clutter in the mobile version of the home page. Hide the following elements for mobile users: the aside element, any img element within the article element, and the spotlight section element.
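A sketch of the mobile media query for steps 6 and 7, assuming the spotlight section carries `id="spotlight"`:

```css
/* Steps 6-7 sketch: high-contrast colors and decluttering for mobile screens */
@media only screen and (max-width: 480px) {
  body {
    color: rgb(211, 211, 211);
    background-color: rgb(51, 51, 51);
  }

  aside,
  article img,
  section#spotlight {
    display: none;
  }
}
```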
8. At the top of the web page is a navigation list with the ID top. For mobile devices, display the ul element within this navigation list as a flexbox row with wrapping. For each list item within this ul element, set the font size to 2.2em. Size the list items by setting their flex values to 1 for the growth and shrink rates and 130 pixels for the basis value.
9. Under the mobile layout, the six list items in the top navigation list should appear as square blocks with different background images. Using the selector nav#top ul li:nth-of-type( 1 ) for the first list item, create a style rule that changes the background to the background image cw_image01.png. Center the background image with no tiling and size it so that the entire image is contained within the background.
10. Repeat the previous step for the next five list items using the same general format. Use the cw_image02.png file for the background of the second list item, the cw_image03.png file for the third list item background, and so forth.
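Steps 9 and 10 could look like this inside the mobile media query (file names are those given above; only the first two items are written out):

```css
/* Steps 9-10 sketch: one centered, contained background image per list item */
@media only screen and (max-width: 480px) {
  nav#top ul li:nth-of-type(1) {
    background: url(cw_image01.png) center / contain no-repeat;
  }
  nav#top ul li:nth-of-type(2) {
    background: url(cw_image02.png) center / contain no-repeat;
  }
  /* ...continue with cw_image03.png through cw_image06.png
     for :nth-of-type(3) through :nth-of-type(6) */
}
```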
Explore 11. Sofia has placed hypertext links for the gallery’s phone number and e-mail address in a paragraph with the id links. For mobile users, she wants these two hypertext links spaced evenly within the paragraph that is displayed below the top navigation list. To format these links, create a style rule that displays the links paragraph as a flexbox row with no wrapping, then add a style that sets the value of the justify-content property of the paragraph to space-around.
12. She wants the telephone and e-mail links to be prominently displayed on mobile devices. For each a element within the links paragraph, apply the following style rule that: a) displays the link text in white on the background color rgb(220, 27, 27), b) sets the border radius around each hypertext to 20 pixels with 10 pixels of padding, and c) removes any underlining from the hypertext links.
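Steps 11 and 12 might be sketched as shown below, assuming the paragraph's id is `links`:

```css
/* Steps 11-12 sketch: evenly spaced, button-like phone and e-mail links */
@media only screen and (max-width: 480px) {
  p#links {
    display: -webkit-flex;
    display: flex;
    -webkit-flex-flow: row nowrap;
    flex-flow: row nowrap;
    -webkit-justify-content: space-around;
    justify-content: space-around;
  }

  p#links a {
    color: white;
    background-color: rgb(220, 27, 27);
    border-radius: 20px;
    padding: 10px;
    text-decoration: none;
  }
}
```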
13. Next, you’ll define the layout for tablet and desktop devices. Create a media query for screen devices whose width is 481 pixels or greater. Within this media query, display the page body as a flexbox in row orientation with wrapping.
14. The page body has four children: the header, the footer, the article element, and the aside element. The article and aside elements will share a row with more space given to the article element. Set the growth, shrink, and basis values of the article element to 2, 1, and 400 pixels. Set those same values for the aside element to 1, 2, and 200 pixels.
Explore 15. For tablet and desktop devices, the top navigation list should be displayed as a horizontal row with no wrapping. Enter a style rule to display the top navigation list ul as a flexbox with a background color of rgb(51, 51, 51) and a height of 50 pixels. Use the justify-content and align-items property to center the flex items both horizontally and vertically.
16. Define the flex size of each list item in the top navigation list to have a maximum width of 80 pixels but to shrink at the same rate as the width if the navigation list is reduced.
17. Sofia doesn’t want the links paragraph displayed for tablet and desktop devices. Complete the media query for tablet and desktop devices by hiding this paragraph.
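Putting steps 13 through 17 together, the tablet/desktop media query could take this shape (selectors again assume the ids described above):

```css
/* Steps 13-17 sketch: tablet and desktop layout */
@media only screen and (min-width: 481px) {
  body {
    display: -webkit-flex;
    display: flex;
    -webkit-flex-flow: row wrap;
    flex-flow: row wrap;
  }

  article {
    -webkit-flex: 2 1 400px;
    flex: 2 1 400px;
  }
  aside {
    -webkit-flex: 1 2 200px;
    flex: 1 2 200px;
  }

  nav#top ul {
    display: -webkit-flex;
    display: flex;
    background-color: rgb(51, 51, 51);
    height: 50px;
    -webkit-justify-content: center;
    justify-content: center;       /* center horizontally */
    -webkit-align-items: center;
    align-items: center;           /* center vertically */
  }

  nav#top ul li {
    /* cap at 80px, shrink in step with the list's width */
    -webkit-flex: 0 1 80px;
    flex: 0 1 80px;
  }

  p#links {
    display: none;
  }
}
```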
18. Save your changes to the style sheet and then open the cw_home.html file in your browser or device emulator. Verify that the layout and contents of the page switch between the mobile version and the tablet/desktop version shown in Figure 5-61 as the screen width is increased and decreased.
- Minimum of 3 years of Microsoft Technologies stack (ASP.Net, MVC, WEB API) having web development and UI development experience (frontend and backend).
- Proficient understanding of Single Page Application architecture and frameworks
- Must have exposure to any Relational DB (MSSQL, MYSQL).
- Strong understanding of data structure, SOLID Principles and problem solving skills.
- Strong understanding of Design Patterns for Real world problems.
- Conceptual knowledge of multi-threaded programming and performance monitoring tools.
- Experience in working on trending technologies, .Net Core, Node JS, NoSQL Databases.
- Experience in Micro-services architecture & Micro front end applications
- Experience with Unit Testing framework.
- Proficient understanding of Web UI test methodologies, frameworks and tools, such as BDD, Selenium.
- Must possess strong attention to detail, good aesthetic taste, and the ability to apply a user-centric design approach to produce a delightful and highly usable UI/UX.
- Additional Knowledge/experience Is a Plus
- Experience with automated deployment and associated technologies (helm/yaml/ansible/docker)
- Familiarity with code versioning tools
- Experience in Security Domain or Security Tools for Application Validation/Scanning will be a plus.
- Ability to effectively communicate design, specification, test and implementation details.
- Occasional flexibility to work outside of normal business hours to collaborate with remote teams.
- Proven tracks on the ability to work independently on assigned tasks.
- Excellent analytical and multitasking skills and ability to perform well in a fast-paced environment.
.NET Core, .NET MVC, .NET, Windows services, WebSockets, client-server, REST API, AngularJS, Angular 8+, Docker, microservices
About Acqueon Technology
We're looking for an Engineer/Senior Engineer to join our Operations Engineering Team. The Operations Engineering Team forms the backbone of our core business. We build and iterate over our core platform that handles orders, payments, delivery promises, order tracking, logistics integrations to name a few. Our products are actively used by Fynd users, Operations, Delights, and Finance teams. Our team consists of generalist engineers who work on building REST APIs, Internal tools, and Infrastructure for all these users.
- Build scalable and loosely coupled services to extend our platform
- Build bulletproof API integrations with third party APIs for various use cases
- Evolve our Infrastructure and add a few more nines to our overall availability
- Have full autonomy and own your code, and decide on the technologies and tools to deliver as well as operate large-scale applications on AWS
- Give back to the open source community through contributions on code and blog posts
- This is a startup so everything can change as we experiment with more product improvements
- You have prior experience developing and working on consumer-facing web/app products
- Expertise in Node.js and experience in at least one of the following frameworks - Express.js, Koa.js, Socket.io (http://socket.io/)
- Good knowledge of async programming using Callbacks, Promises and Async/Await
- Hands on experience with Frontend codebases using HTML, CSS, and AJAX
- Working knowledge of MongoDB, Redis, MySQL
- Good understanding of Data Structures, Algorithms and Operating Systems
- You've worked with AWS services in the past and have experience with EC2, ELB, AutoScaling, CloudFront, S3
- Experience with Frontend Stack would be added advantage (HTML,CSS)
- You might not have experience with all the tools that we use but you can learn those given the guidance and resources
- Experience in Vue.js would be plus
- Should be able to design robust backend architecture using different technologies to retrieve data from the servers.
- Creating databases and servers that are resilient to outages and run continuously.
- Ensuring cross-platform compatibility by creating applications that work on different platforms.
- Depending on the type of application, the developer is responsible for creating the API.
- The developer is responsible for building flexible applications that meet consumer requirements.
- Angular 6+ Experience is must
- Java experience
- Web application development (with API integration) experience
- UI design and implementation skills
- Quick Learner/Passion to learn
- Good Communication skills
Good to have:
- Flutter experience is optional
The task is to develop a platform for access to documents, books, and technical documentation, which are not publicly available on the Internet. It is oriented to those specialists and engineers who are engaged in innovations (for example, Toyota R&D). The main advantage of the platform is that all documents are collected in one place and there is a smart and specific search of the necessary documents, information inside the documents. Artificial intelligence and machine learning are used.
The customer is IHS, an American global information company (ranked 11th on the Forbes list of the fastest-growing innovative companies). It produces a unique proprietary software product whose users are world-renowned companies (NASA, SAMSUNG, SONY, etc.).
- Commercial experience in C#, .NET Core, Angular for 3+ years;
- Experience with CSS3;
- WEB API knowledge;
- DBMS skills - PostgreSQL;
- Experience with Entity Framework;
- Knowledge and experience with Azure Pipelines;
- Experience with Docker, Kubernetes;
- Experience with Elasticsearch;
- Experience with Kafka queue;
- Understanding of software development cycle, agile methodologies (Scrum/Kanban);
- Level of English – Intermediate.
- Experience with RabbitMQ;
- Experience with Redis;
- Experience with Microsoft SQL Server;
- Experience with AWS (or other cloud);
- Understanding of CI/CD principles.
Reasons to join us
Andersen is a pre-IPO software development company that provides a full cycle of services. For over 14 years, we have been helping enterprises and middle-sized firms worldwide transform their businesses by creating effective digital solutions using innovative technologies.
We welcome true specialists no matter what country they live in. Salaries at Andersen are pegged to the USD, and employees are provided with a social package and an extensive set of bonuses.
- Cooperation with such businesses as Samsung, Johnson & Johnson, Ryanair, Europcar, TUI, Verivox, Media Markt, Shypple, etc. This project is just your beginning here — working with us means reliability and prospects;
- Excellent teams with streamlined processes and an opportunity to change the project. There are also systems of mentoring and adaptation for each new employee;
- Many different ways to grow: you can develop expertise in different business domains and improve as a specialist or a manager. Transparent performance review and assessment systems will allow you to determine your development path and plan your growth;
- Flexible start of the working day: from 7 AM to 11 AM. You can telecommute, work at the office, or opt for a hybrid schedule — whatever is convenient for you;
- Referral programs and an opportunity to additionally earn up to $1,500 per month by participating in the company's activities;
- Access to the corporate training portal, where the entire knowledge base of the company is collected and which is constantly updated;
- Such perks as private health insurance, English language courses, and certification compensation (AWS, PMP, etc.).
We have an urgent opening for Full Stack developer role.
Location - Noida
Must-have skill: .NET Core + Angular
If some one is interested, please share your updated cv with me
As a Software Engineer, you will build and scale Clockwork, creating an engaging experience for our users. As one of the first few engineers, you will work on 0 to 1 type projects, partnering closely with our product, design and business teams to build the future of Clockwork.
- Write, test and deploy code to enhance our product
- Design software systems for new features
- Diagnose bugs and system bottlenecks as our product scales
- Work closely with product and design to define our roadmap
- 3+ years of professional experience as a software engineer
- Bachelor’s and/or Master’s degree, preferably in CS, or equivalent experience
- Solid engineering, coding and problem solving skills
- Strong product/business sense
- Excited about working in a fast-paced, dynamic startup environment
Clockwork encourages applications from people of all races, religions, national origins, genders, sexual orientations, gender identities, gender expressions and ages, as well as veterans and individuals with disabilities.
Visit us: https://www.saahihain.com
We are looking for a seasoned full-stack engineer (MERN) to build out our mobile application on AWS.
Design and implementation of the overall backend architecture
“Pixel-perfect” implementation of our approved user interface
Design and deployment of our database
Design and construction of our REST API
Integrating our front-end UI with the constructed API
Design and implementation of continuous integration and deployment
Have knowledge of live streaming, video communication, and AWS IVS
A relevant back-end programming language preferably Node.js
Database design and management, including being up on the latest practices and associated versions
Server management and deployment for the relevant environment
Familiarity with a relevant and globally supported framework—both front-end and back-end, if necessary—( e.g., React, Express, Meteor )
Ideally, familiarity with CSS preprocessors, bundlers, and associated languages/syntaxes/libraries ( e.g., Sass, Less, and webpack )
Thorough understanding of user experience and possibly even product strategy
Experience implementing testing platforms and unit tests
Proficiency with Git or similar version control tool
Appreciation for clean and well-documented code
Knowledge in AWS Amplify is a plus.
2. Working Experience of Ajax,Json
3. Rest/ SOAP API Integration
4. Good Logical and Analytical and communication Skills
5. Should have knowledge of CI/CD.
6. Must have experience in writing unit test cases and good in TDD approach.
7. Should have basic knowledge of HTML5 and CSS.
8. Ensuring high performance of applications on mobile/desktop
9. Knowledge of Jenkins, GIT, Docker, Linux (Basic)
11. Coordinating the workflow between the design team, the HTML coder, and yourself
12. Communicating with internal/external web services.
GOOD TO HAVE
1. Experience with Amazon web services, DocumentDB, etc.
2. Understanding of Ecommerce applications.
3. Understanding of fundamental design principles behind a scalable application.
4. Creating database schemas that represent and support business processes; implementing automated testing platforms and unit tests.
Roles and Responsibilities:
5. Design client-side & server-side architecture
6. Build the front-end of application through appealing visual design
7. Test software to ensure responsiveness and efficiency
8. Troubleshoot, debug and upgrade software
9. Build features and applications with a mobile responsive design
10. Ability to work effectively under very tight deadline pressure.
11. Analyze issues, recommend alternatives, and implement the best recommendation
12. Prioritize tasks and responsibilities while managing multiple, competing priorities
We’re looking for Full Stack Developer who will take a key role on our team. As a Full Stack Developer, you should be comfortable around both front-end and back-end coding languages, development frameworks and third-party libraries. If you’re also familiar with Agile methodologies, we’d like to meet you.
What We Want You to Do
- Design overall architecture of the web application.
- Participating in the design and creation of scalable software
- Collaborate with the rest of the engineering team to design and launch new features
- Maintain code integrity and organization
- Experience working with graphic designers and converting designs to visual elements
- Understanding and implementation of security and data protection
Technical Skills You Should Have
- Troubleshooting issues and problem solving as necessary
- Developing functional databases, applications and servers to support our websites on the back end.
- Coding for various platforms to ensure functionality across multiple channels.
- Leading and developing best practices for Full Stack Developer team.
- Developing and designing RESTful services and APIs
- Bachelor’s Degree in Computer Science or Computer Engineering
- Minimum 2 years of experience
a.cs(5,1): error CS0246: The type or namespace name 'yyy' could not be found (are you missing a using directive or an assembly reference?). The error surfaces in the application directive: <%@ Application Codebehind="Global.asax.cs" Inherits="h2.Global" %>. The Global.asax file has a single line containing this aspx directive.
ASP.NET Quick Guide – Free ASP.NET Tutorials, Reference Manual, and Quick Guide for Beginners. Learn ASP.NET starting from Environment Setup, Basic Controls.
I’m working on an MVC3 project and receive the following error: Parser Error Message: Could not load type ‘GodsCreationTaxidermy.MvcApplication’. Source Error.
ASP.NET Interview Questions And Answers – C# Corner – Oct 30, 2015. Global Theme For further information click on the link: Creating Web Application Using Themes in ASP.NET. Question 7: What is MVC? Answer: The Global.asax file, which is derived from the HttpApplication class, maintains a pool of HttpApplication objects and assigns them to applications.
ASP.NET Web form – SlideShare – Jul 17, 2013. NET Compilation • Website configuration • State Management • Caching • Global Application Class (Global.asax) • Culture (Localization and globalization) • ASP. net Grid. Special @Master directive <%@ Master Language="C#" CodeFile=" MasterPage.master.cs" Inherits="MasterPage" %> MAHEDEE.
In my previous article "ASP.NET: Revolution, not evolution," I demonstrated a simple ASP.NET Greeter application that handled a single button click event. In that application, event handlers were attached to the button Web server.
I have an ASP.NET MVC4 application and a problem in the Global.asax file: <%@ Application Codebehind="Global.asax.cs" Inherits=… — syntax error in global.asax.
Parser Error: Server Error in '/' Application. <%@ Application Codebehind="Global.asax.cs" Inherits. I changed the attribute Inherits in my Global.asax,
Syntax error in the @Application directive in global.asax. ASP.NET Forums on Bytes.
c# – "Could not load type [Namespace].Global" – In my .NET 2.0 ASP.NET WebForms app, I have my Global.asax containing the following code: <%@ Application CodeBehind="Global.asax.cs" Inherits="MyNamespace.Global.
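The common thread in these errors is a mismatch between the `Inherits` attribute in Global.asax and the actual namespace-qualified class name in the code-behind. A minimal consistent pair might look like this (the names are hypothetical):

```
<%-- Global.asax: Inherits must be the full name of the code-behind class --%>
<%@ Application Codebehind="Global.asax.cs" Inherits="MyApp.MvcApplication" Language="C#" %>
```

Global.asax.cs must then declare `public class MvcApplication : System.Web.HttpApplication` inside `namespace MyApp`; renaming either side without updating the other produces the "Could not load type" parser error.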
Email privacy is the broad topic dealing with issues of unauthorized access and inspection of electronic mail. This unauthorized access can happen while an email is in transit, as well as when it is stored on email servers or on a user computer. In countries with a constitutional guarantee of the secrecy of correspondence, whether email can be equated with letters and get legal protection from all forms of eavesdropping comes under question because of the very nature of email. This is especially important as relatively more communication occurs via email compared to via postal mail.
Email has to go through potentially untrustworthy intermediate computers (email servers, ISPs) before reaching its destination, and there is no way to tell if it was accessed by an unauthorized entity. This is different from a letter sealed in an envelope, where, by close inspection of the envelope, it might be possible to tell if someone opened it. In that sense, an email is much like a postcard whose contents are visible to everyone who handles it.
There are certain technological workarounds that make unauthorized access to email hard, if not impossible. However, since email messages frequently cross national boundaries, and different countries have different rules and regulations governing who can access an email, email privacy is a complicated issue.
There are some technical workarounds to ensure better privacy of email communication. Although it is possible to secure the content of the communication between emails, protecting the metadata (who sent email to whom) is fundamentally hard. Even though certain technological measures exist, widespread adoption is another issue because of reduced usability.
With the original design of the email protocol, the communication between email servers was plain text, which posed a huge security risk. Over the years, various mechanisms have been proposed to encrypt the communication between email servers. One of the most commonly used extensions is STARTTLS. It is a TLS (SSL) layer over the plaintext communication, allowing email servers to upgrade their plaintext communication to encrypted communication. Assuming that the email servers on both the sender and the recipient side support encrypted communication, an eavesdropper snooping on the communication between the mail servers cannot see the email contents. Similar extensions exist for the communication between an email client and the email server.
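As an illustration, upgrading an SMTP connection with STARTTLS takes only a few lines in Python's standard library (the host name below is a placeholder, not a real server):

```python
import smtplib
import ssl

def starttls_session(host: str, port: int = 587) -> smtplib.SMTP:
    """Open an SMTP connection and upgrade it to TLS via STARTTLS.

    `host` is a placeholder; a real deployment would point at its
    provider's mail submission server.
    """
    # The default context verifies server certificates and hostnames.
    context = ssl.create_default_context()
    server = smtplib.SMTP(host, port, timeout=10)
    # Raises an exception instead of silently continuing in plaintext
    # if the TLS handshake or certificate verification fails.
    server.starttls(context=context)
    return server
```

With `ssl.create_default_context()`, a failed certificate check aborts the session rather than falling back to plaintext, which is the behavior an eavesdropper-resistant setup needs.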
In end-to-end encryption, the data is encrypted and decrypted only at the end points. In other words, an email sent with end-to-end encryption would be encrypted at the source, unreadable to service providers like Gmail in transit, and then decrypted at its endpoint. Crucially, the email would only be decrypted for the end user on their computer and would remain in encrypted, unreadable form to an email service like Gmail, which wouldn’t have the keys available to decrypt it. Some email services integrate end-to-end encryption automatically.
Some popular methods for filtering and refusing spam include email filtering based on the content of the email, DNS-based blackhole lists (DNSBL), greylisting, spamtraps, enforcing technical requirements of email (SMTP), checksumming systems to detect bulk email, and by putting some sort of cost on the sender via a proof-of-work system or a micropayment. Each method has strengths and weaknesses and each is controversial because of its weaknesses. For example, one company's offer to "[remove] some spamtrap and honeypot addresses" from email lists defeats the ability for those methods to identify spammers.
Copyright © 2019 Aspire Tech, All rights reserved.
We owe pretty much everything that we are and have to innovation. That is, to our ancestors’ efforts (intentional or not) to improve their behaviors. But the rate of innovation has not been remotely constant over time. And we can credit increases in the rate of innovation to:
I commented on this post before it was ported to Substack with my suggestion of what the next meta-innovation could be. I found the corresponding post on the Wayback Machine:
Unfortunately, the Disqus there doesn't load and it seems all comments are gone.
I probably posted as Gunnar Zarncke. Robin, if you have access to the old Disqus threads, can you post my comment? I want to use it for a reply to Zvi's Response to Quintin Pope's Evolution Provides No Evidence For the Sharp Left Turn
"…all four seems to be due to better ways to diffuse, as opposed to create, innovations. Lump 1 was clearly the introduction of natural selection, where biological reproduction spreads innovations. Lump 2 seems somewhat clearly cultural evolution, wherein we learned enough how to copy the better observed behaviors of others."
I am not so sure about your emphasis on diffusion over creation of innovation. Natural selection works by generating minor variation in the copying process and then having differential selection. This is both a system of innovation discovery as well as propagation. They are tied together. As a side note, sex is a breakthrough in innovation (via recombination) and seems to have emerged around the time of nucleated cells and to be extremely important (though not essential) to multicellularity.
Similarly, cultural diffusion is creative in the act of being propagated. Minor variations are introduced in the act of copying others, and a new type of selection is created via the various choices we make in what to copy or preserve.
My point is that I would not put so much emphasis on diffusion breakthroughs over innovation breakthroughs. Indeed, I would actually add a third dimension to breakthroughs. Namely, breakthroughs in coordination or organization. I think the longer history of the universe can be modeled using a conceptual framework of meta breakthroughs in innovation, propagation and coordination.
Examples of the latter include cells, nucleated cells, multicellularity, social colonies (like ants), forager bands, villages, states, empires, global market networks and global scientific communities.
awesome summary and good guess. Can you recommend some good books about ems? I recently enjoyed the Bobiverse and plan to read The Age of Em.
If the industrial revolution was perhaps enabled by Gutenberg's printing press, what's interesting is that it took ~300 years to pay off in terms of observable economic growth (disclaimer: I don't know this history well). Likewise, maybe the technology that enables the next meta-innovation lump already exists but just hasn't paid off yet in terms of economic growth.
Put that way, there's an obvious candidate: the internet. Intuitively, arXiv, GitHub, blogs, etc. seem like a really big deal. Maybe they just haven't yet spawned their equivalent of the steam engine, whatever that turns out to be.
(I see you mention similar ideas in 6a.)
You make a copy of a worker and put them in the second factory at zero marginal cost; this is a cheaper way to do knowledge diffusion than was possible before workers could be copied cheaply.
Let me see if I can break this down:
1) Knowledge can be transmitted at all - via genes.
2) Knowledge can be transmitted culturally - via language (and thus faster than reproduction cycles).
3) Knowledge can be transmitted persistently - via artifacts and institutions (and thus grows with population).
4) Knowledge can be generated systematically and with low error (and thus reliably building on top of each other becomes possible).
My guess for the next stage is
5) Knowledge can be transmitted systematically and without loss.
Right now, copying knowledge is inefficient. Data can be copied efficiently but each human has to learn to apply it again. The process starts with birth again. But there is no way to copy humans *with* their knowledge and the ability to apply it. I think ems are such a way, but large ML models that can act on the world - AGI - would also count: These can be copied arbitrarily too. We see precursors for this: Programs - still implemented by humans but running without us can be copied too. It is a bit like stage 3 where some parts of the transmission process were automated too.
Interesting claim that all previous meta-innovations sped up the rate of diffusion of innovations - I can see how this is true of life & culture, but how is this true of farming, or of the industrial revolution? Both seem to me to be object-level innovations in energy production, not meta-level innovations in how innovations spread.
See "SLOW TUESDAY NIGHT" by R. A. Lafferty for a vision of a vastly accelerated human society. For instance:
"The panhandler was Basil Bagelbaker, who would be the richest man in the world within an hour and a half. He would make and lose four fortunes within eight hours; and these not the little fortunes that ordinary men acquire, but titanic things."
Say two different factories simultaneously figure out a slightly different better way of doing something. How do they spread both of those innovations?
I'm confused by this reply. If you can copy ems (or computer based AI), why is this prohibited from counting as "diffusing innovations"? You set up a factory in one place, you can literally make an exact copy of all the workers and set up a second factory with equal output (so there's only capital costs of duplicating equipment, computer chips).
Can you give an example of diffusing innovations where this would NOT count?
My guess would be you think "diffusing innovations" perhaps requires hybridizing the new innovation with a different knowledge base, so the new knowledge could feed into more knowledge. Is that right?
If you call it innovation when GDP rises, what do you call it when GDP drops?
Also, less out of skepticism than curiosity, do you have a reference for the assertion that GDP "is generally an acceptable measure for this purpose"?
"Faster diffusion means faster diffusion of false positives as much as true positives." That has always been true. Even so, we've seen huge increases in overall diffusion rates.
Faster diffusion means faster diffusion of false positives as much as true positives. We're already at the limits of how fast we can handle the information that is diffusing.
A lot of computation can be packed into more "selective" diffusion, which often means less rather than more diffusion. So it's not clear that we could do a _lot_ better on diffusion rates.
In biology, cancers "diffuse" the fastest, but they're clearly not the highest form of life. Viruses and bacteria also have the capability to double their "empire" in relatively short time frames, but ascending to become incorporeal replicating parasites (on a substrate of Ems?) hardly sounds like a glorious future to aspire to.
That is generally an acceptable measure for this purpose.
I may be wrong, but my sense is you've had a number of other ideas that got a very significant amount of traction, while this one not so much.
There are some plots of GWP out there. For example: https://www.openphilanthrop...
Debugging Titanium Apps With Chrome DevTools
- fill the code with logging statements
- attach the debugger provided by Titanium Studio
It turns out that there are times when none of these techniques is satisfying: using logging statements is a quick and dirty technique, but it can lead to long modify/rebuild/test iterations, before finally finding the right spot in the code that’s causing an issue. Using a proper debugger, like the one provided by Ti Studio is surely the way to go in most cases, however, switching back to Studio just for debugging can be a painfully slow operation. In other words, I needed a more “agile” way to start a debugging session.
Enter Ti Inspector
Ti Inspector is a tool I had a lot of fun building over the past few months, which allows debugging a Titanium app (only on iOS for the moment) through the Chrome DevTools debugging interface, i.e. the same panel that can be opened for inspecting and debugging a web page in Google Chrome (e.g. right-click on the page, then Inspect Element).
How was this possible? Actually, the DevTools panel provided by Chrome is nothing more than a pure HTML+CSS+JS web application, whose source code is part of the Blink project, i.e. the fork of the WebKit rendering engine that Google recently started. When opened inside of Chrome, the DevTools front-end directly communicates with the Chrome internals through a series of JS to native bindings, however it can also be used in a remote debugging setup, where the front-end is served from and connects to a remote Chrome instance, running a web application we want to inspect. In particular, the DevTools front-end application expects to communicate with a remote backend counterpart through a websocket connection, implementing a JSON-based RPC protocol, which is documented in detail here.
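To make that concrete, here is a rough sketch of the shapes of those RPC messages (method names taken from the public DevTools protocol documentation; the fields are simplified and this is not Ti Inspector's actual code):

```javascript
// Simplified shapes of DevTools remote debugging protocol messages, as carried
// over the websocket. Requests have an id and a "Domain.method" name; replies
// echo the id; asynchronous events carry a method but no id.
const request = { id: 1, method: 'Debugger.enable' };

const reply = JSON.parse('{"id":1,"result":{}}');

const event = JSON.parse(
  '{"method":"Debugger.paused","params":{"reason":"other","callFrames":[]}}'
);

// A front-end can tell replies apart from events by the presence of an id.
const isReply = (msg) => 'id' in msg;

console.log(isReply(reply), isReply(event)); // true false
```

The real protocol defines many more domains (Runtime, Console, and so on), but every message follows one of these two basic shapes.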
On the Titanium side, the debugger agent embedded in the app connects to a remote debug server, whose address is passed through the --debug-host argument to the Titanium CLI invocation, for example:
$ titanium build --platform ios --debug-host localhost:54321
Ti Inspector is the tool that allows these two worlds to successfully communicate, acting as a gateway between the Chrome DevTools remote debugging protocol and the Titanium debugger protocol. It does so by means of a node.js-based application, which implements the following mechanisms:
- It serves the DevTools web app from the default port 8080
- It listens for tcp connections on the default port 8999, where the Ti debugger agent will connect once the app starts
- It accepts websocket connections from the DevTools app
- Once both the debugger agent and DevTools app are connected, Ti Inspector translates commands, replies and asynchronous events from one protocol to the other, doing additional book-keeping and translating the descriptors of the necessary model elements (i.e. scripts, breakpoints, stack frames, variables, etc.).
How to use it
Ti Inspector is a node.js module, so as a basic prerequisite a working node.js setup is needed, then we can use npm for installing it globally:
$ [sudo] npm install -g ti-inspector
Once installed, we can cd to any Titanium Mobile application project directory and launch the ti-inspector command:
$ cd /path/to/your/titanium/project/directory
$ ti-inspector
Doing so, a web server starts listening on port 8080, and a debug server is attached to TCP port 8999.
Pointing a browser at localhost:8080, we'll get a page with a brief description of our application, telling us that no debug sessions are currently active. At this point, we can start our application through the Titanium CLI, specifying that the debug agent running in the app will need to connect to port 8999 on the localhost, for example:
$ titanium build -p iphone --tall --retina --debug-host localhost:8999
Once the app starts in the iOS Simulator, the debugger will connect to Ti Inspector and a new debugging session will be created. In the browser we can then open the DevTools app and start debugging.
Anyway, sometimes a screencast is better than a thousand words, so you can check out this short demo. The features currently supported include:
- Breakpoints: setting/removing breakpoints, conditional breakpoints
- Call stack inspection (when execution is suspended)
- Variables and objects inspection
- Watch expressions
- Step operations (step over, step-into, step-out)
- Console logging
- Expression evaluation in the console (only when execution is suspended)
- Suspend on exceptions (disabled by default)
Ti Inspector is currently at an alpha stage of development. Some features are still missing and will possibly be added as they become indispensable (e.g. Android emulator support), while others will probably never be taken into consideration (e.g. on-device debugging).
For completeness, some of the current limitations are the following:
- Android is not currently supported: for debugging Android apps, Titanium Studio does more heavy lifting, and the Ti debugger protocol is translated into the V8 debugging protocol by an internal component. Supporting Android will thus require implementing the V8 remote debugging protocol in Ti Inspector; this is something I'll likely work on in the near future
- On device debugging is not supported since it’s treated in a special way by the CLI and Studio.
- Expressions can only be evaluated when the execution is suspended
The source code is available on GitHub under the MIT license. Issue reports and pull requests are very welcome.
With reference to http://clang.llvm.org/compatibility.html#dep_lookup:
I've just run into exactly the problem with overloaded operator<< that
this page describes. I've modified my code as suggested, but out of
interest I'm curious as to why the unqualified lookup on dependent
names is done immediately rather than deferred until template
instantiation (when the argument-dependent lookup is done). It seems
to contradict the C++11 spec 14.6.2 (it looks like C++98 has the same
language as well), which says:
"If an operand of an operator is a type-dependent expression, the
operator also denotes a dependent name. Such names are unbound and are
looked up at the point of the template instantiation (14.6.4.1) in
both the context of the template definition and the context of the
point of instantiation."
In the example at the URL above, std::cout<<value has a second
argument which is type-dependent, so I would expect operator<< to be
looked up in the context of the point of instantiation, at which point
the appropriate overload is defined.
Ah, this one is amusing.
Actually, overload resolution is done at the point of instantiation.
However, for the template code to be valid, the name should exist at the point where the template is declared…
In the URL mentioned, you could perfectly well declare:
struct Useless; Useless Multiply(Useless, Useless);
Prior to the template, and it would suffice to appease the compiler. No definition of either Useless or its Multiply would be needed, because they will not be used in the end.
I sometimes wonder if this was done to “secure” the template instantiation by guaranteeing that at least one overload exists (even if not a suitable one, which we cannot know at this point), rather than, say, an object of that name.
The text you've quoted is rather imprecise about exactly what kinds of lookups are performed at each time. The lookup in the context of the template definition is unqualified name lookup + argument-dependent name lookup. The lookup in the context of the point of instantiation is only argument-dependent lookup.
Notionally, yes, name lookup occurs at the point of instantiation. But that doesn’t mean that name lookup finds names which have been declared since the template was defined. The relevant section is 14.6.4.2:
“For a function call that depends on a template parameter, the candidate functions are found using the usual
lookup rules (3.4.1, 3.4.2, 3.4.3) except that:
— For the part of the lookup using unqualified name lookup (3.4.1) or qualified name lookup (3.4.3), only
function declarations from the template definition context are found.
— For the part of the lookup using associated namespaces (3.4.2), only function declarations found in
either the template definition context or the template instantiation context are found.”
Thanks, that's the piece of the puzzle I was missing.
Security Tags provide the ability to allow or deny access for viewing or updating Things.
If you are looking to use tags for filtering the Things/Connections, you will want to use Tags. For more information about Tags, see Using Tags
If you are creating a Thing:
- Users with Security Tags (assigned through Roles) will be able to access the Things/Connections without Security Tags
- Users without Security Tags will not be able to access the Things/Connections with Security Tags
- Security Tags are used to define which Dashboards a user (defined at the role level) can access.
- Security Tags can be used to restrict access to certain data (attributes, properties, alarms, and methods) associated with Things via the Thing Definition.
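As an illustrative sketch of the visibility rule described above (a hypothetical helper function, not the platform's actual API): a Thing with no Security Tags is visible to everyone, while a tagged Thing is visible only to admins or to users whose role carries one of its tags.

```javascript
// Hypothetical sketch of the Security Tag visibility rule (not the platform's
// real API): untagged Things are visible to all, tagged Things only to admins
// or to roles carrying at least one matching View Security Tag.
function canView(userRole, thing) {
  if (thing.securityTags.length === 0) return true; // untagged: visible to everyone
  if (userRole.isAdmin) return true;                // admins see everything
  return thing.securityTags.some(tag => userRole.viewSecurityTags.includes(tag));
}

// Example: a role carrying the uofwiscmad tag sees Things tagged uofwiscmad,
// but not Things tagged restricted.
const role = { isAdmin: false, viewSecurityTags: ['uofwiscmad'] };
console.log(canView(role, { securityTags: ['uofwiscmad'] })); // true
console.log(canView(role, { securityTags: ['restricted'] })); // false
```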
It is possible to create custom roles manually by clicking the New role button and setting the permissions you would like your user to have (see Creating a Role). Oftentimes, though, there are already existing custom roles in the solution that can be re-used with slight modifications. Cloning an existing role (shown below) and then modifying the tags in the role lets you create custom roles quickly.
In the example image below, a custom role for the University of Wisconsin-Madison is defined by adding a key and name, and adding the uofwiscmad security tag to the View and Update Security Tags fields.
"_" (underscore) is not allowed in a Security Tag
When a tag is added to a Thing in the Security Tag field, only users assigned the Admin role, or a role containing the security tag associated with the Thing, will be able to view that Thing. When a custom role is created with security tag(s), the users assigned to that role will only be able to view the Things with that View Security Tag.
When Tags/Security Tags are set on an application, a new Thing auto-registering through that application will automatically be assigned those Tags and Security Tags. For example, all the Things using the NewApplication below will get the vehicle tag and the showbydefault Security Tag.
View Security Tags and Update Security Tags are available in the Thing Definition to restrict viewing or updating of attributes, alarms, properties, and methods. They work in the opposite way from the Security Tags used to show certain tagged Things: instead of adding tags to items you want the user to view, you add tags (not included in your custom role) to the View Security Tags field of the data points that you do not want the user to view. The same process is used for attributes, alarms, properties, and methods.
The image below shows the hidebydefault View Security Tag (not associated with a custom role) added to a property in the Thing Definition. A user logging in with a role carrying any other View Security Tag would not be able to see this CPU Usage property when viewing this Thing.
If you want the user to be able to view the property but not update it, add tags (not included in your custom role) to the Update Security Tags field associated with the data you don't want the user to be able to update. The same process is used for attributes, alarms, properties, and methods.
The image below shows the hidebydefault Update Security Tag (not associated with a custom role) added to a property in the Thing Definition. A user logging in with a role carrying any other Update Security Tag would not be able to update the CPU Usage property, but would still be able to view the data associated with it.
Methods can be hidden using security tags in the same manner as attributes, properties, and alarms, by adding a security tag (not included in your custom role) to the Security Tags field associated with the method you do not want your user to access.
In the example above, Update Main Firmware is restricted to user roles with the restricted security tag. Users in custom roles with other View Security Tags set will not be able to view or execute this method.
import { RECEIVES_QUESTIONS } from '../actions/questions'
import { CHOOSE_QUESTION } from '../actions/choose'
import { ADD_POLL } from '../actions/addPoll'
export default function questions(state = {}, action) {
  switch (action.type) {
    case RECEIVES_QUESTIONS:
      return {
        ...state,
        ...action.questions,
      }
    case CHOOSE_QUESTION: {
      const { id, userId, value } = action
      const question = state[id]
      const votes = question[value].votes.concat([userId])
      return {
        ...state,
        [id]: {
          ...question, // spread the question object
          [value]: { // update the chosen option
            text: question[value].text, // keep the existing text
            votes, // the updated votes array
          },
        },
      }
    }
    case ADD_POLL: {
      const { author, textOptionOne, textOptionTwo, idGenerate } = action
      return {
        ...state,
        [idGenerate]: {
          id: idGenerate,
          author,
          timestamp: Date.now(),
          optionOne: {
            votes: [],
            text: textOptionOne,
          },
          optionTwo: {
            votes: [],
            text: textOptionTwo,
          },
        },
      }
      // Shape of a stored question, for reference:
      // {
      //   id: 'xj352vofupe1dqz9emx13r',
      //   author: 'johndoe',
      //   timestamp: 1493579767190,
      //   optionOne: { votes: ['johndoe'], text: 'write JavaScript' },
      //   optionTwo: { votes: ['tylermcginnis'], text: 'write Swift' },
      // }
    }
    default: return state
  }
}
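A quick sanity check of the ADD_POLL branch above, trimmed to a self-contained snippet (the action type string here is assumed to equal the imported constant):

```javascript
// Self-contained check of the ADD_POLL branch (trimmed copy of the reducer
// above; the action type string is assumed to match the imported constant).
const ADD_POLL = 'ADD_POLL'

function questions(state = {}, action) {
  switch (action.type) {
    case ADD_POLL: {
      const { author, textOptionOne, textOptionTwo, idGenerate } = action
      return {
        ...state,
        [idGenerate]: {
          id: idGenerate,
          author,
          timestamp: Date.now(),
          optionOne: { votes: [], text: textOptionOne },
          optionTwo: { votes: [], text: textOptionTwo },
        },
      }
    }
    default: return state
  }
}

const next = questions({}, {
  type: ADD_POLL,
  author: 'johndoe',
  textOptionOne: 'write JavaScript',
  textOptionTwo: 'write Swift',
  idGenerate: 'abc123',
})
console.log(next.abc123.author, next.abc123.optionOne.votes.length) // johndoe 0
```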
- We welcome Owen Sablocik to the group! Owen is an undergraduate researcher getting his first research experience with us! We look forward to seeing his progress this semester!
- We welcome Alyssa Libonatti and Olivia Pear to the group! Alyssa is a Ph.D. student in Biological Engineering. Olivia is a Ph.D. student in Materials Science Engineering.
- Nicole Garza was awarded funding from Discovery Learning Apprenticeship Program at CU Boulder to work in our lab for the 2023-2024 academic year.
- Samson was awarded a Molecular Biophysics T32 Fellowship for the 2023-2024 academic year! Congrats to Samson!
- Kōnane traveled to Midland, Michigan to give an invited talk in the Adhesion Community of Practice Seminar series at Dow. Thanks for the invite.
- Carlos Ruiz Gonzalez joins the lab through the Summer Program for Undergraduate Researchers! We look forward to working with him this summer!
- Lydia Flackett will be continuing to work in our lab this summer! We look forward to continuing to work with her!
- Kōnane traveled down to Albuquerque to give two invited talks! One in the Department of Chemical Engineering at the University of New Mexico and the other at the Center for Integrated Technologies at Sandia National Laboratories! Thanks for the great conversations and the invitations!
- Samson was selected for the 2023 BioPACIFIC Materials Innovation Platform Summer School! He will visit UCLA for a week to learn more about 3D printing biomaterials!
- Nicole Garza and Teagan Kelly were awarded funding from the Undergraduate Research Opportunities Program to work part time in the lab this summer! Congratulations to Nicole and Teagan!
- Kōnane gave an invited talk in the Symposium on "Engineered Living Materials through Synthetic Biology-Beyond the Crossroad of Biology and Chemistry" at the Spring Meeting of American Chemical Society National Meeting. Thanks for the invitation!
- Nicole Garza is awarded the CHER 4 U fellowship from CU Boulder AIChE chapter to work in the lab this spring! Congratulations to Nicole!
- We welcome Teagan Kelly, Lydia Flackett, and Nicole Garza to the group! They are all undergraduate researchers getting their first research experience with us! We look forward to seeing their progress this semester!
- We welcome Nickolas Gibson, Ava Crowley, and Samson Adelani to the group! Nick is a Ph.D. student in Biological Engineering. Ava is a Ph.D. student in Chemical Engineering. Samson is a Ph.D. student in Materials Science Engineering.
- Kōnane gave an invited talk in the Department of Chemical and Biological Engineering at Colorado State University. Thanks for the invitation!
- Kōnane presented an outreach presentation with Etta Tsosie (Penn State) at the National Conference for American Indian Science and Engineering Society.
- The Huli Materials Lab officially opened at University of Colorado Boulder. We look forward to sharing our latest news and updates here.
Leaving Malaysia behind, Adam returned home to Hong Kong, while Dan, Kevin and I flew to Bangkok, where we met up with ten (ten!) of our herping friends. It took two vans to haul our collective asses around, and we engaged the services of TonTan Travel for logistics and guide services. Tony and Tan are fine, knowledgeable people and fun to be around – I had engaged them on my first trip to Thailand in 2016.
We had a day in Bangkok while everyone assembled, and a subset of us headed over to Lumphini Park to check out the free-range water monitors and turtles that make the urban park home.
Bill said “I think there’s a snake on that branch over there”, pointing to a small tree maybe 25 meters away. Bill, as I was to subsequently discover in Taiwan, is dialed in on the serpentine form, and sure enough, even my old eyes could spot the serpent as we closed in to secure it. It was a golden flying snake (Chrysopelea ornata ornata).
I was amazed that we would find a Chrysopelea here in this manicured environment, although I suppose there are plenty of lizards and other prey around.
The monitors in Lumphini are a joy to photograph, and some of them are legit monsters. See my blog post “The Water Monitors of Bangkok” for additional illumination.
I didn’t see as many turtles on this visit, but I did get to interact with a Malayan Box Turtle (Cuora amboinensis kamaroma) ambling about. The trip was off to a good start and I think the gang had a good time at the park.
Next morning we headed southeast towards Kaeng Krachan National Park, where we would herp for a few days. Our large group was settled into the nearby BaanMaka nature lodge, and the first snake there was a Fasciolated Kukri Snake (Oligodon fasciolatus).
Heading over to the park, Tony spied an Asian whipsnake (Ahaetulla prasina prasina) as we pulled up to the entrance, and the gang stopped for photographs.
Driving in the park, we saw a number of clouded monitors (Varanus nebulosus) clinging to trees or crossing the road. Not quite as big as Varanus salvator, but they still reach a respectable size.
The park features a campground and a little restaurant, and there are quite a few herps there as well.
A golden flying snake, poking its head out of the tree in the previous picture.
A few common sun skinks (Eutropis multifasciata), AKA snake food, were in the tree as well.
Another flying snake on the roof of the restaurant.
The beautiful Calotes emma, found just out back basking on a bench.
There were plenty of tokay geckos around the restaurant area, hiding behind objects and in crevices, and some of them were enormous.
There are watering holes for elephants and other wildlife along the park roads, and we investigated a number of them, coming up with some interesting herps.
Boulenger’s pricklenape (Acanthosaura crucigera) found in a thick section of forest.
A white-lipped tree viper (Trimeresurus albolabris).
Erik spotted a Nong Khor bushfrog (Chiromantis nongkhorensis) in some vegetation nearby.
The Baan Maka lodge was surrounded by forest and had its own collection of herps, including a juvenile tree viper that showed up at dinner.
The lodge had a nature trail that snaked around the perimeter of the property, and we found a number of herps while walking it, including a Phetchaburi Bow-fingered Gecko (Cyrtodactylus phetchaburiensis). By now I am completely enamored of this genus.
A Siamese leaf-toed gecko (Dixonius siamensis) back behind one of the cabins.
Two species of slug eaters turned up – a keeled slug snake (Pareas carinatus) and a spotted slug snake (Pareas macularius).
Plenty of red-eared frogs (Hylarana erythraea) were scattered across the hotel grounds at night.
Another day hiking around in Kaeng Krachan. Trees overhanging water is a pretty good spot for a tree viper.
Sure enough, a white-lipped tree viper was tucked back in that area.
Additionally, a pale-brown stream frog (Clinotarsus penelope) was found nearby.
A mock viper (Psammodynastes pulverulentus pulverulentus). This little bugger nipped me while I was posing it, leaving two shallow scratches on the ball of my thumb. Within a minute my thumb began tingling/buzzing, much like a scorpion sting does, which lasted for hours. After the bite I also experienced a brief episode of euphoria that lasted for maybe five minutes. It was an interesting experience to say the least.
Oud, a guide who works for Tony and Tan, turned up a beautiful black copper rat snake (Coelognathus flavolineatus).
One night we drove down out of the hills to an agricultural area, in search of a particular pit viper. Walking along the margin of a pineapple field, it didn’t take us long to find our target, the Malayan pit viper (Calloselasma rhodostoma). These snakes have enormous and elongated heads in proportion to their body size, reminding me of the Terciopelos (Bothrops asper) that I’ve seen on the Yucatán peninsula.
We found a number of Calloselasma around the margins of the field, and some other cool herps as well, including several banded bullfrogs (Kaloula pulchra). They are a photogenic species, and after dark they often climb off the ground in search of insect prey.
In ditches along the field we observed some rice paddy snakes (Hypsiscopus plumbea), formerly placed in the genus Enhydris.
Yellow-spotted keelbacks (Xenochrophis flavipunctatus) were also in the ditches. All in all, it was a productive and exciting evening in the pineapple fields.
We also took a day trip in pursuit of cobras, but missed out. An Indo-Chinese rat snake (Ptyas korros) was a nice consolation prize. It was awesome to finally see a snake I’ve been reading about for nearly fifty years.
We also saw a number of butterfly agamids (Leiolepis belliana belliana) in a brushy area with few trees. These lizards are extremely wary and have burrows that serve as bolt holes. We managed to get our hands on one of them and get close looks and photos.
Some of the group did some road cruising at night in one of the vans, including a Koh Tao caecilian (Ichthyophis kohtaoensis). As per usual with caecilians, it was a tough critter to photograph.
Also found on the road was a juvenile Burmese python (Python bivittatus). I was grateful that my first burm was in-country instead of in Florida.
We pulled up stakes and made an all-day drive north and east of Bangkok, to spend a few days at Khao Yai National Park. Khao Yai has some of the same herps as Kaeng Krachan, but different ones as well, and the park is home to a good number of Asiatic elephants.
The park was closed after dark because elephants, so we herped around some agricultural areas at night. It was very dry, and the herps were thin on the ground, but we did turn up some frogs and a snake or two, like this juvenile Indo-Chinese rat snake (Ptyas korros).
Frogs included this Chon-Buri bubble-nest frog (Feihyla hansonae).
We hiked Khao Yai during the day, and there were quite a few visitors on the trails, which always cramps our style a bit. A parachute gecko (Ptychozoon trinototerra) was found in one of the park buildings.
In the early afternoon a thirty minute cloudburst dumped an impressive amount of rain on the place, and afterwards, the snakes came out, including this handsome Ngàn-Son bronzeback (Dendrelaphis ngansonensis).
A specklebelly keelback (Rhabdophis chrysargos) also made an appearance on the trail.
The rain also drew out the pit vipers, looking to snack on any frogs out in the wet. First up was this big-eyed green pit viper (Trimeresurus macrops), and we saw a half-dozen or more of this species in a short period of time.
Another pit viper out in force was Vogel’s pit viper, Trimeresurus vogeli, and we saw a number of these snakes as well.
We ate a late supper at a restaurant in the park, and the bathroom had special frogs in the rinse barrel – Chantaburi warty frogs (Theloderma stellatum).
We were late heading out of the park, and had to get special permission to drive out the main gate, rather than driving the hours-long way around. At a sharp curve in the road, we were stopped by a phalanx of elephant butts – a big herd was crossing the road, and they had stopped. We could hear them trumpeting, and more elephants in the forest were breaking branches that made distinctive cracks and pops. The herd was not happy with us, and we soon found out why when three elephants appeared behind our vans. Apparently we had cut them off from the herd. The three behemoths to our rear disappeared from view after a bit, and we backed our vehicles up to allow them to reach the others. Eventually the entire herd moved on, but it was a tense forty five minutes for us in the meantime.
Our last herp of the trip, found the next morning before we headed back to Bangkok and the airport, was another flying snake, our sixth of the trip. I’m still hoping to see one in action someday.
Bill had to return to Taiwan but the rest of us headed for the next leg of this herping juggernaut – Vietnam.
I have just installed Linux Mint 14 and I cannot change the screen resolution. The appropriate resolution 1920 x 1200, just isn't among the options.
I have tried this solution but it reports:
xrandr: cannot find output "VGA1"
I have also tried this, but it reports:
Fatal server error: Server is already active for display 0 If this server is no longer running, remove /tmp/.X0-lock and start again. (EE) Please consult the The X.Org Foundation support at http://wiki.x.org for help.
My graphics card information:
Graphics: Card: NVIDIA GF108 [GeForce GT 440] bus-ID: 01:00.0 X.Org: 1.13.0 driver: nvidia Resolution: 1024x768@60.0hz GLX Renderer: GeForce GT 440/PCIe/SSE2 GLX Version: 4.3.0 NVIDIA 313.26 Direct Rendering: Yes
sudo lshw -class outputs:
*-display description: VGA compatible controller product: GF108 [GeForce GT 440] vendor: NVIDIA Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a1 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vga_controller bus_master cap_list rom configuration: driver=nvidia latency=0 resources: irq:16 memory:fa000000-faffffff memory:c0000000-cfffffff memory:d0000000-d1ffffff ioport:e000(size=128) memory:fb000000-fb07ffff *-display description: Display controller product: 2nd Generation Core Processor Family Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 09 width: 64 bits clock: 33MHz capabilities: msi pm bus_master cap_list configuration: driver=i915 latency=0 resources: irq:57 memory:fb400000-fb7fffff memory:b0000000-bfffffff ioport:f000(size=64)
Running xrandr command in terminal outputs:
Screen 0: minimum 320 x 200, current 1024 x 768, maximum 8192 x 8192
DVI-I-1 disconnected (normal left inverted right x axis y axis)
HDMI-3 disconnected (normal left inverted right x axis y axis)
VGA-2 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
   1024x768       60.0*
   800x600        60.3     56.2
   848x480        60.0
   640x480        59.9
This is without nvidia driver installed.
After executing the command
xrandr --newmode "1920x1200_60.00" 193.25 1920 2056 2256 2592 1200 1203 1209 1245 -hsync +vsync
it reports:
X Error of failed request: BadName (named color or font does not exist) Major opcode of failed request: 140 (RANDR) Minor opcode of failed request: 16 (RRCreateMode) Serial number of failed request: 29 Current serial number in output stream: 29
When I try solution from here running
sudo Xorg -configure, I get:
Fatal server error: Server is already active for display 0 If this server is no longer running, remove /tmp/.X0-lock and start again. (EE) Please consult the The X.Org Foundation support at http://wiki.x.org for help. (EE)
Can you please guide me on how to proceed? Also please see the comments in Dave's answer below.
Last week at the Minnesota GIS/LIS conference in St. Cloud, MN, I attended a session presented by Chris Pouliot, one of the main developers behind the popular DNR Garmin desktop application developed and maintained by the Minnesota Department of Natural Resources (MN-DNR). DNR Garmin is designed to transfer data between common GIS formats (e.g., Esri shapefiles) and recreation-grade Garmin GPS receivers. Chris announced that a major new version of DNR Garmin is in the works, with a public release planned for early 2012.
While I’ve personally used DNR Garmin for several years, I honestly didn’t have a sense for the large following behind the application. As you would expect, it’s used extensively within MN-DNR to support the mapping of water access sites, hunting violations, building locations, and much more, but it’s also used by federal agencies including the National Park Service, and many other users from all over the world.
Now Open Source
Perhaps one of the most noteworthy tidbits I got out of the session was the movement to make DNR Garmin an open source project. Anyone may now participate in the project and help develop and maintain DNR Garmin, which hasn’t had a major update since 2008. Chris mentioned they’ve had many requests for updates over the years, so the move to open source will allow a wider community of developers to add enhancements to the software.
Chris cited a number of reasons for the recent push to upgrade DNR Garmin. First and foremost is the simple fact that the software hasn't been updated since 2008; as a result, it is not completely compatible with Esri's ArcGIS 10.x. Another driver was the lack of support for Visual Basic 6, the development language used for previous versions of DNR Garmin. In addition, Garmin has released a number of new GPS receivers since 2008, and updates are needed to ensure compatibility with these newer devices.
New version, new technology
The project team is completely re-writing the application in C#, while utilizing code from a number of other open-source projects such as GPS Babel, Proj.4, and GDAL. The goal is to keep the user interface as close as possible to the existing version 5 interface, so the update doesn’t force a major re-training for users. The effort involved in re-writing nearly 15,000 lines of code developed over 10 years is not trivial, which in part drove the move to make the project open source.
If all goes as planned, DNR Garmin 6.0 will be available for testing late in 2011. Chris hopes to make v6.0 available during early 2012.
To stay in touch with the latest on DNR Garmin, I suggest joining the e-mail listserv, or visiting the project Wiki. Check out the DNR Garmin Web site to download the current version, and learn more about its capabilities.
|
OPCFW_CODE
|
When you visit a website, your computer needs to convert the domain name to an IP address. DNS (the Domain Name System) performs this translation, mapping a name like google.com to a numeric IP address.
DNS servers are distributed and are constantly updating each other.
Every computer has a name server. If you’re a home user, it’ll most likely point to your router. This can be changed to Google DNS or OpenDNS, etc.
When you register your domain name, the registrar will have some default name servers for you and allow you to change them.
You get this name server from either your registrar, hosting site, or another 3rd party (dnsmadeeasy.com) DNS service. From there you can assign an IP address to your domain name (given by your webhost). Just create an “A” record with “www” and you’re set. Keep in mind some hosts automatically assign you an IP based on your domain name (lunarpages.com) so all you have to do is use their name servers.
This is where the delay comes in. Some DNS companies make the changes immediately and some only update a few times a day. Once the changes are made, it will notify other DNS servers about the change.
How DNS Works:
Your computer has a DNS server it refers to. You can find out what this is by typing “ipconfig /all” in Windows or “cat /etc/resolv.conf” in Linux.
Your local name server doesn't know anything except the root name servers. Run "nslookup -type=ns ." to list the root name servers your local name server knows about.
Your computer uses these root name servers to first look up the “.com” (Top-level Domain) portion of your request which returns a list of TLD name servers.
From this list you’ll look up the second part of the domain “google.com” with “nslookup -type=ns google.com (ip address from list).” This will return a list of name servers that will have the IP of the domain.
Finally type in “nslookup google.com (one of the dns servers from the last list)” and you will get the IP address. Translation is complete!
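The iterative walk above (root, then TLD, then authoritative server) can be sketched as a toy model. All server names and addresses below are invented for illustration; a real resolver queries actual servers over the network rather than looking things up in dictionaries:

```python
# Toy model of iterative DNS resolution, mirroring the nslookup
# walkthrough above. All names and IPs here are made up.
ROOT = {".": {"com.": "tld-server.example"}}                 # root knows TLD servers
TLD = {"tld-server.example": {"google.com.": "ns1.google.example"}}
AUTHORITATIVE = {"ns1.google.example": {"google.com.": "192.0.2.10"}}

def resolve(name):
    """Walk root -> TLD -> authoritative, like a resolver does."""
    tld_server = ROOT["."][name.split(".", 1)[1]]            # e.g. "com."
    auth_server = TLD[tld_server][name]                      # NS for the domain
    return AUTHORITATIVE[auth_server][name]                  # final A record

print(resolve("google.com."))  # -> 192.0.2.10
```

Each dictionary lookup stands in for one round-trip query; the resolver only learns the final IP at the last step, which is why caching (below) matters so much.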
Now your local name server will remember this translation up to a certain period of time. Your browser will also cache the translation so it doesn’t even have to refer to your local name server.
How long will the translation be cached? This is the main cause of the DNS delay. You can run “dig google.com” to find out how long the TTL (Time To Live) is in seconds. The ANSWER SECTION shows how long the local name server will remember the translation and the AUTHORITY SECTION tells you how long it’ll remember the DNS server used.
If you change the IP of your domain, the length of the delay is partially dependent on the speed at which your DNS service makes the changes. This can take anywhere from a few minutes to 24 hours. My DNS manager (dnsmadeeasy.com) makes the changes instantly.
Once the changes are set, they have to propagate to all name servers, which can take a long time; the average is 24 hours. So the combined delay from the DNS company and propagation could be up to 48 hours.
Obviously you can make propagation much faster if you lower the TTL. Some DNS services don't allow you to change this value, but it can range from 300 seconds (5 minutes) to 86,400 seconds (24 hours). If you have a TTL of 300 set and you change your IP, it will take the rest of the world at most 5 minutes to pick up the new IP.
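The caching behaviour behind all of this can be illustrated with a minimal TTL cache. This is a simplified sketch (real resolvers also cache NS records and negative answers); it uses an injected clock so the expiry is easy to see:

```python
import time

# Minimal TTL cache -- the mechanism behind the TTL you see in
# dig's ANSWER SECTION. The 300-second TTL matches the 5-minute
# example in the text above.
class DnsCache:
    def __init__(self):
        self._store = {}  # name -> (ip, expiry_timestamp)

    def put(self, name, ip, ttl, now=None):
        now = time.time() if now is None else now
        self._store[name] = (ip, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(name)
        if entry is None or now > entry[1]:
            return None           # miss or expired -> must re-query upstream
        return entry[0]

cache = DnsCache()
cache.put("example.com", "192.0.2.1", ttl=300, now=1000)
print(cache.get("example.com", now=1200))  # within TTL -> 192.0.2.1
print(cache.get("example.com", now=1400))  # past TTL  -> None
```

Until the entry expires, every client of this cache keeps seeing the old IP; that window is exactly the "propagation delay" people complain about.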
|
OPCFW_CODE
|
Automatic responses in messengers
What can the bot do?
- Collect subscribers.
- Gather feedback.
- Gather ratings.
- Automatic responses to open and close conversations.
- Automatic responses after hours.
- Quick response search.
- Monitor sales in Telegram and WhatsApp groups.
The ChatApp quick response bot is a service that significantly simplifies and speeds up interaction with customers in ChatApp channels.
To use the bot, all you need to do is work in ChatApp channels and fill in the Quick Responses Database.
How to add a new bot?
- Go to your ChatApp Dashboard and select "Bot" from the left side menu.
- Click on the "+ Add" button in the top right corner.
- Select a period and pay for the bot.
How to set up a bot?
- Come up with a Bot name that will be displayed in the web chat, and then select a license and messengers in which the Bot will work.
- Set up the Cyrillic check. This check lets you segment the bot's responses for Russian-speaking and foreign audiences: for example, the bot can answer only if the client's messages contain Russian text, or stay silent if they do not.
- Select the bot type (in this case, "ChatApp bot - quick responses") and the number of responses the bot may send.
- Write the introductory phrase that the bot will send before responding to the client's message, for example: "Look what I found...".
- Set the connection parameters for the cognitive service, in this case QnA Maker Q&A.
- Customize Status and Display Bot Name, and then click on the "Save Bot Settings" button.
How does a bot work in a web chat?
After setting up the Bot in your personal account, it automatically becomes available in the web chat and reacts to new messages that come to the connected messenger.
- You can enable the setting "Disable the bot if the employee answered" in the Bot settings in the Personal Account.
- In this case, if you or your employee replies to a new message, the Bot will not interfere in the dialogue.
Recommendations for working with the Bot
- Structure your Quick Responses Database carefully. A well-organized database makes it easier for the bot to search, which significantly reduces the number of errors in its answers.
|
OPCFW_CODE
|
Backend Capabilities Overview
Navigating your application's backend and data capabilities requires a solid understanding of the functionalities available to you. This document aims to provide you with a detailed look into these offerings.
With 8base, you can construct a relational model to bolster your application. This involves defining tables and views that depict the necessary information. Tables can be associated in one-to-one, one-to-many, and many-to-many patterns, effectively addressing all potential scenarios for data representation.
When it comes to available data types for your columns, you can choose from:
- Text - Ideal for storing plain or formatted text (HTML and Markdown supported)
- Number - Available in both float and integer variations
- Date - Includes datetime
- Switch - Offers different variations of boolean or the choice from an enumeration
- File - Used for content storage
- Smart - Handles specifically formatted text like addresses or phones
- JSON - Manages hierarchical documents
- GEO - Stores geographical references
Each field has additional capabilities, such as storing multiple values, mandating specific fields, setting default values, or performing calculations.
Upon defining your data schema, 8base generates a comprehensive GraphQL API that includes:
- Read one - Retrieves single records and their related data
- Read many - Retrieves multiple records, their related data, and, if needed, organizes those in groups
- Create - Adds records
- CreateMany - Adds multiple records
- Update - Modifies a record
- UpdateByFilter - Modifies multiple records that match given criteria
- Delete - Soft-deletes a single record
- DeleteByFilter - Soft-deletes multiple records that match given criteria
- Restore - Reverses soft-delete operations
- Destroy - Removes a single record
- DestroyByFilter - Removes multiple records that match given criteria
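As a rough sketch of what consuming this generated API looks like, the snippet below builds a "Read many" request with a filter. The workspace URL, table name (clients), and field names are hypothetical: 8base derives the actual operation names from your schema, so check the generated API explorer for the real ones.

```python
import json

# Hypothetical endpoint -- 8base gives each workspace its own API URL.
ENDPOINT = "https://api.8base.com/<workspace-id>"

# A "Read many" query with a filter; operation and field names are
# illustrative and depend on the tables you defined.
query = """
query {
  clientsList(filter: { name: { contains: "Acme" } }) {
    items { id name createdAt }
  }
}
"""

# GraphQL over HTTP is just a JSON body with a "query" key.
payload = json.dumps({"query": query})

# Sending it would be a plain POST with an Authorization header, e.g.:
#   urllib.request.Request(ENDPOINT, payload.encode(),
#       headers={"Content-Type": "application/json",
#                "Authorization": "Bearer <api-token>"})
print(json.loads(payload)["query"].strip().startswith("query"))  # -> True
```

The same transport shape carries every operation in the list above; only the operation name and arguments change.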
The API also exposes GraphQL subscriptions, which allow for real-time updates and notify the application when a record changes. A subscription can update the app when:
- A mutation such as create, update, or delete occurs
- A filter match is detected
- Specific fields are updated
Resolvers offer an additional Query or Mutation to be included in the API. They can be used to integrate with third-party APIs, assist in data querying or coercion, and run custom algorithms.
Tasks are long-running operations that can be run on demand or on a schedule. They are ideal for batch operations or providing additional processing without risking a timeout from the caller.
Triggers are functions that run in response to a data mutation event, such as creating, updating, or deleting an object. They help to run important actions as callbacks to data commits without burdening client apps with web requests.
Webhooks allow for the calling of custom functions as regular RESTful endpoints. They facilitate integration with a third-party service to post data back to an application using a specified URL.
Roles and Permissions
Roles and permissions in 8base determine the specific actions a user is authorized to undertake within a project. Users can be assigned to one or multiple roles to manage their entitlement within the applications.
Permissions specify what data can be read, created, updated, or deleted and can also indicate what permissions the role has for each field. A powerful feature is the ability to define entitlements using data rules, which allows developers to set if a user can interact with a given record if that record matches certain criteria. It also outlines what custom functions can be invoked by a given role.
Authentication
8base provides authentication services for each backend. Developers can choose which engine will handle the storage of passwords and the validation of user input:
- 8base authentication - Running on AWS Cognito, it provides an integrated experience within 8base without additional services.
- Auth0 - For customers with existing authentication schemes with Auth0, they can integrate it into their 8base backends.
- OpenID - For developers leveraging a compliant service to manage their users' authentication needs.
The User table in 8base stores all the users, except for the passwords, which are stored in the underlying service. Developers can use a set of mutations to interact with the authentication services or, when using 8base authentication, a "hosted login", which is an external page to handle user registration and password operations.
Continuous Integration/Continuous Deployment (CI/CD)
8base implements CI/CD through the use of the environment branching feature. This allows developers to create multiple environments within a backend to manage different stages of development, such as production, staging, and development. Each environment has its own unique URL/API endpoint.
The CI/CD workflow in 8base typically involves the following steps:
- Branching: Developers clone an existing environment to create new ones. This allows for isolated development environments for each developer or team.
- Migration: Developers generate and review migration files for system data updates. They can switch between their environment and the parent environment to check the differences and ensure only the necessary migrations are committed.
- Commit: Developers commit their local migrations and/or custom logic to the parent environment using the 8base CLI migration commit command.
- Deployment: Controlled using the 8base CLI, the deployment can be done in different modes, such as full or migrations-only.
- Validation: The CI/CD system in 8base includes validation steps to ensure the integrity and correctness of the deployment.
- Rollback: In case of any issues or errors, developers can rollback the deployment to a previous state.
To enable CI/CD in a workspace, you need to subscribe to an 8base plan where the feature is enabled. You should also have the latest version of the 8base CLI installed.
In summary, 8base has built-in CI/CD capabilities to help developers and teams easily manage professional software quality controls when developing their applications.
|
OPCFW_CODE
|
Get Started With the EagLED
The EagLED is a beginner-friendly and fully Arduino-compatible micro-controller packed with LEDs, a light sensor, pushbutton switch, and a battery board all built into a handy snappable board. It won't snag on your garments either, so you can easily use it in your wearable or e-textile projects.
In this guide, we will take a closer look at the various parts of the EagLED as well as how you can set it up to be programmed using the Arduino IDE.
Complete this guide to get up to speed and start creating with the EagLED!
|Parts Used in This Guide|
Step 1 A First Look (Front side)
On the front side of the EagLED, you'll notice several snappable shapes that can be easily separated and sewn with conductive thread to make wearable electronics.
Step 2 A First Look (Back side)
Flip it over and on the back side, you will see the labels:
- LIGHT SENSOR
- COIN CELL BATTERY HOLDER
- Manufactured in Australia by Little Bird Co
Step 3 Indicator LEDs
There are four indicator LEDs on the main board. They are: TX, RX, PWR and USR.
RX stands for Receiving pin and is used for serial communication. Whenever the EagLED receives data serially, the LED connected to RX pin will blink.
TX stands for Transmitting pin and is also used for serial communication. Whenever it sends data serially, the LED connected to TX pin will blink.
PWR - This is the indicator light signifying that the board is powered.
USR - This is a user-controlled LED, and it can be accessed as Pin 7 in an Arduino sketch.
Step 4 LEDs
As you might notice, there are six snappable boards that have built-in LEDs.
Each of these boards has either two or three sewing tab pads.
The following LEDs and their associated pins can be programmed using the Arduino IDE:
left eye - Pin 3
right eye - Pin 10
left star - Pin 0
right star - Pin 6
left heart - Pin 1
right heart - Pin 12
Step 5 Light Sensor
One of the triangular snappable boards has a built-in light sensor.
This board has three sewing tab pads.
The light sensor can be programmed by using Analog Pin 9 in the Arduino IDE.
Step 6 Button
The other triangular snappable board has a built-in button.
This board also has three sewing tab pads.
The button can be programmed using Pin 2 in the Arduino IDE.
Step 7 Coin cell battery holder
The coin cell battery holder board accepts a CR2032 coin cell battery, and has an ON/OFF switch as well as 4 sewing tab pads.
Step 8 Main board
On the main board, you'll see the following pins:
Step 9 Get the Arduino IDE
To get started with programming the EagLED, you'll need the Arduino IDE!
Head to the Arduino Downloads webpage and download the software for your operating system.
Step 10 Preferences
After downloading and installing the software, open up the Arduino IDE!
To add board support for our products, start Arduino and open the Preferences window.
On Windows or Linux, click (File > Preferences)
On a Mac, click on the (Arduino Menu > Preferences)
Step 11 Additional boards manager
Copy and paste the following URL into the 'Additional Boards Manager URLs' input field: https://raw.githubusercontent.com/littlebirdelectronics/Arduino_Boards/master/IDE_Board_Manager/package_littlebird_index.json
Note: If there is already an URL from another manufacturer in that field, click the button at the right end of the field. This will open an editing window allowing you to paste the above URL onto a new line.
Step 12 Boards manager
- Now open up the Boards Manager by clicking on Tools > Board
- Scroll to the top of the board list, and select Boards Manager.
- If you type "Little Bird" (without quotes) into the "filter your search" field, you will see options to install Little Bird's board files. Click in the desired box, and click the "Install" button that appears.
- Once installed, the boards will appear at the bottom of the board list. Select 'Little Bird EagLED'.
- Now you can go on to program the EagLED with the Arduino IDE!
Step 13 Example sketch
With that set up, you will now find the example sketches for the EagLED!
Click on Files > Examples
You should now see the example sketches in 'Examples for Little Bird EagLED'.
|
OPCFW_CODE
|
You may have noticed from some of my previous blog posts on the concept of worth that there's an argument I hear from writers semi-regularly which really gets up my nose. It's never explicitly stated this simply, but I have, several times, seen it stated in such a way that the context meant it came across like this:
"My work is worth something! Therefore, I should charge a high price for it." (where 'a high price' means 'something above the .99c/$2.99 Amazon standard').
Now, this argument (let's call it the 'worth argument') sounds simple but plausible, and I can see why people make it. But my experience is pretty consistent on the point that arguments which sound simple but plausible turn out to be simplistic and actually deeply problematic. So this one has been getting up my nose for a while. I've tried to tackle it before, but I've always ended up wide of the mark one way or another. What follows is my latest attempt.
Price and Worth
Ultimately, I think the problem with the worth argument is that 'worth' is a much more complicated concept than 'price'. To say that the worth of something directly translates into its price - particularly when, as the worth argument as specified above does, 'price' is understood as purely monetary - is to miss out on a great deal of subtlety within both concepts, particularly worth.
To see this, we need to start by looking at what money is, or what its purpose is. Money exists as an approximation to allow us to exchange things that have one kind of value for another. If I'm a farmer with a lot of potatoes, and you pay me some money for some of them, I can then go and buy, let's say, a fish from the fishmonger. I've exchanged one kind of value - crudely, carbohydrate - for another - protein.
(Sidebar: you will hear some people - some of Ayn Rand's characters and disciples, for example - say that money, at least as underwritten by the gold standard, has a fixed value in and of itself. They are utterly, completely wrong, and the view is actually incompatible with the rest of the capitalist system they champion, but that's a topic for another post.)
The problem comes in when you realise where the price of an item - the amount of money you get for it - comes from. When I sell my potatoes, I'm selling them because I can't use them myself; one man can only eat so many potatoes before they spoil. Likewise, the fish I buy from the fishmonger is not going to be a fish he was intending to eat.
In both cases, the price measures the worth of something that the seller (who is also the price-setter) can't use. I can't use this potato, so I sell it to you. The fishmonger can't use the fish, so he sells it to me. If I don't sell the potato, the effort I put into growing it is wasted.
And that's the key point. The price you put on your produce is a measure of how much effort you wasted on them from your point of view. It's a measure not of worth but of worthlessness. (Bear with me, I recognise this sounds weird - I'll explain, but I need to get the next point sorted first).
This brings up an important point about worth, the essence of that complexity I mentioned earlier. Worth is subjective and contextual. It depends on who you are and what you need. The potato is worthless to me, because I have many potatoes. The fish, on the other hand, is worth something to me, but not to the fisherman, who has many fish.
And this isn't just a feature of commodities. It holds for everything, and particularly for books. '50 Shades of Grey' isn't worth toilet paper to me; my shelf of Janny Wurts novels, on the other hand, is one of my most treasured collections, but is wasted on my father who has little patience for fiction and none for fantasy. My vast stack of academic philosophy books would be useless and impenetrable to someone without the academic background to enable them to contextualise it all; I lack the academic background to contextualise a medical textbook, so it would all be Greek (more likely Latin) to me.
Worth is subjective. Or to put it another way, there's no such thing as 'worth'; there's only worth to someone. And different books are worth different amounts to different people (a topic which I will expand upon in another post).
I hear ya. It seems very counterintuitive to say that price measures worthlessness, even subjective worthlessness. But it's not as far from our ordinary way of thinking as you might imagine. I have two illustrations to offer.
First, think about how much of a bookstore shelf price in the traditional model goes to people other than the author. Most of the price goes on various stages of the unit production and distribution process (well, and the New York rents of big 6 publishers, but I did promise I'd stop getting angry in these debates...). It goes to bookstore employees, whose work selling you the book may be momentarily fulfilling, but ultimately doesn't have much value to them. It goes to truck drivers and warehouse staff, who certainly aren't getting much out of their work besides the wage. It goes to cover artists and layout people (who may get some fulfilment, but I would guess in almost all cases would prefer not to be working to spec). And so on.
The way I look at my price is that it's got a lot more to do with the effort I spend bringing my book to market (more on this in a moment). I get a lot out of the writing process, in terms of pleasure and satisfaction. From the end of the first draft, though, it's all downhill. Editing benefits the manuscript and that's satisfying, but it's a chore to do. Chasing up beta readers, designing covers, writing promotional copy, formatting and so on? All labour I get nothing out of. The writing itself has lots of worth to me.
Which brings me to my second point. We tend to equate 'doing it for the money' with 'not being passionate about it'. Just look at the music business - there's no worse insult than to call a musician a 'sell-out', and people complain endlessly about superficial, 'commercial' pop. Unpack this in any detail, and you find exactly the issue about worth that we're discussing.
People are subconsciously sensitive to the fact that we price primarily the labour we don't get a return on. This is also why we're generally willing to take lower-paying jobs if they have other benefits. I'm happy to do my part-time support-work job, even though it makes me less than £4000 a year, because I feel less of the effort on my part is worthless to me than if I made more money flipping burgers. I get to learn all sorts of new things from disciplines other than my own, and I get the satisfaction of knowing I've helped other people learn as well.
And if people are sensitive to what price means, then we need to consider very carefully how our prices will be interpreted. Demand a high price, and you proclaim you have less passion about your work than those of us who are happy to work for less. And I for one firmly believe art pursued with less passion ends up less good.
Two clarifications: first, yes, there is an issue with competing against the cachet which traditionally-published prices hold in readers' eyes, at the moment. Readers in general still seem to assume that content which has been vetted by a publisher is 'better' (by which I mean 'will be worth more to them') than content which hasn't. This is because the new self-publishing model is still just that: new. The distinction between the two fields has yet to become fixed in the general consciousness (again, a topic I'll come back to; whether self-publishing and trad publishing are really the same industry at all from a consumer perspective). It's a state of affairs that will pass.
Secondly, I'm not saying all (or even necessarily any) books should be free. At least, I'm not going to say that based on this argument. I make no promises not to go looking for other arguments to something like this effect. But I'm not saying that this post and argument mean all books should be free. I've already said that a lot of effort involved in bringing a book to market is stuff authors don't profit from; while I enjoy outlining and drafting, I get nothing but a headache from formatting, and I seldom feel that the results of cover design justify the effort except insofar as having a good cover makes my end product more viable.
As and when I put a price on my work, it's all that stuff I'll be charging for. I'm not going to charge for a story that I wrote for my own benefit, one that I have a passion for and care deeply about - and I'm not going to bring any other kind of story to market.
(And if I do sell out at some future point, feel free to spam me to death with links back to this post ;D).
|
OPCFW_CODE
|
Novel–Release that Witch–Release that Witch
Chapter 1428: Criteria For Balance
Calling them cases was not an exaggeration: not only did the boxes have clear lids and openings, the entire thing was about 30 cm long and could even be carried in one hand. The appearance of the two boxes was far from the kind of novel design that makes people exclaim in admiration, and they even lacked the splendour to be hailed as 'revolutionary.'
A battle of destiny would typically reveal its ferocity only at that moment.
He had raised his doubts to Valkries, but received a harsh retort from her.
All at once, the Bloody Moon at the top of the sky faded without a trace, as if it had never existed.
After listening to her, Roland revealed a smile and said to Tilly, “Don’t rush back today. Stay the night in the castle. Coincidentally, I have something new to pass to you.”
Though he had long prepared her for it, she never expected the final product to be so compact! She had anticipated the device occupying a large section of space when installed on a plane. After all, the massive scale of the iron tower project had been on full display; to shrink it to the size of a ‘Fire of Heaven’ was already an inconceivable idea.
Not long after, the three separated themselves between the outside and inside of the experimental laboratory and talked. Soon, the room was filled with a lighthearted atmosphere.
The key to the transmitter-receiver was the vacuum tube, which was capable of amplifying, detecting, and oscillating. It also marked humanity's entry into the Electronic Age, and Roland naturally knew how difficult it was to achieve. The shiny scrap metal piled up outside the North Slope laboratory was proof of that. Moreover, he could hardly guide them in matters of electronic engineering as he had before; a large part of the project relied on Anna slowly feeling her way forward through her own experiments.
Roland picked up the receiver. It was Anna.
Roland saw her doubts and opened the lid of the box.
In Roland’s view, the success of the wireless transmitter-receiver was far more crucial than the new 20mm autocannons: real-time communication greatly broadened coordination and allowed the pilots to execute aerial tactics. With precise coordination, the fleet’s fighting strength was effectively raised by a notch. It could even be said that only once the Aerial Knights became capable of accomplishing this could they be hailed as a true air force.
Now, humanity was once again standing on the same precipice.
But Roland knew that the war was not over.
The only difference between the two cases and ordinary ones was that their front panels were riddled with rows of bright, metal-coated buttons and knobs.
To Roland’s understanding, the general plan was about the same as directly introducing Internet slang to children born after the nineties.
Several centuries ago, the demons had grasped the opportunity, when the Bloody Moon shone over the lands, to build their obelisks, silently waiting for the pillars to grow into imposing monuments. Only after stabilizing their foothold did they officially begin their assault.
But this time, things were completely different.
Roland laughed out in embarrassment. If the written language of the Four Kingdoms could be said to look like deformed earthworms, then the demonic language was even more complex; some of its characters even resembled witchcraft symbols. Given that Roland had relied entirely on memory to copy it down, and that his strokes and lines were unpracticed, the whole piece of writing looked even messier. Who knew whether Hackzord would ever make out what he had written.
In the workshop, Tilly saw the ‘revolutionary’ new product described by Roland: two rectangular wooden boxes.
Right as she was about to leave, the North Slope Laboratory telephone on the office desk suddenly rang.
“This is a mobile, wireless transmission device,” Anna explained. “It is the equivalent of a shrunken iron cable tower; its advantages are that it can pick up sound directly and, of course, its much greater range.”
“It was all thanks to Sister Anna working late every day that the prototype could be made so quickly,” added the assistant, Lucia. “Originally, the vacuum tubes required the vacuum to be maintained and many parts to be fitted inside. It would have been impossible without the help of her Blackfire.”
“At any rate, trying it out will not require much time or effort…” Roland feigned an indifferent expression. “What if it succeeds?”
Valkries believed within the feasibility of an human being copying the demonic people, because it proved she had not been shed from the Whole world of Brain plus tell you her very own predicament by being able to pa.s.s info through Roland. Whenever they experienced utilized her handwriting as a substitute, it could possibly easily spook the careful Hackzord—if she could send out characters, why not merely depart the World of Imagination right?
“It is to buy the Atmosphere Lord to try out his far better to avoid undertaking all out war, then i need the Standard Staff to think of methods to give this to your demons.”
Heretic Doctor Zihou
Tilly obviously saw this point, and after finishing the experiment somewhat unwillingly, she urged for her private plane to be equipped with more wireless transmitter-receivers.
After hearing her, Roland revealed a grin and said to Tilly, “Don’t rush back today. Stay the night in the castle. Coincidentally, I have something new to pass to you.”
Novel: Release that Witch
Sound proofing: mass-spring-mass or mass-mass-spring?
I am trying to improve the sound proofing of a metal box. The box is made of steel of thickness $0.8 \;\mathrm{mm}$. I have additional sheets of steel with $3 \;\mathrm{mm}$ thickness for reinforcement. For vibration reduction, I have access to some bitumen mats (anti-vibration mats for damping cars) and some alubutyl (butyl rubber with an aluminum foil on top).
What would be the best way to improve the insulation properties of the box?
(A) Glue both steel plates together $(0.8 \;\mathrm{mm}+3.0 \;\mathrm{mm})$, then add bitumen/alubutyl to reduce vibration.
(B) Put the anti-vibration mat right in the middle to achieve a kind of mass-spring system.
The bitumen mats are pretty stiff and thus will probably not be sufficient to act as a damping spring. The alubutyl, however, is more like gooey rubber and might be better for attenuating the mechanical vibration. On the other hand, it is only $2 \;\mathrm{mm}$ thick.
My take is that (A) might be a more rigid construction (if glued tightly), while (B) might be better at blocking mechanical vibration traveling from the inside to the outside.
The noise source I want to isolate (a computer hard drive) introduces both structure-borne and air-borne noise into the case.
I am looking forward to your suggestions.
If rules similar to those in electronic engineering applied, then stacking mass-spring-mass-spring... would yield exponential reduction of noise; adding mass+mass+...+spring+spring+... would result in only linear reduction.
Here are some general rules of thumb:
Before you start designing your sound blocker, you need to know the spectrum of the sounds you wish to block. This will determine which wall design will do the trick. For example, if the noise spectrum consists primarily of high frequencies (above, say, $3000 \;\text{Hz}$), then two sheets of steel with a stiff rubber layer in between will be best. If the spectrum consists primarily of low frequencies (say, $500 \;\text{Hz}$ and below), then sheets of steel with soft rubber sandwiched in between will be best. For very low frequencies (say, $100 \;\text{Hz}$ and below), lots of mass (many steel sheets with soft rubber in between) will be needed. For very high frequencies (say, $6000 \;\text{Hz}$ and above), a single sheet of steel and a single sheet of stiff rubber will be best.
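The mass-dependent part of these rules can be put in rough numbers with the empirical "mass law" for a single panel. This is only a sketch: it assumes normal incidence, the common approximation $TL \approx 20\log_{10}(f\,m) - 47\ \mathrm{dB}$, and a steel density of $7850 \;\mathrm{kg/m^3}$.

```python
import math

STEEL_DENSITY = 7850.0  # kg/m^3, assumed

def mass_law_tl(f_hz, thickness_m, density=STEEL_DENSITY):
    """Approximate normal-incidence transmission loss (dB) of a single
    panel via the empirical mass law: TL ~= 20*log10(f * m) - 47,
    where m is the surface density in kg/m^2."""
    m = thickness_m * density  # surface density, kg/m^2
    return 20 * math.log10(f_hz * m) - 47

# The two sheets from the question, evaluated at 1 kHz:
for t in (0.0008, 0.003):
    print(f"{t * 1000:.1f} mm steel @ 1 kHz: {mass_law_tl(1000, t):.1f} dB")
```

Doubling either the frequency or the surface mass buys roughly 6 dB, which is why piling on single sheets helps far less, at the frequencies where it works, than a well-decoupled mass-spring-mass build.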
Another rule of thumb: DON'T bolt/screw your noisemaker to a larger frame. All you are doing is giving it a bigger "speaker" to distribute the noise. Mount the drive on vibration-absorbent feet. This removes 95% or so of audible noise, and the lack of such vibration absorption is the cause of most HDD noise. It also helps protect the HDD against damage from impacts on the casing. I have no clue why most manufacturers want to bolt their drives to a metal frame.
Try sheets of cardboard. They don't transmit sound well.
Consider: if you leave an opening for cooling air flow, you have an opening for sound. Open-cell foam might help? Pointing the opening away from listeners?
Computer fans are typically noisier than hard drives. But if it is really the drive, you can get a solid state drive.
Thanks for your reply.
I have actually had bad experiences with cardboard, which appears to transmit mechanical vibration quite well without having any real acoustic benefit.
My initial question is how the 0.8 mm steel can be made less prone to picking up vibration from the air or from solid objects (the mounting of the noise source). I think that adding more mass and making the frame more rigid will help suppress some of the resonances. But I am still wondering whether setup (A) or (B) will do the better job after all.
I want to point out to anyone who comes here--quickly, while it is still early enough to do some good--that there is a worthy project on Kickstarter that needs a little financial support. A Scottish company called Runtime Revolution, or RunRev for short, wants to clean up the source code for its main product and release it to the world under an Open Source license. The product is a programming environment called LiveCode. All told, it will cost about a half-million dollars to get the thing done. (More exactly, they're asking for £350,000). Go here to pledge.
Of course, you want to know why you should pledge before you start reaching for your wallet. The Kickstarter page has some videos to give an idea of what LiveCode is like. However, to give some idea of the potential of LiveCode made free, I have to tell you a true story.
Back in 1987, Apple Computer released a program called HyperCard. This was the brainchild of Bill Atkinson and Dan Winkler. Atkinson had written the very first Mac application, a paint program called MacPaint. He wanted to enhance it so that people could click on parts of a picture to see details or text descriptions. Dan Winkler created an English-like programming language called Hypertalk so that objects placed on the picture (such as buttons or text fields or backgrounds) could each contain a program. Click the button, and its program would execute. Click the text field, and start typing in it. Have the button's program alter the text in the text field. There were possibilities here.
The pair got the bit between their teeth and added more ideas. Why have only one picture in a file when you could have any number of them, and flip between them like cards in a rolodex? What if each of these cards had two layers, a foreground layer specific to one card and a background layer that can be shared between cards? What if both layers, and the stack of cards itself, were objects that could hold bits of programming?
Finally, what if the HyperCard program were free? What would people do with it?
Well, the pair, somehow, were able to demand that HyperCard be released for free, and the odd program found its audience. At the lowest level, people approached it as a paint program, a slightly advanced MacPaint. Later, they might add text fields to label or describe the painting, or add in an essay or poem or short story. Sound effects and music might come into it. This could involve placing buttons with very simple scripts such as
play "cat's miaow"
Alternatively, the button might have an arrow icon and, when clicked, take the user to the next card in the stack. This script would do the trick:
go to next card
The user would naturally, with increasing confidence, try more and more complicated scripts, or study the scripts of other users, copy and paste buttons from one stack to another, and gradually become expert.
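To give a flavour of where those more complicated scripts led, a button handler combining the two one-liners above might look like this (the sound name is the one from the earlier example; the rest is standard HyperTalk):

on mouseUp
  play "cat's miaow"
  wait until the sound is done
  visual effect dissolve
  go to next card
end mouseUp

Five lines, yet it already mixes sound, a screen transition, and navigation, which is roughly how HyperCard users levelled up: one borrowed command at a time.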
Kids loved to make games with it. Teachers made self-grading tests (I did that), chat programs for the local network, presentation programs (where did the idea for PowerPoint come from, do you think?), and study units. Businesses made information kiosk programs. I did a company handbook, an e-book maker, a multi-file search-and-replace program, a recipe book, and many other programs. In short, I used to say that HyperCard was the only program I needed. These days, with the internet being a big part of everyone's computing life, I'd say that HyperCard and a Web Browser were the only programs I need.
That is, I'd say that if I could still get and run HyperCard. I don't think that Steve Jobs ever really "got" HyperCard, and Apple finally axed it in March 2004, years after his return to the company. I never really forgave him for that. Quite a few others went through the grieving process. Some tried, with mixed success, to recreate the old mixture of paint tools, text tools, stack of cards, and programming language. The most successful of these attempts eventually became LiveCode.
LiveCode's programming language is basically HyperTalk, but with a much larger vocabulary. Many old HyperCard stacks can be imported into LiveCode and simply work; others can be tinkered with until they work again. The paint tools are there, but are in full colour instead of black and white. The programs can be made to work in Windows, Mac, Linux, or on a cell phone or iPad. In other words, LiveCode is, to a great degree, a modern version of HyperCard.
What it lacks, that HyperCard had, is free access to an audience of curious and uncommitted users: children, businessmen, parents, teachers, and other amateurs of every stripe. If LiveCode becomes Open Source, it may get that aspect of HyperCard going, too.
UDE (Ubuntu Diolinux Edition)
This is nothing more than an Ubuntu 12.04 remaster made by me, with the programs I use most often, since it takes a long time to reinstall every program after a reformat. I created this live DVD (a DVD due to the size of the ISO) with all programs installed, in the best plug-and-play style.
A brief description of what we have here:
Ubuntu 12.04 with Unity as Graphical Interface (2D and 3D)
With all updates as of 06/04/2012 at 17:00
Kernel 3.2.0-24-generic 64 Bits
The entire system base has been retained; some packages have been added and some removed for optimization.
If you want to know more about the system, keep reading; if you don't have the patience, you can click here.
Software / Packages removed.
- Rhythmbox (Music Player)
- Video Lens (Ubuntu Dash Video Search)
- DejaDup (Backup and Restore)
- unity-scope-musicstores (useless for my use)
- Bluetooth monitoring process disabled (since my computer does not have these devices)
- gnome-online-accounts (Unused by me)
- Gwibber (Microblogging client, too heavy for its job in my opinion)
- Empathy (Messaging client, replaced by Pidgin)
If you want to recover or reinstall (or undo) any modifications, or even increase the optimization take a look at this post (how to optimize Ubuntu)
Added Software / Packages
- Furius ISO Mount (Daemon Tools-style ISO mounter)
- Weather Indicator
- BlueFish Editor (Programming)
- Inkscape (Vector Graphic)
- GIMP 2.8 (Photoshop-style graphical editor, same with single window)
- Google Chrome (Browser)
- Pidgin (IM Standard Client)
- Torrent Search
- Skype (Communication about this protocol)
- AcidRip (DVD Ripper)
- Audacity (Audio Editor)
- Arista Transcoder (Video Converter)
- Sound Converter
- Open Shot (Video Editor)
- VLC Media Player (Multimedia Player)
- Clementine (Default jukebox instead of Rhythmbox)
- OpenJDK 7 (the open-source Java)
- Gnome-tweak-tool (Configuration and Customization Tool)
- Ubuntu-Tweak (Configuration Tool, allows you to add a desktop show button in Unity toolbar for example)
- Mount Manager (Mount Windows partitions)
- NTFS-Config (A complementary option for mounting Windows partitions)
- Synaptic (Package Manager, an old acquaintance)
- Indicator Keys (CAPS, NUM and SCROLL LOCK Indicator)
- VirtualBox with USB Extended Pack (Virtual Machine)
- Wine (Run Windows programs on Linux)
- Winetricks (A Wine add-on to download components needed to run some programs)
- Gdebi (Graphic Installer of .deb Packages)
- Ubuntu restricted Extras (To get Ubuntu ready to play all kinds of media, mp3, avi, etc …)
- Remastersys itself for anyone who wants to make their own modification.
- Tweaks to some Gedit plugins to open faster and some nautilus scripts to make it easy to create shortcuts in Ubuntu.
The (.ISO) file is hosted on Google Drive with approximately 1.6 GB.
It's just a customization; I didn't even change the default wallpaper, only the boot screen and the name shown in the installer, out of pure ego. =P
I hope you enjoy it and that it saves you some time installing software.
NOTE: Since Google Drive cannot scan the file for viruses due to its size, you will need to click "Download anyway" to download.
Founder of blog and channel Diolinux, passionate about technology and games.
#include "MultipleSelection.h"
// Aggregates a set of Selection widgets and forwards events, updates,
// and draw calls to each of them.
MultipleSelection::MultipleSelection()
{
}
void MultipleSelection::handleEvents(sf::Event e, const sf::RenderWindow& window, sf::Vector2f displacement)
{
for (auto& selection : m_selections)
selection->handleEvents(e, window, displacement);
}
void MultipleSelection::update(const sf::Time& deltaTime)
{
for (auto& selection : m_selections)
selection->update(deltaTime);
}
void MultipleSelection::draw(sf::RenderTarget& target)
{
for (auto& selection : m_selections)
selection->draw(target);
}
void MultipleSelection::addSelections(std::vector<std::unique_ptr<Selection>> _selections)
{
m_selections = std::move(_selections);
}
// Takes ownership of a single selection and appends it to the collection.
void MultipleSelection::addSelection(std::unique_ptr<Selection> _selection)
{
	m_selections.push_back(std::move(_selection));
}
std::vector<const bool*> MultipleSelection::getPointersToValues()
{
std::vector<const bool*> vecOfPointers;
vecOfPointers.reserve(m_selections.size());
for (auto& selection : m_selections)
vecOfPointers.push_back(selection->getPointerToSelected());
return vecOfPointers;
}
The Cosmos-SDK is a framework for building multi-asset public Proof-of-Stake (PoS) blockchains, like the Cosmos Hub, as well as permissioned Proof-of-Authority (PoA) blockchains.
The goal of the Cosmos SDK is to allow developers to easily create custom blockchains from scratch that can natively interoperate with other blockchains. We envision the SDK as the npm-like framework to build secure blockchain applications on top of Tendermint.
It is based on two major principles:
Composability: Anyone can create a module for the Cosmos-SDK, and integrating the already-built modules is as simple as importing them into your blockchain application.
Capabilities: The SDK is inspired by capabilities-based security, and informed by years of wrestling with blockchain state-machines. Most developers will need to access other 3rd party modules when building their own modules. Given that the Cosmos-SDK is an open framework, some of the modules may be malicious, which means there is a need for security principles to reason about inter-module interactions. These principles are based on object-capabilities. In practice, this means that instead of having each module keep an access control list for other modules, each module implements special objects called keepers that can be passed to other modules to grant a pre-defined set of capabilities. For example, if an instance of module A's keepers is passed to module B, the latter will be able to call a restricted set of module A's functions. The capabilities of each keeper are defined by the module's developer, and it's the developer's job to understand and audit the safety of foreign code from 3rd party modules based on the capabilities they are passing into each third party module. For a deeper look at capabilities, jump to this section.
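The keeper idea can be sketched in a few lines of plain Python (illustrative only; real SDK keepers are Go types, and every name below is made up). Module A owns its full state, but hands module B an object exposing only the capabilities it chooses to grant:

```python
class BankState:
    """Module A's full state, including privileged operations."""
    def __init__(self):
        self.balances = {"alice": 10, "bob": 0}

    def transfer(self, sender, receiver, amount):
        if self.balances.get(sender, 0) >= amount:
            self.balances[sender] -= amount
            self.balances[receiver] = self.balances.get(receiver, 0) + amount

    def mint(self, receiver, amount):
        # Privileged: deliberately NOT exposed through the keeper below.
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

class BankSendKeeper:
    """The 'keeper' module A passes to module B: it grants transfer only."""
    def __init__(self, bank):
        self._bank = bank

    def transfer(self, sender, receiver, amount):
        self._bank.transfer(sender, receiver, amount)

bank = BankState()
keeper = BankSendKeeper(bank)       # module B receives only this object
keeper.transfer("alice", "bob", 4)  # granted capability
print(bank.balances)                # mint() is simply not reachable via keeper
```

Python cannot truly hide `_bank`, of course; in Go the restriction is enforced by the type system, which is part of why the pattern works there.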
SDK Application Architecture
At its core, a blockchain is a replicated deterministic state machine.
A state machine is a computer science concept whereby a machine can have multiple states, but only one at any given time. There is a state, which describes the current state of the system, and transactions, that trigger state transitions. Given a state S and a transaction T, the state machine will return a new state S'.
+--------+                 +--------+
|        |                 |        |
|   S    +---------------->+   S'   |
|        |    apply(T)     |        |
+--------+                 +--------+
In practice, the transactions are bundled in blocks to make the process more efficient. Given a state S and a block of transactions B, the state machine will return a new state S'.
+--------+                              +--------+
|        |                              |        |
|   S    +----------------------------->|   S'   |
|        |  For each T in B: apply(T)   |        |
+--------+                              +--------+
In a blockchain context, the state machine is deterministic. This means that if you start at a given state and replay the same sequence of transactions, you will always end up with the same final state.
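As a minimal illustration of this determinism (plain Python, not SDK code; the balance-transfer transaction type is made up for the example):

```python
def apply_tx(state, tx):
    """Return the new state S' from applying transaction T to state S."""
    sender, receiver, amount = tx
    new_state = dict(state)  # copy so S itself is never mutated
    if new_state.get(sender, 0) >= amount:  # reject invalid transactions
        new_state[sender] -= amount
        new_state[receiver] = new_state.get(receiver, 0) + amount
    return new_state

def apply_block(state, block):
    """For each T in B: apply(T)."""
    for tx in block:
        state = apply_tx(state, tx)
    return state

genesis = {"alice": 10, "bob": 0}
block = [("alice", "bob", 3), ("bob", "alice", 1)]
# Replaying the same block from the same state always yields the same S'.
print(apply_block(genesis, block))
```

Because `apply_block` depends only on its inputs, every node replaying the same blocks from genesis converges on the same state, which is exactly the property the consensus engine relies on.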
The Cosmos SDK gives you maximum flexibility to define the state of your application, transaction types and state transition functions. The process of building the state-machine with the SDK will be described more in depth in the following sections. But first, let us see how it is replicated using Tendermint.
As a developer, you just have to define the state machine using the Cosmos-SDK, and Tendermint will handle replication over the network for you.
                ^  +-------------------------------+  ^
                |  |                               |  |
                |  |  State-machine = Application  |  |   Built with Cosmos SDK
                |  |                               |  |
                |  +-------------------------------+  v
                |  |                               |  ^
Blockchain node |  |           Consensus           |  |
                |  |                               |  |
                |  +-------------------------------+  |   Tendermint Core
                |  |                               |  |
                |  |          Networking           |  |
                |  |                               |  |
                v  +-------------------------------+  v
Tendermint is an application-agnostic engine that is responsible for handling the networking and consensus layers of your blockchain. In practice, this means that Tendermint is responsible for propagating and ordering transaction bytes. Tendermint Core relies on an eponymous Byzantine-Fault-Tolerant (BFT) algorithm to reach consensus on the order of transactions. For more on Tendermint, click here.
The Tendermint consensus algorithm works with a set of special nodes called Validators. Validators are responsible for adding blocks of transactions to the blockchain. At any given block, there is a validator set V. A validator in V is chosen by the algorithm to be the proposer of the next block. This block is considered valid if more than two thirds of V signed a prevote and a precommit on it, and if all the transactions that it contains are valid. The validator set can be changed by rules written in the state-machine. For a deeper look at the algorithm, click here.
The main part of a Cosmos SDK application is a blockchain daemon that is run by each node in the network locally. If less than one third of the validator set is byzantine (i.e. malicious), then each node should obtain the same result when querying the state at the same time.
Tendermint passes transactions from the network to the application through an interface called the ABCI, which the application must implement.
+---------------------+
|                     |
|     Application     |
|                     |
+--------+---+--------+
         ^   |
         |   | ABCI
         |   v
+--------+---+--------+
|                     |
|                     |
|     Tendermint      |
|                     |
|                     |
+---------------------+
Note that Tendermint only handles transaction bytes. It has no knowledge of what these bytes mean. All Tendermint does is order these transaction bytes deterministically. Tendermint passes the bytes to the application via the ABCI, and expects a return code to inform it if the messages contained in the transactions were successfully processed or not.
Here are the most important messages of the ABCI:
CheckTx: When a transaction is received by Tendermint Core, it is passed to the application to check if a few basic requirements are met. CheckTx is used to protect the mempool of full-nodes against spam. A special handler called the "Ante Handler" is used to execute a series of validation steps, such as checking for sufficient fees and validating the signatures. If the check is valid, the transaction is added to the mempool and relayed to peer nodes. Note that transactions are not processed (i.e. no modification of the state occurs) with CheckTx, since they have not been included in a block yet.
DeliverTx: When a valid block is received by Tendermint Core, each transaction in the block is passed to the application via DeliverTx to be processed. It is during this stage that the state transitions occur. The "Ante Handler" executes again, along with the actual handlers for each message in the transaction.
BeginBlock/EndBlock: These messages are executed at the beginning and the end of each block, whether the block contains transactions or not. They are useful for triggering the automatic execution of logic. Proceed with caution though, as computationally expensive loops could slow down your blockchain, or even freeze it if the loop is infinite.
For a more detailed view of the ABCI methods and types, click here.
Any application built on Tendermint needs to implement the ABCI interface in order to communicate with the underlying local Tendermint engine. Fortunately, you do not have to implement the ABCI interface. The Cosmos SDK provides a boilerplate implementation of it in the form of baseapp.
- Little Snitch is a paid app but the price is definitely worth it, especially in these times when malware attacks are getting more aggressive and rampant. What is Little Snitch for Mac? Little Snitch is a nifty monitoring tool for outgoing traffic that was developed by Objective Development, a software development company based in Vienna, Austria.
- Little Snitch gives you control over your private outgoing data.
- Little Snitch 4.4.3 for Mac Review Little Snitch 4.4.3 for macOS is a trustworthy and handy program that helps users to monitor network traffic and block various connections in order to protect privacy. It is considered one of the best tools for tracking network traffic.
Little Snitch is a popular Mac app that detects outbound connections and lets you set up rules to block those connections. Little Snitch is an application firewall able to detect applications that try to connect to the Internet or other networks, and then prompt the user to decide if they want to allow or block those connection attempts. It is a super-useful addition to OS X because you can directly observe and control the network traffic on your machine.
Little Snitch is probably the best host-based application firewall for macOS. I’ve been using it for quite a while but recently ditched it when I found a free alternative that works equally well.
If you’re using the free version of Little Snitch, you have to deal with the fact that it automatically quits after every three hours. To avoid this, you have to buy the full version. If you’ve been looking for a free Little Snitch alternative that works with macOS Mojave and previous macOS versions, Lulu is what you need.
Unlike Little Snitch, Lulu is an open source software with its source code already on GitHub. This means that it’s not just free, but also anyone can contribute to its development.
Same approach to application firewall
If you’ve been using Little Snitch before now, you shouldn’t have a problem using Lulu, since Lulu takes the same approach to application firewalling.
After installing it, you can choose to allow all default Apple apps and existing third-party apps to connect to the Internet without confirmation.
The choice you make here depends on how you wish to use the program. Personally, I only allow Apple-signed programs to connect automatically, all third-party apps require manual confirmation to create rules.
Clicking the Block or Allow button determines whether the application will access the Internet or not. Checking the temporarily box makes the rule temporary for that specific program ID. It resets when you quit the app or restart your computer and the dialogue box will pop up again.
Just like Little Snitch, it has a panel where you can remove existing rules and add new ones manually:
Ever since I upgraded to macOS Mojave, I’ve been using the new system-wide dark theme which Lulu neatly blends in with.
For a free app, Lulu is incredibly well-built. It’s been about a week now and I haven’t encountered a bug. If you don’t want to spend a dime on a firewall app, this free little alternative is really worth trying. You can download it from the official website or take a look at the source code on GitHub.
About the App
- App name: Little Snitch
- App description: little-snitch (App: Not Available)
- App website: https://www.obdev.at/products/littlesnitch/
Install the App
- Press Command+Space, type Terminal, and press the enter/return key.
- Run in Terminal app:
ruby -e '$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)' < /dev/null 2> /dev/null ; brew install caskroom/cask/brew-cask 2> /dev/null
and press enter/return key.
If the screen prompts you to enter a password, please enter your Mac's user password to continue. When you type the password, it won't be displayed on screen, but the system would accept it. So just type your password and press ENTER/RETURN key. Then wait for the command to finish.
brew cask install little-snitch
Done! You can now use Little Snitch.
import {setTimeout} from 'node:timers/promises';
import test from 'ava';
import getStream, {MaxBufferError} from '../source/index.js';
import {createStream} from './helpers/index.js';
import {
fixtureString,
fixtureBuffer,
fixtureTypedArray,
fixtureArrayBuffer,
fixtureUint16Array,
fixtureDataView,
} from './fixtures/index.js';
const setupString = (streamDef, options) => getStream(createStream(streamDef), options);
const generator = async function * () {
yield 'a';
await setTimeout(0);
yield 'b';
};
test('works with async iterable', async t => {
const result = await getStream(generator());
t.is(result, 'ab');
});
test('get stream with mixed chunk types', async t => {
const fixtures = [fixtureString, fixtureBuffer, fixtureArrayBuffer, fixtureTypedArray, fixtureUint16Array, fixtureDataView];
const result = await setupString(fixtures);
t.is(result, fixtureString.repeat(fixtures.length));
});
test('getStream should not affect additional listeners attached to the stream', async t => {
t.plan(3);
const fixture = createStream(['foo', 'bar']);
fixture.on('data', chunk => t.true(typeof chunk === 'string'));
t.is(await getStream(fixture), 'foobar');
});
const errorStream = async function * () {
yield fixtureString;
await setTimeout(0);
throw new Error('test');
};
test('set error.bufferedData when stream errors', async t => {
const {bufferedData} = await t.throwsAsync(setupString(errorStream));
t.is(bufferedData, fixtureString);
});
const infiniteIteration = async function * () {
while (true) {
// eslint-disable-next-line no-await-in-loop
await setTimeout(0);
yield '.';
}
};
test('handles infinite stream', async t => {
await t.throwsAsync(setupString(infiniteIteration, {maxBuffer: 1}), {instanceOf: MaxBufferError});
});
const firstArgumentCheck = async (t, firstArgument) => {
await t.throwsAsync(getStream(firstArgument), {message: /first argument/});
};
test('Throws if the first argument is undefined', firstArgumentCheck, undefined);
test('Throws if the first argument is null', firstArgumentCheck, null);
test('Throws if the first argument is a string', firstArgumentCheck, '');
test('Throws if the first argument is an array', firstArgumentCheck, []);
Updated March 21, 2023
Introduction on Hive Drop Table
The keyword “DROP” refers to deletion. To delete data, the data must first be present in Hive.
In Hadoop, we have two functionalities:
- Data Storage
- Data Processing
For data storage, HDFS (Hadoop Distributed File System) comes into the picture. Now when we say we have data in hive table it means two things:
- Data is in HDFS
- We have a hive table created over that HDFS file, and we load that HDFS file’s data into the hive table.
Basically, for a Hive table to hold data, the data file is a prerequisite. In this article, we will see how to drop tables in Hive, what happens when a table is dropped, and everything related to dropping tables in Hive.
Types of Drop Table in Hive
In the hive, there are two types of tables:
- Internal Table or Managed Table
- External Table or Unmanaged Table
Managed Table/Internal Table
- In Hive, “/user/hive/warehouse” is the default directory. Internal tables are stored in this directory by default, and we do not have to provide a location manually while creating the table.
- “Drop table” command deletes the data permanently.
- Hive manages all the security for managed tables.
I have a table called “codes” already present in the “/user/hive/warehouse” directory.
To check if the existing table is managed or unmanaged, we could use the below command:
Describe formatted table_name;
Let us see the data presented in the table “codes”.
First, using hive command-
Second, using Hue (Hadoop User Experience a Web UI)
Delete command: Drop table table_name;
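As an aside, standard Hive syntax (not specific to this example) also lets the drop command tolerate a missing table and bypass the HDFS trash:

Drop table if exists table_name purge;

Without purge, the data of a managed table is moved to the user’s .Trash directory, from where it can still be recovered; with purge it is deleted immediately.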
Now, if I want to select the data from “codes” it will give me an error because the table is deleted.
Also, we will no longer be able to see this table in the default directory, “/user/hive/warehouse”.
Unmanaged table/ External table
- For external tables, we are required to provide the path where the table’s data is stored, using the keyword ‘location’ in the create table command.
CREATE EXTERNAL TABLE stg_s2_json.products (
  product_no string,
  product_name string,
  description string,
  active string,
  created_date string,
  updated_date string)
row format delimited fields terminated by ‘,’
- Hive only deletes the metadata; the data itself is left untouched.
- These tables could be used by anyone who has access to HDFS, so they need to manage security at the folder level.
For understanding the dropping of the external table, we will use the table “products”.
Let’s check if the table is internal or external. Again, “describe formatted table_name” command.
Observe “limit 10” in the select command. Table Products contains the below data:
To check it in Hue, it looks like this:
Let’s see what happens when we drop this table:
Drop table table_name;
Now, if trying to retrieve the table’s data, It throws an error.
First, using the “select” command on the terminal throws an error, which means the metadata for the external table has been deleted.
Observe Error here:
Second, checking the state of the data in Hue, the file “products.json” is still present in HDFS, which means the data is permanent.
I am going to make it easy and provide you with key points for both kinds of tables. You decide which type suits your requirements.
| Internal Table | External Table |
| --- | --- |
| Also called “Managed Table.” | Also called “Unmanaged Table.” |
| No need to provide a location; Hive’s default directory manages this data. | Need to provide a location. |
| Deletes the table’s metadata as well as the data (data is temporary). | Hive will leave the data untouched (data is permanent). |
| Hive itself controls the security of the table. | Need to manage security at the folder level. |
This is a guide to Hive Drop Table. Here we discuss a brief overview with types of Drop Table in Hive along with Syntax respectively. You can also go through our other suggested articles to learn more –
I've only had it for 10 minutes but it's better than I expected. I was looking for something I can launch and add a quick note (without clicking an extra step to add a txt note vs image, etc). This launches quick and is simple. Even better, I can connect to notes online. (If I type up my grocery list on the laptop, they show up in my app. Yay!)
It's a must app for every smartphone. It has very fast indexing of words and highlighting inside notes. One small request: please add a tag-searching option and multi-tag checkboxes inside each note, for faster work progress. I deleted other note-taking apps as it has the best simple organizing tools for making notes. Please improve with the options which I mentioned. Thanks a lot.
What a great little note taking app! I use it daily as a rough note book. My job as a SysAdmin and developer requires me to capture a lot of notes to & from my Mac and remote servers. Simplenote makes it a breeze to paste code/log snippets from here & there. Works like a charm 😊 An app from WordPress developers can never disappoint.
Great app. Very simple. Have used it for a long time without issue. Exactly what I wanted. However, I see that new users are required to make an account to use the app. If this is implemented to older downloads, I will be uninstalling and finding a different app. Making an account should be an optional feature for extra security. Should not be mandatory for a note taking app. Will no longer be "simple" if forced account creation is necessary. Food for thought.
1. Full cross platform Markdown support. Not much use if it doesn't work everywhere. 2. Full open file access with options to sync across Dropbox, Drive, Synthing, etc. and note encryption. Frankly, having the apps open source and forcing users to use your proprietary sync service across your own servers just comes across as suspicious. What do you have to gain when your apps are open and free anyway?
I love this app and it's a 5-star for functionality; for taking notes and writing up and editing and all the key writing tasks it is perfect. The markdown support is top, and the live link sharing is a great feature. Plus it has a great desktop app experience that matches the mobile app. It's perfect, and has the exact feature set I'm looking for. However, I hate this app because twice I've been signed in on different devices and somehow the sync has deleted work on the device I'm working on with an earlier draft from the other device. So it's a 1-star for me too. It's mortifying, and I'm terrified of losing something really essential. But when I look around the market no app comes close (Journey pro comes pretty close, but it's not quite right). I have tried every writing app in the play store and most in the Apple store. The best apps are all on Apple sadly (Drafts/Bear etc) but this is the best on Android and 99.8% excellent. And it's still terrifying.
* New users of the app must sign in with a Simplenote account.
* Minor UI and reliability improvements.
Microsoft OneNote is your notebook for capturing what's important in your life
Take handwritten notes for class, work, or fun! Easily markup PDFs and share ✏️️
Component Architecture -- Follow-up post
After posting my thoughts on component architectures I asked Stu Herbert to provide me with any comments he had on this particular topic, having been the original inspiration. He was kind enough to do so and I have extracted some of his thoughts and weaved them into this post along with some other thoughts I've had.
First of all, I did a facepalm when I realized there were things I wanted to talk about in the original post that I had missed. In his presentation at PHP UK Conference, Stu pointed out that PHP has not made the commitment to reusable components like the other major scripting languages currently used for web development, Ruby and Python. Ruby has its excellent RubyGems system to allow the installation and distribution of components written in Ruby. Python has two solutions that I am aware of: EasyInstall and the Python Package Index. They both serve the same purpose: allowing the installation and distribution of 3rd party components. In PHP, PEAR is the system we should all be using for doing this. The reasons why are interesting, and I'd like to share my thoughts before we see what Stu had to say.
It seems to me that the differences between PEAR and the solutions offered in Ruby and Python can be thrown into one of two piles: cultural and technical. On the cultural side, both Python and Ruby have encouraged developers to use these 3rd party systems as the primary means of distributing code. I think if you look at the popular components available in something like Rails, you would be hard-pressed to find one that did not exist as a gem. My early experiments with Rails back in 2004 made me think that the gem system was the perfect way to handle it. Sure, you can end up in dependency hell trying to figure out what gems go with what other gems, but I do not think there is ever an easy solution to that problem.
When you look at the technical issues, this is where I think PEAR breaks down. As far as I can tell, to make your component available to install with PEAR you have to create your own PEAR channel. What? Really?!? Am I the only one who thinks that this is an unnecessary limitation? When I added Djaml to PyPI, all I had to do was create two metadata files in a specific format and then push it up to PyPI using tools that are provided by the same CLI utility you install other packages with. Bingo presto, my package was now available to anyone who wanted to use it. I didn't have to set up my own channel. To me, this is the main reason why PEAR is not the dominant installation tool that it should be.
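For comparison, here is roughly what the PyPI metadata amounted to — a minimal sketch assuming setuptools, with hypothetical version and description values:

```python
# setup.py -- minimal packaging metadata for a small package like Djaml
from setuptools import setup

setup(
    name="djaml",                                   # the package name on PyPI
    version="0.1.0",                                # hypothetical version
    description="Haml-style templates for Django",  # hypothetical summary
    packages=["djaml"],                             # the code to distribute
)
```

With metadata like this in place, the standard distutils/setuptools tooling of that era (`python setup.py register sdist upload`) pushed the package straight to PyPI — no channel of your own required.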
In a perfect world I would like to see all the major PHP frameworks make themselves available via PEAR as their main method of distribution. Wishful thinking, I know.
Okay, so now time for us to hear from Stu:
Your question "how do you decide what stuff can be extracted out and built into a component?" merits more than just an email ... I'm sure this is a conference talk / tutorial day topic in its own right :) Would you say that most developers could recognise a component if they saw one?
- Clearly-defined purpose
- Clearly-defined API
- Clearly-defined data structures
- Separation of concerns
- Reusable
- Re-installable on multiple computers
- Replaceable / substitutable
But seeing one when designing (or refactoring!) software is something fewer PHP developers have had the opportunity to practice?
Stu is, of course, absolutely right. It is impossible to extract code into a reusable component if you don't even know how to identify it. Like many, MANY skills in programming, the ability to refactor and extract code is a skill that needs to be cultivated and learned. I myself have run into this many times during a coding session while refactoring. Does this sound familiar?
- implement some functionality
- get a request to add something
- realize that the new request is similar to something you've already done
The trick is realizing that the next step in this chain is not "cut-and-paste the previous functionality because we supposedly have no time". The next step is to extract that functionality into something that can be re-used. Usually this is in the context of the application itself (i.e. extracting that code into a helper method if you're using a framework), but it is worth thinking about how to make that a component that can exist OUTSIDE of the application itself.
More from Stu:
I think you hit the nail on the head towards the end of your article, when you started talking about services. A component could be defined as being:
- a self-contained set of code
- that provides a reusable service
- to a larger body of code
- by being aggregated into that code
This differentiates it from a service-oriented architecture in one crucial detail: a component runs as part of your app - same address space, same process ID - whereas a service runs outside your app, and is contacted either locally via IPC or remotely via networking.
But none of that helps the first-time component writer, I fear! This is big-picture stuff, or perhaps better described as 20/20 hindsight stuff - things that developers can only see after they've learned how to do it :) What they need is their first step to making a component - an additive process that builds on that first step until components are as natural a strategy as factories, DI, and the like. This is very similar to how one teaches martial arts, where we start from the floor (how a fighter stands, how they step) and work upwards.
Stu goes on to share some super sekret info with me surrounding his plans in this area and I look forward to seeing them come to fruition. Thanks Stu!
Users of the PC version of Gears of War have been unable to run the game since yesterday (29th January 2009). If they try, they get a message:
You cannot run the game with modified executable code
Joe Graf from Epic has acknowledged the problem:
We have been notified of the issue and are working with Microsoft to get it resolved. Sorry for any problems related to this. I’ll post more once we have a resolution.
The workaround is to set back your system clock. An ugly solution. Of course, some users went through the agony of full Windows reinstalls in an effort to get playing again.
So what happened? This looks to me like a code-signing problem, not a DRM problem as such, though the motivation for it may have been to protect against piracy. Code signing is a technique for verifying both the publisher of an executable and that it has not been tampered with. When you sign code, for example using the signwizard utility in the Windows SDK, you have to select a certificate with which to sign, and then you have an option to apply a timestamp. The wizard doesn't mention it, but the consequences of not applying a timestamp are severe:
Microsoft Authenticode allows you to timestamp your signed code. Timestamping ensures that code will not expire when the certificate expires because the browser validates the timestamp. The timestamping service is provided courtesy of VeriSign. If you use the timestamping service when signing code, a hash of your code is sent to VeriSign’s server to record a timestamp for your code. A user’s software can distinguish between code signed with an expired certificate that should not be trusted and code that was signed with a Certificate that was valid at the time the code was signed but which has subsequently expired … If you do not use the timestamping option during the signing, you must re-sign your code and re-send it out to your customers.
Unfortunately, there is no timestamping for Netscape Object Signing and JavaSoft Certificates. Therefore you need to re-sign your code with a new certificate after the old certificate expires.
I don’t know if this is the exact reason for the problems with Gears of War, and I’m surprised that the game refuses to run, as opposed to issuing a warning, but this could be where the anti-piracy measures kick in. Epic’s programmers may have assumed that the only reason the certificate would be invalid is if the code had been modified.
I blogged about a similar problem in February 2006, when a Java certificate expired causing APC’s PowerChute software (a utility for an uninterruptible power supply) to fail. That one caused servers to run slow or refuse to boot.
As far as I know, there is no way of telling whether other not-yet-expired certificates are sitting on our PCs waiting to cause havoc one morning. If there are, I hope none of them affect software running, say, Air Traffic Control systems or nuclear power stations.
If you are a Windows developer, the message is: always timestamp when signing your code.
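With the current Windows SDK the same advice applies via the signtool utility; a sketch, where the certificate file, password, and timestamp server are placeholders you would substitute with your own:

```shell
# Sign and timestamp in one step; the timestamp server countersigns with
# the time of signing, so the signature remains valid after the
# certificate itself expires.
signtool sign /f mycert.pfx /p MyPassword /t http://timestamp.example.com myapp.exe

# Verify the signature (and its timestamp) afterwards
signtool verify /pa myapp.exe
```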
import os

import matplotlib.pyplot as plt
import scipy.integrate as integrate
from matplotlib import animation
from numpy import arange, pi

# Project-local modules providing generateData()
from mathdefs import *
from svans import *

v = -0.0850769519806
angle = 2 * pi / 3

def generateFrames(t):
    C = 1
    curve = generateData(C, angle, v, t)
    cvals = arange(-1, 0.99, 0.1)
    stoftar = [generateData(c, angle, v, t) for c in cvals]

    fig = plt.figure()
    ax = plt.axes(xlim=(-200, 200), ylim=(-200, 200))
    line, = ax.plot([], [])
    stoftlines = []
    for _ in range(len(stoftar)):
        l, = ax.plot([], [])
        stoftlines.append(l)
    ax.plot(0, 0, 'yo')  # the sun
    ax.set_aspect('equal')

    def animate(i):
        x = [k for k, _, _, _ in curve[:i * 50]]
        y = [r for _, r, _, _ in curve[:i * 50]]
        line.set_data(x, y)
        for j in range(len(stoftlines)):
            x = [k for k, _, _, _ in stoftar[j][:i * 50]]
            y = [r for _, r, _, _ in stoftar[j][:i * 50]]
            stoftlines[j].set_data(x, y)
        # With blit=True, every artist that was modified must be returned
        return [line] + stoftlines

    # call the animator. blit=True means only re-draw the parts that have changed.
    anim = animation.FuncAnimation(fig, animate, frames=t // 50,
                                   interval=100, blit=True)
    print("Now trying to save...")
    anim.save('basic_animation.mp4')

generateFrames(10000)
HSRP (Hot Standby Router Protocol) is one of the First Hop Redundancy Protocols (FHRP) and is Cisco proprietary. There is a specific set of commands for configuring HSRP on Cisco devices. In this lesson, we will learn HSRP configuration on Cisco routers.
For our Cisco HSRP Configuration Example on GNS3, we will use the GNS3 network topology below. Some of the configurations below were done before this example; here, we will focus on the HSRP part only. As a summary, the configurations below are needed on each router.
R1 & R2
The IP configuration of the topology will be done as shown in the topology picture.
Now, let’s start our Cisco HSRP Configuration with Router 1.
You can Download GNS3 HSRP Configuration File.
Table of Contents
On Router 1, at the beginning, we will configure the interface IP address. After that we will start the HSRP configuration. Firstly, we will assign 10 as the HSRP group number. Then, we will assign a priority value of 110. As you know, the default HSRP priority is 100, and the router that has the highest HSRP priority is elected as the Active router. After that, we will use the "preempt" command to enable preemption. Lastly, for tracking, we will assign a decrement value for a link. If a failure occurs on this link, the priority value will decrease, and this will affect the Active router selection.
R1(config-if)# ip address 10.0.0.1 255.255.255.0
R1(config-if)# standby 10 ip 10.0.0.3 // Virtual IP for HSRP group 10
R1(config-if)# standby 10 priority 110 // Assigning priority
R1(config-if)# standby 10 preempt // Forceful assignment of the Active role
R1(config-if)# standby 10 track FastEthernet0/0 20 // Tracking the WAN interface for failover
We will configure Router 2 like Router 1. Here, we will not use a tracking command and will give a different HSRP priority value. The physical interface address of this router will be different, but the standby (virtual) IP address will be the same as on Router 1.
R2(config-if)# ip address 10.0.0.2 255.255.255.0
R2(config-if)# standby 10 ip 10.0.0.3 // Virtual IP for HSRP group 10
R2(config-if)# standby 10 priority 100 // Assigning priority (100 – default)
R2(config-if)# standby 10 preempt // Forceful assignment of the Active role
To verify our Cisco HSRP configuration, we will first use the "show standby brief" command on R1 and R2. With this command, we will see the Active and Standby routers. As you can see below, R1 is the Active router.
And, R2 is the Standby router, in other words, backup router.
The traffic will traverse R1, because R1 is the Active router.
To test our HSRP configuration during a failure, we will shut down the fa0/0 interface of R1. Remember, we have tracking configured in HSRP for fa0/0, whose action is to decrement the priority value by 20 when the interface goes down.
After this shutdown, Router 1's HSRP priority will decrease from 110 to 90, and this will make Router 1 the new Standby router, because Router 2's priority of 100 is now higher than Router 1's new priority.
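The failover arithmetic can be sketched in a few lines of Python (a simplified model of the election, not Cisco's implementation):

```python
def effective_priority(priority, tracked_up, decrement):
    """HSRP priority after interface tracking is applied."""
    return priority if tracked_up else priority - decrement

def active_router(priorities):
    """The router with the highest effective priority wins the Active role."""
    return max(priorities, key=priorities.get)

# R1: priority 110, tracking fa0/0 with decrement 20; R2: priority 100
before = {"R1": effective_priority(110, True, 20), "R2": 100}   # R1 -> 110
after = {"R1": effective_priority(110, False, 20), "R2": 100}   # R1 -> 90

print(active_router(before))  # R1
print(active_router(after))   # R2
```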
As you can see below, Router 2 is now in Active state.
And if we use, trace command to check the traffic way, we will see that, traffic will go through Router 2.
The Rise & Fall of The Mall
Published at: 07 Dec 2020
Subscribe to Ordinary Things
Outro by Jeff Jensen:
When did the first mall in America open? When was the mall's heyday? And are all malls doomed to be dead and abandoned? Ride the vaporwave to the malls of yesterday...and tomorrow to find out.
I'm kind of a big fan of this whole game development and game developers conference thing. This is especially true since the main conference started on Thursday. The Indie/Serious Game Summits are both fantastic, but the lectures and sessions in the main conference are just so good. And it's hard to deny how awesome it is to see people you respect and who made great games talk about a topic they're passionate about.
After the normal, at this point, morning in the Marriott lobby writing about the prior day, I went on over to the conference to attend Richard Rouse III's "Environmental Narrative" talk. Coincidentally (or not?) enough, this session took place in the same room as the excellent Harvey Smith and Matthias Worch talk on environmental storytelling on Thursday. This means that there was a significant number of people who wanted to get into this session in one of the smaller rooms of the conference who were unable to fit in. Rouse's lecture went through a series of examples of various types of objects/scenarios that can be used both to convey a story in the environment and to aid players in navigating a level via visual cues and flow hints. Much like Smith/Worch's talk, Bioshock was frequently cited as a brilliant recent example of a game with a very carefully and effectively designed environmental narrative. Once Rouse had gotten through a series of techniques and practices, he used his work on The Suffering (a superb game, by the way) to demonstrate ways that he and the rest of the development team handled the game's design. One of the more interesting examples is that, despite gathering an abundance of information on prisons through the internet, The Suffering's development team did not actually get to visit a real prison until late in the game's development. This trip gave them several ideas as to how they could make a more cohesive, believable prison (such as using awful shades of paint to visually separate various wards of the prison), but since it was so late in development a lot of the more interesting discoveries were unable to be used.
While Rouse presented some solid level design techniques and ideas, I feel like the entire presentation failed to make the leaps in critical thinking and design methodology when it was so close to doing just that. And this was actually an issue I discovered with a couple of sessions throughout the day: a seeming unwillingness to attempt to draw general design lessons from experiences or to think critically about why (and where) a given design technique "works." Going up to the podium to talk about how a game handled its approach to level design is interesting, but failing to think critically about why that design approach works is a step I consider both incredibly useful to a wider audience of designers and necessary for a compelling lecture. Granted, it's hard to think critically about why the practices and techniques we employ as designers "work" (or don't), but it's the effort put into that thought which should define our role as designers. When I think about the talks/presentations I've heard from GDC either in-person or ones which have been archived online, they're the ones that make that extra logical leap to answer "why?" When Clint Hocking gives a talk inspired by one of his games, he talks about the design lessons (such as intentionality vs. improvisation, simulation boundary, etc.); he does not point to a feature in a game, show the audience a video, and then cap it off with "so we did that." The Worch/Smith session from the day earlier, for instance, covered how people, in general, "fill in the blanks" of a situation by going through an elaborate series of events to, ultimately, come to a conclusion. Worch/Smith then take that extra step to explain that this player-initiated investment into a situation not only enriches the environment they're in, but brings that player closer to the game as a whole. I'm not intending to single out Rouse's talk for this rant (because it's actually inspired by another session that I won't mention), but Rouse gave a very solid lecture that just came so close to that last necessary step.
Next up: Sid Meier's keynote, "The Psychology of Game Design (Everything You Know is Wrong)." I had been told by several people throughout the course of the week that, generally, the keynotes are a letdown. Supposedly this is due to the incredibly large, diverse audience of people and disciplines that keynotes have to appeal to, but I was hoping that, this being Sid Meier, it wouldn't be the case this year. Unfortunately, it was. Sid Meier took audiences through a series of explanations as to why things that seemed "cool" ended up being received poorly by players. The primary example that Meier cited was that of "Mathematics 101," which he exemplified in the display of Civilization Revolution's pre-combat information. When the aggressor had an attack rating of 1.5 and the defender had a defense rating of 0.5, Meier said this was a fairly self-explanatory display of the odds (3:1): the aggressor would win three times out of every four attempts. Players, he said, did not interpret it like this and, instead, assumed that their number was higher so they should win. He then took the audience through a few iterations of this concept in what I actually took to be a somewhat condescending manner towards the players. In essence, the combat in Civilization Revolution evolved because players couldn't get the "mathematics 101" of the game, so Meier went through several iterations to make the ratio representation make sense to the player as well as to take into account how prior battles fared, so that if the attack:defense was 2:1, then players wouldn't lose two fights in a row.
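Meier's "Mathematics 101" reading of those ratings amounts to a simple ratio; a quick sketch of the arithmetic as I understood it:

```python
def win_probability(attack, defense):
    """Naive odds model: the attacker wins attack/(attack+defense) of the time."""
    return attack / (attack + defense)

# Attack 1.5 vs. defense 0.5 is 3:1 odds -- three wins out of every four attempts
print(win_probability(1.5, 0.5))  # 0.75
```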
One of Meier's strangest examples throughout the keynote was that of flight simulators, though. He feels the genre started out by being "accessible" and "easy to play." Then as they went through iterations they became more complex and more realistic and "pretty soon the player went from 'I'm good' to 'I'm confused'. My plane is falling out of the sky." Then, Meier said, "the fun went out of it." He wrapped up this analogy by saying "keep your player feeling good about themselves." I thought this little anecdote actually put me off from a lot of the rest of the keynote: who is anyone to say that the evolution of the flight simulation genre was a bad thing? It's a definite niche genre, but that doesn't make the genre bad or completely invalidate the design evolution it took. Then again, it's an anecdote, so I'm probably over-thinking Meier's intent.
After meeting with some old friends from Stardock for a bit, I went to the "What Color is Your Hero" panel featuring Mia Consalvo, Leigh Alexander, Manveer Heir, and Jamin Brophy-Warren. Without even a doubt in my mind, the panel was one of my highlights of GDC. It was an intelligent, insightful, and important conversation about the role of diversity in both video games and in the game development community. I wish I had some of the stats that Consalvo presented at the beginning of the panel, but alas. Heir championed the idea that utilizing a character's racial/social background can enrich a game experience in ways that almost all video games fail to realize; specifically, Heir cited the Native-American protagonist in Human Head's Prey. The lead in Prey was ashamed of his background, wanted off the reservation, and was completely uncomfortable with who he was, but through the course of the game he learned to "spirit walk," talked to his ancestor in a vision (which took place at what looked like a burial site, if I remember correctly), and so on. This feature of Prey's narrative transformed what would have otherwise been a game about dudes shooting aliens into somewhat of a Native American spiritual journey.
Alexander, in a discussion about the role of developers and creatives in creating a more diverse cast of characters in their own games, raised a very noteworthy point: Resident Evil 5. In the case of Resident Evil 5, there were developers who were attempting to diversify the characters and settings of their game, and this, essentially, completely blew up in their faces. Alexander went on to say that it is understandable that a culturally homogenous development community would be nervous about attempting to portray a non-white character and subsequently screwing it up. She went on to say, however, that it can be done; the cultural/gender research just has to be done. The Wire was cited as an example of the work that series creator/writer David Simon did to present a wide variety of characters in a responsible way (though the series did take fire for its presentation of women). This was a great panel which gave a proper kick-off to some very necessary, important conversations.
My final session of the day was Lee Perry's "Prototyping Based Design: A Better, Faster Way to Design Your Game." Perry, a senior gameplay designer at Epic Games, took audiences through Epic's process for game design, starting with Unreal Tournament as the studio moved forward to the bigger, more cohesive project that eventually became Gears of War. The studio had a very design document-heavy and haphazard design process which was yielding poor results for what needed to be a more well-designed game than the studio's prior projects. Kismet, which was an unrelated tool and "smaller problem" at the time, was being developed around the time when design documents were being tossed around the studio. One day Perry mentioned that he was screwing around with Kismet and tossing scaled-up shoulder pads on a big monster in order to, in a way, get this buff, big dude in the game. He tossed some "boom" speech bits on the character, showed it to some people, and eventually this little prototyped monster made it into Gears of War.
Perry took the audience through the transition in design practices that occurred after this prototype was done; this involved the change from "design bibles" (very large, unwieldy design documents) to very active, designer-driven prototypes in the Unreal Engine using very basic Kismet parts such as elevators, triggers, and so on. Perry indicated the need for a designer to be more of a Chef, actively involved in the creation and iteration on a design, rather than a Food Critic, a designer who writes a doc and waits for the plate to be prepared by someone else before providing feedback. Perry's session was a very practical, thorough, and well-presented lecture on the importance of rapid iteration and quick prototypes when it comes to showing everyone in a studio an idea. The importance of feedback (blood, audio, camera shake, etc.) to a prototype was also stressed; regardless of how quick a prototype is, the prototype must sell everyone in the studio on the idea and, as a result, it needs to properly and effectively communicate that idea.
Immediately after this session ended, I went on over to the IGDA/GameDev.net mixer being held at Jillian's in the Metreon. I was held up at the door momentarily since I didn't have the proper "IGDA Party" ribbon on my badge, but then I flashed my badge at Joshua Caulfield at the door, said "I'm GameDev.net," and was let immediately in. I felt powerful for approximately five minutes. And that was a fun little power trip.
Finally, I ended the day with an immaculate dinner organized by Michael Abbott. I met people like Matthew Burns, Simon Carless, Borut Pfeifer, Chris Dahlen, Krystian Majewski, and oh my god the list goes on and on and on and on. It was an incredible couple of hours filled with the kind of fascinating conversation you'd expect from some of the most insightful writers in the game industry. It was a great 'end' to GDC (as I only have a couple of sessions on Saturday and then I'm off to the airport).
Given that the vast majority of PyMOL downloads logged are not for Linux (nearly 10 to 1 against), given the recent splintering of the Linux desktop market, and given the importance of maintaining backward compatibility with older Linux distributions, we must take a simple lowest-common-denominator approach toward our precompiled Linux binaries.
Thus, our 32-bit Linux builds are currently prepared in a glibc-2.3 environment using GCC3. These builds deliver reasonable performance and compatibility on a wide set of distributions.
Obviously, we cannot prepare optimized builds for each member of the diverse combinatorial population of systems (DISTRO x VERSION x RUNTIME x GPU x DRIVER) where it needs to run. Based on PyMOL usage alone, nearly 90% of our platform-specific effort should be directed at Windows and Mac, not Linux. Supporting one Linux binary out of three total is already an over-allotment.
Practically speaking, the only way to achieve top PyMOL performance on Linux is to build from source code using libraries and compilers optimal for your specific hardware and distribution. That is one of the reasons why the PyMOL open-source code is targeted at Linux.
It seems like people
do not notice any difference in speed between the different linux
distributions. Also not between KDE or Gnome. That is good to know.
Apparently I have some other problem (unrelated to pymol, probably
openGL/glx related) which causes the dramatic difference in speed between our
Redhat (5 fully patched) and Suse (10.2 fully patched). We are using the
latest nvidia drivers from the nvidia website.
Regarding Accelrys Insight/Discovery Studio: we got Insight/Material Studio (we do not
have the license for DS 1.6 or 1.7) to run under Fedora 7. PyMOL also runs fine,
so I will probably switch both machines to Fedora 7. Anyway, thanks
Laboratory of Organic
Swiss Federal Institute of Technology
[mailto:firstname.lastname@example.org] On Behalf
Of Mathias W.
Sent: 13 July 2007 12:48
Subject: Re: [PyMOL] best linux distribution to run pymol
Joris Beld schrieb:
(unfortunately i cannot switch to Suse since i cannot get
Insight/Discovery Studio to run under Suse).
This is due to the ignorance of the Accelrys developers. Even
the newest version of DS (1.7) does not run on modern
Distributions using glibc-2.4 or higher. They say that they
only support RHEL4 which is real old (and uses glibc-2.3).
I guess it would have been no big deal to test DS on a
glibc-2.4 system and to find the bug which is preventing it
from running (I don't think they are still using
linuxthreads). By the time DS 1.7 was released, glibc-2.4
was already widespread, and so I call this ignorance. I got
a test version of DS 1.7 and I don't want to use an old linux
just because of this modelling program. As you pointed out
you end up with the situation that you cannot run any other
newer program (or only with great effort)...
Granted that I am not getting money from Accelrys ;) I think we should
differentiate between support to different distributions and
compatibility with different distribution.
I have an "unsupported" DS 1.7 running on RHEL 5 (with very nice native
support for nvidia graphics) and a week ago it was running "unsupported"
on a Kubuntu 7.04 (feisty fawn).
At the moment I can not say how pymol is performing on RHEL 5.
PyMOL-users mailing list
|
OPCFW_CODE
|
Relational Database Management System
Oracle provides a flexible RDBMS called Oracle7. Using its features, you can store and manage data with all the advantages of a relational structure plus PL/SQL, an engine that provides you with the ability to store and execute program units. The server offers the options of retrieving data based on optimization techniques. It includes security features that control how a database is accessed and used. Other features include consistency and protection of data through locking mechanisms.
Oracle applications may run on the same computer as the Oracle Server. Alternatively, you can run applications on a system local to the user and run the Oracle Server on another system (client-server architecture). In this client-server environment, a wide range of computing resources can be used. For example, a form-based airline reservation application can run on a client personal computer while accessing flight data that is conveniently managed by an Oracle Server on a central computer.
Oracle8 is the first object-capable database developed by Oracle. It extends the data modeling capabilities of Oracle7 to support a new object relational database model. Oracle8 provides a new engine that brings object-oriented programming, complex datatypes, complex business objects, and full compatibility with the relational world.
Oracle8 extends Oracle7 in many ways. It includes several features for improved performance and functionality of online transaction processing (OLTP) applications, such as better sharing of runtime data structures, larger buffer caches, and deferrable constraints. Data warehouse applications will benefit from enhancements such as parallel execution of insert, update, and delete operations; partitioning; and parallel-aware query optimization. Operating within the Network Computing Architecture (NCA) framework, Oracle8 supports client-server and Web-based applications that are distributed and multitiered.
Oracle8 can scale to tens of thousands of concurrent users, support up to 512 petabytes, and can handle any type of data, including text, spatial, image, sound, video, and time series as well as traditional structured data.
Oracle8i, the database for Internet computing, provides advanced tools to manage all types of data in Web sites.
It is much more than a simple relational data store. The Internet File System (iFS) combines the power of Oracle8i with the ease of use of a file system. It allows users to move all of their data into the Oracle8i database, where it can be stored and managed more efficiently. End users can easily access files and folders in Oracle iFS via a variety of protocols, such as HTML, FTP, and IMAP4, giving them universal access to their data.
Oracle8i interMedia allows users to web-enable their multimedia data, including image, text, audio, and video. Oracle8i also includes a robust, integrated, and scalable Java Virtual Machine within the server...
|
OPCFW_CODE
|
I have replaced my local BD Address with AA:AA:AA:AA:AA:AA and my remote BD Address with
BB:BB:BB:BB:BB:BB in the following post.
We will start by detecting the Bluetooth version of the local device, in my case a ThinkPad W520. Very helpful
for me was the command hciconfig -a:

hciconfig -a
hci0:   Type: BR/EDR  Bus: USB
        BD Address: AA:AA:AA:AA:AA:AA  ACL MTU: 1021:8  SCO MTU: 64:1
        UP RUNNING PSCAN ISCAN INQUIRY
        RX bytes:1303 acl:0 sco:0 events:139 errors:0
        TX bytes:1290 acl:0 sco:0 commands:86 errors:0
        Features: 0xff 0xff 0x8f 0xfe 0x9b 0xff 0x79 0x87
        Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3
        Link policy: RSWITCH HOLD SNIFF PARK
        Link mode: SLAVE ACCEPT
        Name: 'user-THINK'
        Class: 0x00010c
        Service Classes: Unspecified
        Device Class: Computer, Laptop
        HCI Version: 3.0 (0x5)  Revision: 0x2ec
        LMP Version: 3.0 (0x5)  Subversion: 0x4203
        Manufacturer: Broadcom Corporation (15)
The LMP Version is already saying that my notebook computer has only Bluetooth 3.0.
Very helpful for detecting remote devices was the command hcitool. The following command shows all
active connections:

hcitool con
Connections:
        < ACL BB:BB:BB:BB:BB:BB handle 12 state 1 lm MASTER
The BD Address of the remote device is needed to find the Bluetooth version with the following command.
hcitool info BB:BB:BB:BB:BB:BB
Requesting information ...
        BD Address:  BB:BB:BB:BB:BB:BB
        Device Name: Nintendo RVL-CNT-01-TR
        LMP Version: 2.0 (0x3) LMP Subversion: 0x1d8d
        Manufacturer: Cambridge Silicon Radio (10)
        Features: 0xbc 0x02 0x04 0x38 0x08 0x00 0x00 0x00
                <encryption> <slot offset> <timing accuracy> <role switch>
                <sniff mode> <RSSI> <power control> <enhanced iscan>
                <interlaced iscan> <interlaced pscan> <AFH cap. slave>
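The LMP-version lookup done by eye above can also be scripted. The following Python sketch parses `hcitool info` text output and maps the LMP version code to a Bluetooth core version; the regex and the (deliberately partial) mapping table are my own assumptions, covering only the version codes that appear in these outputs:

```python
import re

# Partial LMP-version -> Bluetooth core version table (an assumption,
# covering only codes seen in the hciconfig/hcitool outputs above).
LMP_TO_BLUETOOTH = {'0x3': '2.0', '0x4': '2.1', '0x5': '3.0', '0x6': '4.0'}

def bluetooth_version(hcitool_info_output):
    """Extract the Bluetooth core version from `hcitool info` text output."""
    match = re.search(r'LMP Version:\s*\S+\s*\((0x[0-9a-fA-F]+)\)',
                      hcitool_info_output)
    if not match:
        return None
    return LMP_TO_BLUETOOTH.get(match.group(1).lower())

sample = "Device Name: Nintendo RVL-CNT-01-TR\nLMP Version: 2.0 (0x3)\n"
print(bluetooth_version(sample))  # -> 2.0
```

In practice you would feed the function the captured stdout of a `subprocess.run(['hcitool', 'info', addr], capture_output=True, text=True)` call.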
It seems to be impossible to connect a Wii Remote controller via Web Bluetooth to a web browser: the Web Bluetooth API only talks to Bluetooth Low Energy (GATT) devices, while a Wii Remote Plus controller uses classic Bluetooth 2.0, as you can see above.
|
OPCFW_CODE
|
using System;
using System.Collections.Generic;
using BeaverSoft.Texo.Core.Actions;
using BeaverSoft.Texo.Core.Markdown.Builder;
using BeaverSoft.Texo.Core.Path;
using BeaverSoft.Texo.Core.View;
namespace BeaverSoft.Texo.Commands.FileManager.Extensions
{
public static class MarkdownExtensions
{
public static void WritePathList(this MarkdownBuilder builder, IEnumerable<string> paths, string relatedTo)
{
LinksModel model = BuildLinks(paths, relatedTo);
WritePaths(model.Directories, builder);
WritePaths(model.Files, builder);
WritePaths(model.NonExists, builder);
}
public static void WritePathLists(this MarkdownBuilder builder, IEnumerable<string> paths, string relatedTo)
{
LinksModel model = BuildLinks(paths, relatedTo);
WritePathsWithHeader(model.Directories, $"Directories ({model.Directories.Count})", builder);
WritePathsWithHeader(model.Files, $"Files ({model.Files.Count})", builder);
WritePathsWithHeader(model.NonExists, $"Non-Existing ({model.NonExists.Count})", builder);
}
public static void WritePathOverview(this MarkdownBuilder builder, IEnumerable<string> paths, string relatedTo)
{
LinksModel model = BuildLinks(paths, relatedTo);
if (model.Directories.Count > 0)
{
builder.Bullet($"Directories ({model.Directories.Count})");
}
if (model.Files.Count > 0)
{
builder.Bullet($"Files ({model.Files.Count})");
}
if (model.NonExists.Count > 0)
{
builder.Bullet($"Non-Existing ({model.NonExists.Count})");
}
}
public static void WriteRawPathList(this MarkdownBuilder builder, List<string> paths)
{
paths.Sort(StringComparer.OrdinalIgnoreCase);
foreach (string path in paths)
{
builder.Bullet();
builder.Write(path);
}
}
private static void WritePathsWithHeader(List<ILink> paths, string title, MarkdownBuilder builder)
{
if (paths.Count <= 0)
{
return;
}
builder.Header(title, 2);
builder.WriteLine();
WritePaths(paths, builder);
}
private static void WritePaths(List<ILink> paths, MarkdownBuilder builder)
{
if (paths.Count <= 0)
{
return;
}
paths.Sort((first, second) =>
string.Compare(first.Title, second.Title, StringComparison.OrdinalIgnoreCase));
foreach (ILink directory in paths)
{
builder.Bullet();
builder.Link(directory);
}
}
private static LinksModel BuildLinks(IEnumerable<string> paths, string relatedTo)
{
LinksModel result = new LinksModel();
foreach (string path in paths)
{
PathTypeEnum type = path.GetPathType();
switch (type)
{
case PathTypeEnum.Directory:
result.Directories.Add(new Link(path.GetFriendlyPath(relatedTo), ActionBuilder.PathOpenUri(path)));
break;
case PathTypeEnum.File:
result.Files.Add(new Link(path.GetFriendlyPath(relatedTo), ActionBuilder.PathOpenUri(path)));
break;
case PathTypeEnum.NonExistent:
result.NonExists.Add(new Link(path.GetFriendlyPath(relatedTo), ActionBuilder.PathUri(path)));
break;
}
}
return result;
}
private class LinksModel
{
public readonly List<ILink> Directories = new List<ILink>();
public readonly List<ILink> Files = new List<ILink>();
public readonly List<ILink> NonExists = new List<ILink>();
}
}
}
|
STACK_EDU
|
How does Test Mode work?
- How to toggle an active gateway account to Test Mode?
- Flush Test Transactions
- Can I move transactions that were processed in Test Mode to Active Mode?
- As the partner, how can I check to see when a merchant account was put in Test Mode?
- If the account is an Active account and we put it in Test Mode, is it still billable?
- When the account is in Test Mode, do the AVS and CVV data get passed?
- Is there a demo account for the payment gateway and test credit card/account number that we can use for testing?
- How do you trigger errors in Test Mode?
- Virtual Pin Pad Testing via the Virtual Terminal
- Can I test Payer Authentication 2.0 transactions in Test Mode?
- What cannot be tested in Test Mode?
- Video Tutorial
Test Mode allows the merchant to toggle their entire active gateway account into and out of Test Mode. While in Test Mode, merchants can submit test transactions to the Payment Gateway. Test transactions that are submitted while the account is in Test Mode are not live and do not charge real credit cards.
The merchant's user will need the 'Access Administrative Options' permission to be able to toggle into and out of Test Mode. Primary users have this permission set by default, and it cannot be removed.
Navigating to Test Mode settings page - in the Merchant Portal, on the left side panel → click on Options → Settings → under Transaction Options click on Test Mode or from the homepage My Settings → All Settings → under Transaction Options click on Test Mode.
How does Test Mode work?
Test Mode is a helpful gateway feature for merchants who are setting up their website to accept payments so they can test functionality before starting to accept transactions. Keep in mind that if an account is in Test Mode, the gateway will simulate all valid credit cards to be approved but no charges will actually be processed and nothing will be sent to the credit card or ACH processor. Customers will not see charges on their accounts. The Gateway is simulating responses rather than reaching out to the Processor or Card Issuer for a real response. Test Mode is not user-specific (e.g. if one user puts the account in test mode then it is set for the entire account and every user who logs in will see it in test mode) and does not apply to specific sources.
***IMPORTANT*** Transactions run in Test Mode DO NOT process at the bank and the merchant will NOT be funded. Once you are done using Test Mode and are ready to process live transactions, you MUST change the account back to 'Live Mode' by going to the Options menu in the Control Panel and disabling Test Mode.
Any transactions, recurring subscriptions, customer vault IDs, or invoices that you create in Test Mode will not appear in live mode and any transactions, recurring subscriptions, customer vault IDs, or invoices created in live mode will not appear in Test Mode.
An account in Test Mode will have a pink pop up upon logging in, and a Test Mode notice will ‘hover’ in the upper right-hand corner once you close the popup.
How to toggle an active gateway account to Test Mode?
First, inform the merchant that changing their live account to Test Mode will cause real transactions to NOT process, and the merchant will NOT be funded.
Log in to the Merchant Portal, on the left side panel → click on Options → Settings → under Transaction Options click on Test Mode, from here click on the "Enable Test Mode" button; or from the homepage under Utilities → click on Settings → under Transaction Options click on Test Mode, from here click on the "Enable Test Mode" button.
You will follow the same steps when you are ready to disable Test Mode.
Flush Test Transactions
Test Transactions can be purged, or ‘Flushed’, using the tools in the Test Mode section. Simply select the type of record you wish to flush (Transactions, Subscriptions, Vault Records, Invoices, and Product Manager) and the Date Range, and click the "Flush Test Transactions" button.
This action is not reversible, and the information is NOT recoverable. Flushed records are purged from our databases.
Can I move transactions that were processed in Test Mode to Active Mode?
No. Transactions that were done in Test Mode CANNOT be moved to Active Mode. Those transactions will have to be run again in Active Mode. Please warn your merchants about this prior to activating Test Mode.
As the partner, how can I check to see when a merchant account was put in Test Mode?
To check when the merchant account was put into Test Mode, log in to your Partner Portal and head over to ‘List Accounts’ → click on the merchant account → under ‘Merchant Status’ click on ‘Edit’. This section will display a 'History' log where you can see when the status of the account was changed, by which user, and the date and time from oldest to newest. Test Mode status is
If the account is an Active account and we put it in Test Mode, is it still billable?
If you take an active account and put it into Test Mode, the account is still billable; however, the per-transaction fees are not. Meaning, when you are running a test transaction in Test Mode, the merchant is not being billed the per-transaction fees.
When the account is in Test Mode, do the AVS and CVV data get passed?
The AVS and CVV status will not be updated since that data is not being passed in a test account. This is a default setting.
However, if the merchant wants to test the AVS and CVV status we can enable a flag (listed below) that will allow the merchant to do so. We will need this request in an email. Please include the merchant's business name, their gateway ID, the request, and send it to firstname.lastname@example.org.
The flag is called 'Use New Test Mode Responses'.
Is there a demo account for the payment gateway and test credit card/account number that we can use for testing?
Transactions can be tested using one of two methods. First, transactions can be submitted to any merchant account that is in Test Mode. Second, the Payment Gateway Demo Account can be used for testing at any time. For more information, please visit our integration portal under the Dedicated test account section. The integration portal contains Test Data that you may use for either method.
You may also use the following username and password in your message for testing with this account:
How do you trigger errors in Test Mode?
More information on triggering errors in Test Mode can be found in our integration portal - Testing Information, at the bottom of the page.
Virtual Pin Pad Testing via the Virtual Terminal
The Virtual Pin Pad (VPP) can be used for testing the Customer-Present Cloud API without a device and via the Virtual Terminal. To do so, use our virtual registration codes which can be submitted to any Test Gateway Account or a gateway account that is in Test Mode.
Enter the T00001 code in the License Manager, and your virtual "fake" device will pop up in the Virtual Terminal Sale, Auth, and Credit pages. This registration code returns a valid "virtual" POI Device ID which will only work in Test Mode or a test account.
To use this code, in the Merchant Portal (while the account is in Test Mode) go to the left side panel → click on 'Options' → 'Settings' → under 'General Options' click on 'License Manager' → go to 'Registered Devices'* → enter this code in the 'Registration Code' field → click the 'Register' button.
*the 'Registered Devices' section is only available on merchant accounts that have the Value-added Service "Encrypted Devices" active.
- The virtual POI Device ID will be randomly generated, except for the last section, which will be all zeroes.
- If developers wish to generate a failed registration, they can send T00002. This will simulate an invalid registration code error.
For more information on using the Virtual Pin Pad for testing the Customer-Present Cloud API without a device, please visit our Integration Portal → VPP Testing Information.
Can I test Payer Authentication 2.0 transactions in Test Mode?
Yes. Payer Authentication 2.0 can be tested when a merchant account is in Test Mode AND the merchant account has the Payer Authentication 2.0 service active on their merchant account. When a merchant toggles themselves into Test Mode, they can run test transactions that will use test credentials for the Payer Authentication 2.0 service they have active.
What cannot be tested in Test Mode?
- The Miura M0x0 will not work when the account is in test mode. When attempting a transaction with a Miura, the gateway account will need to be in live mode.
- Test Mode does not simulate Level III data entry. It will display the information entered, but will not display the 'Level III Data Success' line in the transaction history at the bottom of the detail, or in general reporting.
- Production Cloud Terminals will not work in Test Mode.
- Transaction Routing will work on a Test Gateway Account, but not a live account when the account is in Test Mode.
|
OPCFW_CODE
|
Earlier this week, I wrote a post on how programming requires little more than your own head to build things. I had intended to follow it up with a post about what programming actually does require in terms of physical materials and resources. I wasn’t in a rush to write it, though, since the hardware and infrastructure details of programming are very real, yet not in the spirit of the original post. The reality of the limitations hit me today, though, as my web hosting provider is suffering an outage.
I can still code, and my website still works “on my machine.” But the sad reality is that I cannot share it with anyone else at the moment. A web app won’t work without a hosting provider. An online game can’t run without a server. The real limit of “coding is purely with your mind” is that you need a lot of real devices and infrastructure in place to distribute the code.
A Very Real Physical Requirement of Coding: Distribution
The Internet really is just a big “net” of wires. It is literally a “web” of cables running all around the world. Sure, there are wireless aspects of this Internet, but even those devices have cables running around houses and cities and countries. At the ends of these cables are machines — real, physical computers — that respond to requests for web pages and services. When I log into my blog, I’m sending electric signals from my computer through a series of other cables that ultimately connect to my web hosting provider’s servers. These servers are physical machines.
These physical machines need a physical place to be stored in. They need tons of electricity to stay powered on. They need air conditioning to keep them cool. They also need a lot of cables running to them. The types and volume of these cables determine how much network traffic can pass through them. If the cables can’t handle the traffic, then they will get “clogged.”
Other distribution problems are easier to imagine if you are making a game for a specific gaming console. It’s obvious to understand that a Nintendo Switch game needs a user with a real Nintendo Switch with which to play it. (Emulators and such intentionally ignored). Even though I never had to pay for a manufacturer to build physical game discs for my games, I still needed users who had physical devices to play them on.
Distribution as a Limiting Factor: Depends on the Volume
The distribution problem in programming really depends on how wide you want to distribute it. Usually, in the beginning, without customers, you will not be limited by distribution. Most people argue that an MVP should not worry itself about scalability. But scalability is a real problem… eventually.
The distribution problem can be ignored when brainstorming and prototyping a product. I don’t worry about it when making games. There are plenty of people with iPhones and Androids, and each has a store with sufficient uptime and bandwidth to handle users downloading my game.
In the end, the problem with my blog today was that, although I could write and code, I couldn’t distribute it to end users. As I finished writing up this post, my hosting provider came back online. So I can now continue to distribute my product to users.
Leave a Comment
|
OPCFW_CODE
|
Flutter: Soft keyboard animation is causing tremendous jank on iOS after updating from Flutter 2.2.3 to Flutter 3.0.0
Since I updated my application code from Flutter 2.2.3 to Flutter 3.0.0, I am facing tremendous jank in my application whenever the soft keyboard is opened and closed.
While this jank is more visible on iOS, it is not absent on Android.
Link to demo of the issue (note that this a high-end iOS device hence the jank is probably the least on it): https://user-images.githubusercontent.com/53447798/173196265-f2de6864-2e6c-4bab-9253-faac7735ece1.MP4
My research showed that this has to do with a new feature that was introduced after 2.2.3, which is "Smooth keyboard animation on iOS". You can learn more about it here: https://github.com/flutter/engine/pull/29281
As it turns out, due to the new feature, during the keyboard opening or closing animation, the MediaQuery changes several times, causing all widgets using MediaQuery to rebuild causing the jank. The issue, however, is that the widgets that I have used do not use the height parameter of MediaQuery which is changing due to the keyboard. In fact, my widgets only use the width parameter (i.e. MediaQuery.of(context).size.width) which does not change with the keyboard opening. However, MediaQuery resets completely and does not just update one aspect (i.e. height).
To fix this, moffatman suggested the following solution which allows MediaQuery to use InheritedModel and update just one aspect: https://github.com/flutter/flutter/pull/97928
However, this solution has not yet been merged to Flutter beta or stable channels so I do not know how to use it.
So my questions are as follows:
Is there any other workaround for this? (Note that downgrading back to 2.2.3 is not an option as I need a lot of the new features from 3.0.0)
If not, how can I use the solution suggested by moffatman? Do I have to wait, or is there a reasonably easy way to use his solution in my code? (Note that my app is in production and has live users.)
Lastly, if all else fails, is it possible to not use this specific feature (i.e. smooth keyboard animation) from Flutter 3.0.0?
I am facing a very similar issue with Flutter 3.7.7. Problem is, I make no use of MediaQuery that I could get rid of, but some of my dependencies probably do. Have you found a workaround yet? Is https://github.com/flutter/engine/pull/29281 something we can turn off by any chance?
|
STACK_EXCHANGE
|
Topologically associating domains (TADs) are genomic self-interacting regions containing multiple genes. To investigate their function and evolution we have studied the position of pairs of paralog genes with respect to TADs. We observed significantly more pairs within TADs than expected. Since most paralog gene pairs are formed by tandem duplication, we propose that there is selective pressure to keep paralogs in the same TAD. Paralogs can have related functions and might require common regulatory mechanisms, and our results support that TADs may provide such mechanisms. We also found that paralog pairs within TADs have a bias toward fewer contacts than similar pairs of genes; their encoded proteins also interact less than expected. Our interpretation of these results is that there is a population of paralog pairs within TADs that code for subunits that replace each other in complexes and thus need to be expressed in an exclusive manner.
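The core counting step of such an analysis can be sketched in a few lines of Python. This is only a toy illustration of testing whether both genes of a pair fall inside a common TAD, with assumed data formats (TADs as (chrom, start, end) intervals, genes as (chrom, position) points); a real analysis would additionally randomize gene positions to estimate the expected fraction:

```python
from collections import defaultdict

def build_tad_index(tads):
    """Group TAD intervals by chromosome for lookup.

    `tads` is a list of (chrom, start, end) tuples (an assumed format)."""
    index = defaultdict(list)
    for chrom, start, end in tads:
        index[chrom].append((start, end))
    return index

def same_tad(index, gene_a, gene_b):
    """True if both genes, given as (chrom, position), lie inside one common TAD."""
    chrom_a, pos_a = gene_a
    chrom_b, pos_b = gene_b
    if chrom_a != chrom_b:
        return False
    return any(start <= pos_a < end and start <= pos_b < end
               for start, end in index[chrom_a])

def fraction_within_tads(tads, gene_pairs):
    """Fraction of gene pairs whose two members share a TAD."""
    index = build_tad_index(tads)
    hits = sum(same_tad(index, a, b) for a, b in gene_pairs)
    return hits / len(gene_pairs)

# Toy data: two adjacent TADs on chr1 and three paralog pairs.
tads = [("chr1", 0, 1_000_000), ("chr1", 1_000_000, 2_000_000)]
pairs = [(("chr1", 100_000), ("chr1", 400_000)),      # same TAD
         (("chr1", 900_000), ("chr1", 1_100_000)),    # straddles a border
         (("chr1", 1_200_000), ("chr1", 1_800_000))]  # same TAD
print(fraction_within_tads(tads, pairs))  # 2 of 3 pairs share a TAD
```

Comparing this observed fraction to the one obtained after shuffling gene positions would give the "more pairs within TADs than expected" signal described above.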
We provided further evidence of the functional importance of TADs by interpreting the pathological effects of chromosomal abnormalities in non-coding regions of 17 subjects in terms of the 3D structure of the genome. The individuals were selected for balanced chromosomal abnormalities (translocations and inversions) apparently not affecting coding genes, but suffering from abnormal developmental and cognitive phenotypes. Many of these rearrangement breakpoints disrupt TADs. We used known chromatin contact information to predict the genes whose expression could be disrupted by the rearrangements and computed the similarity between the phenotypes of the affected individuals and the annotated phenotypes of genes close to the rearrangement breakpoints. This resulted in novel associations of genes to developmental diseases and provided computational evidence of a pathological mechanism by which structural variants disrupt 3D genome architecture and thus gene regulation.
Yet another way to study the importance of TADs is the analysis of their resilience to genomic rearrangements along evolution. We compared the human genome to other genomes and observed that the borders of regions that can be aligned coincide significantly with those of TADs. In fact, sometimes TADs are rearranged differently in different organisms, but this then leads to modifications of the patterns of expression of the genes concerned. We deduced this from observations that the pattern of expression across tissues of a gene is more similar in mouse and human if the gene is in a conserved TAD.
There are different sequencing techniques available to measure chromatin accessibility. Interpreting the results computationally using peak calling algorithms is currently very sensitive to parameter settings. We have developed a method to predict chromatin accessibility from transcriptomics data, which can be used to complement the chromatin accessibility assays. The method was trained using public datasets of transcriptomics and DNase-seq data, and can be used to predict chromatin accessibility or to optimize the peak calling algorithms. Regarding the function of genes within TADs, we observed that genes in TADs with fewer genes are more often associated to disease. This, together with other observations, including that TADs with higher ratios of enhancers to genes also have more disease-associated genes, suggests that larger TADs accommodating complex regulatory networks (more genes and more shared enhancers) increase the robustness of the gene regulatory network, supporting the role of TADs in gene regulation.
We developed a method (7C = Computational Chromosome Conformation Capture by Correlation of ChIP-seq at CTCF motifs) to predict chromosomal contacts based on a repurposing of ChIP-seq data. ChIP-seq reports genomic regions that interact with proteins. In human and other species, the CTCF protein interacts with genomic DNA and by dimerization creates a loop. Such a loop brings together two DNA positions that are far apart in sequence, and the proteins bound there can be detected as two separate, symmetrical peaks in the ChIP-seq of the corresponding protein. In combination with the detection of CTCF binding motifs, we have used these signals to predict the formation of such loops. The observation that several proteins allow this type of prediction suggests their involvement in complexes at CTCF-regulated contacts.
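The convergent-motif pairing at the heart of this idea can be sketched in a few lines of Python. This is a toy illustration, not the 7C method itself (which correlates ChIP-seq signal at CTCF motifs genome-wide); the function names, the distance limit, and the peak tolerance are all assumptions:

```python
def near_peak(pos, peaks, tol=500):
    """True if a ChIP-seq peak summit lies within `tol` bp of the position."""
    return any(abs(pos - p) <= tol for p in peaks)

def predict_loops(motifs, peaks, max_span=1_000_000):
    """Pair convergent CTCF motifs whose anchors both carry a ChIP-seq peak.

    `motifs` is a list of (position, strand) with strand '+' or '-'; a loop
    is predicted between a '+' motif and a downstream '-' motif within
    `max_span` bp (the convergent orientation typical of CTCF loop anchors)."""
    loops = []
    for pos_a, strand_a in motifs:
        if strand_a != '+' or not near_peak(pos_a, peaks):
            continue
        for pos_b, strand_b in motifs:
            if (strand_b == '-' and pos_a < pos_b <= pos_a + max_span
                    and near_peak(pos_b, peaks)):
                loops.append((pos_a, pos_b))
    return loops

# Toy example: two convergent motifs supported by peaks, one unsupported motif.
motifs = [(10_000, '+'), (250_000, '-'), (600_000, '-')]
peaks = [10_100, 249_800]
print(predict_loops(motifs, peaks))  # [(10000, 250000)]
```

The requirement of a symmetrical peak at both anchors is what filters motif pairs down to likely loop contacts.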
We participated in a benchmark of methods evaluating data from ATAC-seq applied to single-cell samples. Assay for Transposase Accessible Chromatin using sequencing (ATAC-seq) is a sequencing technology that reports chromatin-accessible regions in a genome. Its application to single cells is challenging due to the sparsity of the detected peaks. Out of ten methods evaluated on real and synthetic datasets, SnapATAC, Cusanovich2018 and cisTopic performed best. The fact that SnapATAC was the only method that could analyse a dataset of more than 80K cells indicates that memory requirements are an important issue posed by single-cell datasets.
Ibn-Salem, J., E.M. Muro and M.A. Andrade-Navarro. 2017. Co-regulation of paralog genes in the three-dimensional chromatin architecture. Nucleic Acids Research. 45, 81-91.
Zepeda-Mendoza, C.J., J. Ibn-Salem, T. Kammin, D.J. Harris, C. Redin, H. Brand. D. Rita, K.W. Gripp, J.J. Mackenzie, A. Gropman, B. Graham, R. Shaheen, F.S. Alkuraya, C.K. Brasington, E.J. Spence, D. Masser-Frye, L.M. Bird, E. Spiegel, R.L. Sparkes, Z. Ordulu, M.E. Talkowski, M.A. Andrade-Navarro, P.N. Robinson, C.C. Morton. 2017. Computational prediction of position effects of apparently balanced human chromosome rearrangements. Am. J. Hum. Genetics. 101, 206-217.
Krefting, J., M.A. Andrade-Navarro and J. Ibn-Salem. 2018. Evolutionary stability of topologically associating domains is associated with conserved gene regulation. BMC Biology. 16, 87.
Jung, S., V. Espinosa Angarica, L. Dutan Polit, M.A. Andrade-Navarro, N.J. Buckley, A. del Sol. 2017. Prediction of chromatin accessibility in gene-regulatory regions from transcriptomics data. Scientific Reports. 7, 4660.
Muro, E.M., J. Ibn-Salem and M.A. Andrade-Navarro. 2019. The distributions of protein coding genes within chromatin domains in relation to human disease. Epigenetics and Chromatin. 12, 72.
Ibn-Salem, J.I. and M.A. Andrade-Navarro. 2019. Computational Chromosome Conformation Capture by Correlation of ChIP-seq at CTCF motifs. BMC Genomics. 20, 777.
Chen, H., C. Lareau, T. Andreani, M.E. Vinyard, S.P. Garcia, K. Clement, M.A. Andrade-Navarro, J.D. Buenrostro and L. Pinello. 2019. Assessment of computational methods for the analysis of single-cell ATAC-seq data. Genome Biology. 20, 241.
|
OPCFW_CODE
|
I am using Sparx Enterprise Architect 9 and am facing a problem. There is a folder in a project which I have checked out properly.
Known Issues for Oracle SOA Products and Oracle AIA Foundation Pack for 11g Release 1 (184.108.40.206x)
Sep 11, 2007. In my blog entry from August 10, 2005, Oracle and XML In Action – A Real World Example, Fred Wang left me a comment: Hi. ORA-31011: XML parsing failed. ORA-19202: Error occurred in XML processing. If I am building the XML myself, I have two approaches. Enterprise Architecture & EAI.
Enterprise Architect is a collaborative modeling, design and management platform based on UML and related standards. Agile, intuitive and extensible with fully.
This section lists the features of Enterprise Architect 9.0, for the following builds:. Synchronizing with a data model created in an earlier build of EA will no. Import and export ArcGIS XML files through the XMI import and Publish Model dialogs. Prevented error showing properties dialog in Visual Studio Integration.
Enterprise Architect's Revision History Version 6.0 – Sparx Systems – Fixed Delphi parsing error due to comments inside the unit end token; fixed a CVS issue where "a.xml" was confused with "aa.xml"; improved code import to create.
Kb970437 Error Product sites – Please try one of the following options: Check the address for typing errors. Click the Back button and try
May 28, 2013. I happen to read through a chapter on XML parsing and building APIs in Java. Discover how you can achieve enterprise agility with microservices and API. O' Reilly Microservice Architecture Book: Aligning Principles, Practices and Culture. An error occurred while retrieving sharing information. Please.
In this capacity, Dave will renew his focus on creating differentiated capabilities that will. we had two greater than 10% customers in the quarter, a wholesale and enterprise carrier and a cable operator. Rounding out our top five customers.
How to install Enterprise Architect on Linux or macOS environments using Wine or CrossOver.
Validating XML Parser – DTD/Schema Schema-aware XSLT, XQuery.
Orbeon’s engineers constantly work with bleeding-edge XML and J2EE technologies and frequently publish papers and articles. They are the developers of Model 2X and.
I am trying to create a Sparx Enterprise Architect Report and need some. Recently I get the following error when I. eclipse xml-parsing enterprise-architect.
So, I have had a little meltdown (literally) in my pc, and rather than throw good money after bad, I am upgrading my pc. I'm using some of my old parts, but I am looking for some advice on some of the new parts.
1) What is your budget?
2) Are there any brands/resellers that you prefer or any you really don't like?
GPU: Powercolor, Sparkle
3) What tasks will you be performing with the Desktop?
Gaming; Browsing; Statistical Modeling; Videos; Word Processing
4) Will you be playing games on it; if so, which games?
Yes. The new Star Wars MMO and Battlefield 3 when they come out, and then a lot of older games and simulation games, etc. Not very many super high end graphics games
5) Do you mind buying parts online without seeing them in person?
6) What OS do you prefer? Windows (XP or Vista), Mac OS X, Linux, etc.
Windows 7 (I own a copy already)
7) How much hard drive space is needed?
I own the hard drives already (one 200 GB, one 500 GB)
8) What size desktop would you like?(All in one, compact, large)
I own the case already (Xigmatek)
9) Does the case need to be stylish?
10) What resolution will the screen run at? One or Two screens?
This is a little bit more difficult. I own a 20 in. monitor, but I am moving into a studio and not buying cable, so I'd also like to be able to hook my pc up to my 32 in Dynex LCD TV.
11) Do you need any particular hardware?(Ports, HDD slots, double DVD drives, etc.)
12) How would you rate your technical skills?
I am a knowledgeable amateur. I have been assembling, disassembling and fixing computers for the last 5-6 years.
13) Have you ever built a desktop before?
Yes, a dozen times or so, but often with hodgepodge parts
14) Do you need wireless connectivity?
15) When are you going to be building this?
Within the next few days.
16) Have you considered a pre-built desktop? Or even a notebook?
Not in consideration, because I have many of the parts I need already
17) Are you going to overclock?
Maybe, haven't quite decided yet.
I am upgrading because my 8800GS melted, and in the process melted the sole pci-e connector in my Gigabyte P35-DS3L motherboard. It is a dead end socket, and most of my parts are old anyways, so I have decided that I just want to upgrade, and I will sell my old parts. However, my budget isn't real high, so I am looking at the following parts:
I am a little torn here. I bought a GTX 260 with the hopes of eventually upgrading my system to the new Core i5's and sli'ing the card, but then the motherboard started to crap out. I cannot afford a motherboard that supports SLI, so I have a friend who has said he will buy the 260 for $85 from me, which I would then put towards buying a 6670. I am thinking I could crossfire that, which would give me better resolution on my large tv. Also, if I did that, I know I would need a bridge of some sort, but I'm not quite sure, as I have never built a two card system.
APEVIA JAVA ATX-JV650W 650W ATX12V / EPS12V. This part is already owned, works great, and I am planning on keeping it.
This build rounds out to $463, but with instant and mail in rebates and some promo codes I have it runs about $423 including shipping. I am open to suggestions and evaluation. Any help is appreciated. Thanks everyone.
Also, if you're on a budget, you should buy your parts as sales come up over the course of a couple weeks.
CPU: $75 Phenom II x3 720BE
If you're going AMD, go cheap. If you're getting a Bulldozer compatible board, you may as well use a CPU that's almost as fast, but not so expensive as the 955. Also, it might unlock. This was on sale for $60 a couple weeks back.
I don't agree with the low-end crossfire system. It has its merits, but I think you'd be happier buying a $130 6850 and crossfiring that in 6 months or a year.
My experience is that PSU's under a certain price threshold are a crapshoot. Most work fine, some work like ***, regardless of the brand. That being said, this psu isn't particularly old, and the few tests I've run on it have come back clean. I'll consider a new one.
Normally, I would buy parts piecemeal; it's how I've built my previous systems. I am moving next Friday, however, and won't have a great setup for receiving packages, as I will be living in a studio in a backyard. It's just not a good situation to be constantly receiving packages.
I'm not entirely sure how to describe what happened. Moved it into a new, much more properly cooled case, and then three weeks later I got a freeze up with mass discoloration. Restarted, and the card couldn't make it past the windows login screen. When I removed the card, it had gotten so hot that it had burnt the EVGA warranty sticker to almost nothing. I'm thinking it got running real hot in my house (sometimes the ambient house temperature is upwards of 85 to 87 degrees), and just couldn't handle it for whatever reason. At this point, it doesn't work; the display from post to login screen is heavily color and placement distorted (lines don't match up, etc).
PSU: Here's why I'm wary of Apevia http://www.newegg.com/Product/Product.aspx?Item=N82E168...
Notice the 550W Antec provides substantially more power on its +12V rails? With 4 rails, you can only guess how much power is actually delivered (it depends on specifics of the internals of the PSU), but if we are to assume the Antec is made at least as well as the Apevia...
(13A/18A)*550W = 397W. That Apevia is quite likely more on the order of a quality 400W PSU. So I would be careful with that "650W" rating. Another reason I'm wary of Apevia: http://www.eggxpert.com/forums/thread/323050.aspx
Then again, a single 8800GS build should be fine on 400W.
First...Do you live near a Micro Center? That's the most important thing.
You say you have a 20" monitor, but never mention resolution. I will assume 1440x900, which isn't very demanding. You also say $450 max--and since you start with a Phenom build I'm going to try to go as cheap as I can and undercut that by $100. If you're actually set on spending that much, I'd go for an i5-2500K for $205 and spend a little more for the extra longevity. But honestly, I'd wait a couple weeks to see what Bulldozer does to prices.
Let Us Take Care of Your Statistical Visualization Assignment Needs
When it comes to tackling your statistical visualization assignments, we understand the importance of precision, expertise, and delivering quality results. With our team of experienced professionals, you can trust us to handle your assignment with utmost care. We specialize in statistical visualization techniques and have a deep understanding of the tools, principles, and best practices required for effective data representation. Whether you're struggling with choosing the right visualization technique, analyzing complex datasets, or creating visually compelling graphics, we are here to assist you. Rest assured, your assignment is in capable hands as we strive to provide the professional assistance you need to excel in your statistical visualization coursework.
Error-Free Statistical Visualization Assignments: Ensuring Accuracy and Precision
In the realm of statistical visualization assignments, accuracy and precision are paramount. A single error in data representation or visualization technique can lead to misleading interpretations and flawed conclusions. That's why our focus on providing error-free statistical visualization assignments is of utmost importance. With our expert assistance, you can rest assured that your assignments will be meticulously reviewed and verified for accuracy. Our team of experienced professionals will guide you in selecting the appropriate visualization techniques, ensuring data integrity, and implementing best practices. By prioritizing error-free assignments, we aim to equip you with the skills to create visually compelling representations of data that accurately reflect the underlying statistical insights. With our support, you can confidently tackle your statistical visualization assignments, delivering work that is precise, reliable, and of the highest quality.
Extensive Topics Covered in Statistical Visualization Assignment Solving Services
Our Statistical Visualization Assignment Solving Services offer a comprehensive range of topics to cater to students' diverse needs in this field. With our expert assistance, students can delve into various key areas of statistical visualization, mastering essential concepts and techniques. From principles of data visualization and interactive visualizations to geospatial data visualization and network visualization, our services cover a wide spectrum of topics. Additionally, we provide guidance on visualization tools, data storytelling, ethical considerations, and more. With our in-depth knowledge and support, students can confidently tackle their assignments and develop a strong foundation in statistical visualization.
| Topic | What our service covers |
| --- | --- |
| Data Visualizations Elements | Understanding the core elements of data visualizations, including visual encoding, color mapping, and layout. |
| Data Visualization Techniques | Applying various data visualization techniques such as bar charts, scatter plots, heat maps, and tree maps. |
| Exploratory Data Analysis | Analyzing complex datasets through interactive visualizations, exploring techniques like exploratory data analysis and visual data mining. |
| Data Storytelling | Crafting compelling narratives using data, ensuring effective communication and engagement through visual storytelling. |
| Information Design and Visual Communication | Creating visually appealing and informative designs, covering aspects such as infographic design, color theory, and visual rhetoric. |
| Big Data Visualization | Visualizing large-scale and high-dimensional datasets, providing solutions for visual analytics and scalability challenges. |
| Human-Computer Interaction in Visualization | Understanding the intersection of visualization and user interaction, covering user-centered design principles and evaluation methods. |
| Interactive and Dynamic Visualizations | Creating interactive visualizations that allow real-time exploration and dynamic representation of data. |
| Geospatial Data Visualization | Visualizing geographic data and spatial relationships using techniques like choropleth maps, cartograms, and geospatial networks. |
| Network Visualization | Visualizing complex networks, such as social or biological networks, employing techniques like node-link diagrams and network metrics. |
package fsm

import (
	"fmt"
	"time"
)

// CallForWarrior notifies all users about the task.
func (m *FSM) CallForWarrior() error {
	if m.LastNotify != nil && m.LastNotify.Add(m.Config.CallInterval).After(time.Now()) {
		return nil
	}
	userList := UserMapToList(m.Users)
	message := "Hi trash agents, it's time to escort trash cans. " +
		`Reply "me" to take this mission.` + "\nLeaderboard:\n" +
		stats(userList)
	return m.NotifyUsers(userList, message)
}

// NotifyTaken notifies all users that the task is taken.
func (m *FSM) NotifyTaken() error {
	if m.LastNotify != nil {
		return nil
	}
	message := fmt.Sprintf("The mission is taken by %v", *m.Taker)
	return m.NotifyUsers(UserMapToList(m.Users), message)
}

// RemindMission reminds the taker to complete the task.
func (m *FSM) RemindMission() error {
	if m.LastNotify != nil && m.LastNotify.Add(m.Config.RemindInterval).After(time.Now()) {
		return nil
	}
	message := "Dear trash agent, is the trash escort mission done?\n" +
		"Reply the number (1-3) of trash cans you escorted.\n" +
		"I'll remind you every " + fmt.Sprint(m.Config.RemindInterval)
	return m.NotifyUsers([]*User{m.Users[*m.Taker]}, message)
}

// NotifyComplete notifies all users that the task is completed.
func (m *FSM) NotifyComplete() error {
	if m.LastNotify != nil {
		return nil
	}
	userList := UserMapToList(m.Users)
	message := fmt.Sprintf("The mission is completed by %v.\nLeaderboard:\n%v", *m.Taker, stats(userList))
	return m.NotifyUsers(userList, message)
}
The mission of VSR is to advance knowledge in distributed and self-organizing systems. Our research, education, and innovation focus lies on Internet, Web, and Social Media.
All Planspiel teams will have to present their current work state on
The presentation time per team is limited to 10 minutes sharp.
A projector together with a VGA/HDMI cable is available.
The order of the pitches is not known in advance.
The final pitch will then take place on 5 March and 6 March 2019 in a full-day fashion from 9:00 - 17:00 in room 1/305. Your team will have 30 minutes for presenting your final business, followed by a 10 minutes Q&A session. All team members are expected to attend both days in full length.
We are looking forward to seeing excellent presentations.
Feel invited to join, if you are not a Planspiel participant but interested in the team results.
Learning and Knowledge Sharing @ VSR
Starting again in 2019, we offer all our VSR students the great opportunity to exchange knowledge and experiences in our LAKS meetings.
Become part of LAKS, if you
- want to learn and try out topics that are not covered in any regular lecture
- like to work with other students in an informal, exciting way
- have questions and answers on study related topics that you want to exchange with others
- ask for feedback on an upcoming project or presentation.
LAKS is our way for effective Learning And Knowledge Sharing.
Our VSR group therefore provides
- a mailing list firstname.lastname@example.org where you can subscribe here
- a regular meeting place each Wednesday at 10:00 in our VSR lab 1/B203
LAKS works in an agile, unsupervised, self-organizing, facultative and open-space fashion.
See you on 16th January 2019 at 10:00 in our VSR lab 1/B203.
Best Paper Award for Maik Benndorf
We are pleased to announce, that our researcher Maik Benndorf won the Best Paper Award at the 7th International Conference on Network, Communication and Computing (ICNCC 2018) in Taipei, Taiwan. Congratulations to all authors.
In December 2018, our VSR research group participated in the ICT 2018: Imagine Digital - Connect Europe event of the European Commission, taking place in the Vienna International Center in Austria.
Regarding our H2020 activities, we had valuable sessions to exchange ideas and make new contacts.
If you are also involved in one of the current calls, feel free to contact us via email.
In this workshop, we worked together with the group of Prof. Dr. Sören Auer and other researchers on new visionary ways for structured scientific publishing by using Linked Data.
Our VSR research team was able to support this project with our expertise in human-centered approaches to frontend input interfaces for the collection of Linked Data and the encouragement of users to provide qualitative metadata for scholarly artefacts.
Late last year I accepted a job offer from Microsoft, got married and moved to the US. It's been a busy few months! I'm based in Redmond, working in the WCF team as a program manager. I started in early January, so I've been working for just over a month now and thought it might be worth doing a post on my thoughts on Microsoft so far.
Working at Microsoft is quite an experience. It's been a change at many levels for me. For example, moving from a small company to a ridiculously big one requires an adjustment in its own right. There are so many people, so many projects with subtle interactions between them. And let's face it - a large company can't help but breed its own unique flavor of internal politics and an insane number of meetings. I'd have to say that in most ways, the experience has met my expectations. Though I was surprised that it was several days before I had a PC and a working network login – I expected a large company to have that stuff down cold. Were my expectations just completely backwards on this one? Let me know in the comments :)
The move from dev lead to program manager is also a significant change – my productivity suite is Office now, not Visual Studio (hopefully it will get closer to 50/50 in time). In essence, program management is about determining what to build, what the user experience is like, planning and overseeing the execution and then letting customers know about the awesome thing your team built. I don't think I could be a program manager on Office, or Windows, or some other product that isn't developer focused. At least this way when I'm prototyping or refining the user experience or preparing demos I'm working with code. But I still wonder if program management is the best fit for me. It's way too early to tell at this point so I'll just have to wait and see. If you would like to learn more about program management I suggest you read Steven Sinofsky's epic, awesome post.
For my first month at Microsoft, there was stark contrast between the first two weeks and the following two. During the first two weeks, a significant portion of my time was spent on "administrivia": permissions, online training, getting yourself included in the right meetings, filling out forms, etc. At the end of my second week I began to wonder when the "work" was really going to begin – and then it promptly did! I felt very busy and quite overwhelmed for the next two weeks as my responsibilities began to crystallize. This last week felt somewhat more manageable, but I have a feeling this won't last long as various deadlines loom towards the end of the month.
I'd heard that they really liked their TLAs at Microsoft, but I imagined that it was blown out of proportion. Sure, technical people like their acronyms – hell, ask me about programming and I'll probably start spouting stuff about DDD, TDD, BDD, CQRS, IOC, etc. After spending a month at Microsoft I have to say that their reputation for acronym overuse is well deserved. It's funny, they even change the names of stuff to avoid unpleasant acronyms. Case in point – Microsoft is divided into divisions, but the "Server and Tools" division is actually referred to as a "business". How amusing.
Putting aside the quirks, the last month has been exciting, challenging and tremendously educational. It's fantastic to get a chance to learn how things work on the inside of the company that I've built my career around. I have been consistently impressed with the intelligence and approachability of my coworkers, and it is in this aspect that I really feel like the promise of Microsoft has delivered. I wanted to find an environment where I felt challenged by the people around me to learn and grow, and I can safely say I've found one.
In this chapter, you learned how to work with variations of the
Now, before you move on to Chapter 8, "Enhancing Code Structure and Organization," and start learning how to work with procedures, take a few extra minutes and improve the Dice Poker game by completing the following challenges.
Add a menu system to the Dice Poker game, including the following menu items under the File menu: Roll Dice, Roll Again, Stick, and Quit. Also add a Help menu, and provide the player with access to pop-up
Modify the Dice Poker game so that it tracks and displays information about the number of
Enhance the Dice Poker game so that it looks for two of a kind but doesn't add or subtract any dollars from the player's account for this tying hand.
Enhance the Dice Poker game so that it looks to see if the player's hand has two pairs and awards the player a dollar for this winning hand.
Whether you have realized it or not, every application that you have developed so far in this book has relied on procedures to organize and store program code. In this chapter, you will learn how to create your own custom procedures. You will learn how to create Sub and Function procedures and will understand the difference between the two. You will also learn how to pass data to your procedures for processing and how to return data from Function procedures. In addition, you will get plenty of
Specifically, you will learn how to:
Organize the programming logic that makes up your applications into procedures in order to make them easier to develop and maintain
Create custom procedures
Pass and return data to and from procedures
Streamline your applications by placing reusable code within procedures
Develop procedures that can process optional data
In this chapter's project, you will apply your new knowledge of how to work with different types of procedures to the development of the Hangman game. Figures 8.1 to 8.7 show examples from the Hangman game, demonstrating its functionality and overall execution flow.
Figure 8.1: When first started, the game displays a graphic showing an empty hangman's gallows and a series of
Figure 8.2: As the game progresses, each correct guess is displayed at the top of the window, and a visual record of every letter guessed is displayed at the bottom of the window.
Figure 8.3: The game
Figure 8.4: The game only
Figure 8.5: The game prevents the player from entering numeric input.
Figure 8.6: The game congratulates the player when the secret word has been successfully guessed.
Figure 8.7: If the player fails to guess the secret word within six guesses, the game is lost and the picture of the hangman's gallows is updated to show a full hangman image.
By the time you have created and run this game, you will have demonstrated your understanding of how to create custom Sub and Function procedures and how to use them to improve the overall organization and maintenance of your Visual Basic applications.
Open problems in human rationality: guesses
post by romeostevensit
score: 19 (6 votes)
A couple months back Raemon wrote this excellent question, to which Scott Alexander shared his ongoing list. I think it would be great to have people try to give their current guesses for a lot of these. My guesses are in a comment below. My intuition is that value is created in four ways from this:
1. Discovery of things you didn't realize you believed in the process of writing the answer.
2. Generation of cruxes if people give you feedback/alternatives for answers.
3. Realization that your guess isn't even wrong, but fundamentally wasn't built from building blocks that can, in principle, be rearranged to form a correct answer.
4. Help in coordination as people get a sense of what others believe about navigating this space. Seeing cognitive diversity on fundamental questions has helped me in this area.
Comments sorted by top scores.
comment by romeostevensit
· score: 16 (6 votes)
- 1. Which questions are important?
- a. How should we practice cause prioritization in effective altruism?
- Encourage people to follow different prioritization heuristics and see what bubbles up. Funding people who are doing things differently from how you would do them is incredibly hard but necessary. EA should learn more from Jessica Graham.
- b. How should we think about long shots at very large effects? (Pascal's Mugging)
- Seems to vary based on risk appetite and optionality. ie young people can do moonshots and recover in time to do other lower variance things.
- c. How much should we be focusing on the global level, vs. our own happiness and ability to lead a normal life?
- false dilemma. Inquiring into values shows that focusing on the global level contributes to my happiness. 'Lead a normal life' seems to be about priors on 'normal behaviors' leading to well being. But this prior seems bad, average outcomes aren't very happy.
- d. How do we identify gaps in our knowledge that might be wrong and need further evaluation?
- More focus on critiques that induce physical discomfort and avoidance.
- e. How do we identify unexamined areas of our lives or decisions we make automatically? Should we examine those areas and make those decisions less automatically?
- per Buddhism, probably. For intellectual types the how is often somatic skill training.
- 2. How do we determine whether we are operating in the right paradigm?
- a. What are paradigms? Are they useful to think about?
- a paradigm is a collection of heuristics that play well together. ie they chain into each other easily by taking each others outputs as inputs.
- b. If we were using the wrong paradigm, how would we know? How could we change it?
- Observing the outcomes of the people following different stances. I think Opening the Heart of Compassion is an excellent resource here, as is The Five Personality Patterns. Despite woo.
- c. How do we learn new paradigms well enough to judge them at all?
- Ask ourselves the question 'do I want that person's life?' when evaluating strategies.
- 3. How do we determine what the possible hypotheses are?
- a. Are we unreasonably bad at generating new hypotheses once we have one, due to confirmation bias? How do we solve this?
- b. Are there surprising techniques that can help us with this problem?
- Creativity techniques around prioritizing quantity over quality works reliably.
- 4. Which of the possible hypotheses is true?
- a. How do we make accurate predictions?
- By creating the conditions for the outcomes we want. Too many degrees of freedom and hidden variables when trying to predict useful things totally outside our control. Collecting better outside view search heuristics for things we're forced to make predictions for that we can't control.
- b. How do we calibrate our probabilities?
- Practice feeling the somatic difference between 60 and 70% confidence, increase granularity with time.
- 5. How do we balance our explicit reasoning vs. that of other people and society?
- a. Inside vs. outside view?
- Always generate an inside view the best way you know how first. Then when you run an outside view and encounter differences, inquire into the generators of those differences. This is free calibration data any time you're about to search for something.
- b. How do we identify experts? How much should we trust them?
- Judgmental bootstrapping and checking how granular the feedback the expert has received in forming their model. Granularity of feedback should be >= granularity of decision model.
- c. Does cultural evolution produce accurate beliefs? How willing should we be to break tradition?
- Try harder to learn from tradition than you have been on the margin. Current = noisy.
- d. How much should the replication crisis affect our trust in science?
- It should increase our need for consilience in order to be confident about anything. If a conclusion isn't reachable from radically different methods/domains it's fairly suspect.
- e. How well does good judgment travel across domains?
- Granularity of feedback loops applies again. When we see impressive transfer I think it's from a domain with good feedback loops to a domain with poor feedback loops where the impressive person helped clean up those poor feedback loops by applying the methods from the high feedback loop domain.
- 6. How do we go from accurate beliefs to accurate aliefs and effective action?
- a. Akrasia and procrastination
- b. Do different parts of the brain have different agendas? How can they all get on the same page?
- Integrate conflicting parts via some psychotherapy modality like Focusing or Core Transformation or IFS.
- 7. How do we create an internal environment conducive to getting these questions right?
- a. Do strong emotions help or hinder rationality?
- emotional 'strength' seems like the wrong frame.
- b. Do meditation and related practices help or hinder rationality?
- Help on cognitive reflection tests and the general skill of noticing which cognitive heuristics are currently being run. CRTs were among the least correlated with g according to The Rationality Quotient.
- c. Do psychedelic drugs help or hinder rationality?
- Ultra high openness without skepticism/disagreeableness/epistemic hygiene seems to result in loopy beliefs. They should be leveled up in tandem.
- 8. How do we create a community conducive to getting these questions right?
- a. Is having "a rationalist community" useful?
- b. How do strong communities arise and maintain themselves?
- c. Should a community be organically grown or carefully structured?
- d. How do we balance conflicting desires for an accepting community where everyone can bring their friends and have fun, vs. high-standards devotion to a serious mission?
- e. How do we prevent a rationalist community from becoming insular / echo chambery / cultish?
- f. ...without also admitting every homeopath who wants to convince us that "homeopathy is rational"?
- g. How do we balance the need for a strong community hub with the need for strong communities on the rim?
- h. Can these problems be solved by having many overlapping communities with slightly different standards?
- I think communities are typically about avoiding responsibility for making personal progress. People who choose to take a more central role in a community typically have emotional problems they are trying to work out via the dynamics in the community. The whole is typically much less than the sum of its parts.
- 9. How does this community maintain its existence in the face of outside pressure?
- The way in which outside pressure is experienced is worth investigating for what internal process it is resonating with.
comment by Raemon
· score: 8 (4 votes)
I think communities are typically about avoiding responsibility for making personal progress. People who choose to take a more central role in a community typically have emotional problems they are trying to work out via the dynamics in the community. The whole is typically much less than the sum of its parts.
Just wanted to note that this seemed like an interesting claim that seems relevant to my interests to take seriously.
comment by Raemon · score: 6 (3 votes)
Meta: I think I'd find it easier to process this if this post picked a subset of these questions rather than all at once (And then could devote more space to argue about individual answers to questions or clusters of questions)
comment by ioannes_shade · score: 1 (1 vote)
Funding people who are doing things differently from how you would do them is incredibly hard but necessary. EA should learn more Jessica Graham
What does "learn more Jessica Graham" mean?
change file size format when using stat
The formatting character %s makes stat print the file size in bytes:
# stat -c'%A %h %U %G %s %n' /bin/foo
-rw-r--r-- 1 root root 45112 /bin/foo
ls can be configured to print the byte size number with "thousand-separator", i.e. 45,112 instead of the usual 45112.
# BLOCK_SIZE="'1" ls -lA
-rw-r--r-- 1 root root 45,112 Nov 15 2014
Can I format the output of stat similarly, so that the file size has thousand-separator?
The reason why I am using stat in the first place is that I need output like ls, but without the time, hence -c'%A %h %U %G %s %n'.
Or is there some other way to print the ls-like output without the time?
On what operating system? Should we assume Linux?
Specify the date format, but leave it empty eg.
ls -lh --time-style="+"
Produces
-rwxrwxr-x 1 christian christian 8.5K a.out
drwxrwxr-x 2 christian christian 4.0K sock
-rw-rw-r-- 1 christian christian 183 t2.c
On a GNU system, you can use the ' flag of GNU printf:
$ stat -c"%A %h %U %G %'s %n" /bin/foo
-rw-r--r-- 1 root root 45,112 /bin/foo
This is documented in man 3 printf:
' For decimal conversion (i, d, u, f, F, g, G) the output is to be grouped
with thousands' grouping characters if the locale information indicates
any. Note that many versions of gcc(1) cannot parse this option and will
issue a warning. (SUSv2 did not include %'F, but SUSv3 added it.)
Alternatively, you can parse it in yourself:
$ stat --printf="%A\t%h\t%U\t%G\t%s\t%n\n" a | rev |
awk '{gsub(/.../,"&,",$2); print}' | rev
-rwxr-xr-x 2 terdon terdon 4,096 file
Please replace 4,5112 by 45,112 - thousands are separated, not ten thousands.
@Ned64 thanks. Obviously, I wrote that one manually :)
@terdon It was also wrong in the original question, which I edited (might still be under review).
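Another option, not from the thread: if GNU sed is available, thousands separators can be inserted without relying on printf locale grouping. This is a sketch; `\B` and `\>` are GNU sed extensions:

```shell
# Insert thousands separators into a number with GNU sed (locale-independent).
group() { sed ':a;s/\B[0-9]\{3\}\>/,&/;ta'; }

printf '%s\n' 45112 | group          # -> 45,112
# combine with stat, e.g.:  stat -c %s somefile | group
```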
Cassandra Data Model FAQs
What is a Cassandra Data Model?
A Cassandra database is distributed across machines operating in tandem. The outermost container for data in a ring cluster is the Keyspace; Cassandra assigns data within it to nodes. Each node contains a replica that takes over in case of failure.
Data modeling involves identifying items to be stored or entities and the relationships between them. Data modeling in Cassandra follows a query-driven approach, organizing data based on specific queries.
Cassandra’s database design is driven by fast read and write requirements, so the speed of data retrieval depends on schema design. Queries select data from tables; query patterns describe how users will access the data, and the schema defines how table data is arranged.
In contrast, relational databases write the queries that will be made only after normalizing data based on the relationships and tables designed. Unlike the query-driven Cassandra approach, data modeling in relational databases is table-driven. The database expresses relationships between tables in queries as table joins.
The Cassandra data model offers tunable consistency, or the ability for the client application to choose how consistent the requested data must be for any given read or write operation. Tuning consistency is a factor in latency, but it is not part of the formal data modeling process. Find Cassandra data model documentation here.
In Cassandra, a Keyspace has several basic attributes:
- Column families: Containers of rows collected and organized that represent the data’s structure. There is at least one column family in each keyspace and there may be many.
- Replication factor: The number of cluster machines that receive identical copies of data.
- Replica placement strategy: Analogous to a load balancing algorithm, this is simply the strategy for placement of replicas in the ring cluster. There are rack-aware strategies and datacenter-shared strategies.
Cassandra Primary Keys
Each Cassandra table must have a primary key, a set of columns. (This is why tables were called column families in past iterations of Cassandra.) The primary key shapes the table’s data structure and determines the uniqueness of a row.
There are two parts to the Cassandra primary key:
Partition key: The partition key is the required first column or set of columns in the primary key. The hashed partition key value determines where in the cluster the partition will reside.
Clustering key: Also called clustering columns, clustering keys are optional columns after the partition key. The clustering key determines the default sort order of rows within a partition.
What are Cassandra Data Model Design Best Practices?
The overall aim of Cassandra data modeling and analysis is to develop an organized, complete, high-performance Cassandra cluster. Wide column data modeling best practices for Cassandra-API-compatible databases like ScyllaDB are also applicable to Cassandra.
Cassandra Data Modeling: Query-Centered Design
Avoid trying to use Apache Cassandra like a relational database. Aim for query-centered design, and define how data tables will be accessed at the beginning of the data modeling process. Cassandra does not support derived tables or joins so denormalization is critical to Cassandra table design.
The first step in Advanced Cassandra data modeling and analysis is reviewing data access patterns and requirements. The Cassandra data model differs significantly from the standard RDBMS model in that the data model is based around the queries and not just around the domain entities.
Designing a Cassandra database for optimal storage is different than with relational databases, because it is important to optimize data distribution around the cluster. And sorting can be done only as specified in the primary key on the clustering columns, so in Cassandra or any similar NoSQL database, sorting is a design decision.
Selecting an Effective Partition Key
As a distributed database, Cassandra becomes more efficient when data is grouped together on nodes by partition for reads and writes. The fewer partitions that must be queried to answer a question, the faster the response.
By hashing a data attribute called partition key, Cassandra groups incoming data into discrete partitions and distributes them among cluster nodes. A successful Cassandra data model selects a partition key that:
- Evenly distributes data across cluster nodes; and
- Minimizes partitions accessed in a single query read
How to Design Cassandra Data Models to Meet Data Distribution Goals
A good Cassandra data model evenly distributes data across cluster nodes, limits partition size, and minimizes the number of partitions a query must read.
Avoid hot spots, where some nodes experience excessive load while others remain idle, and ensure even data distribution around the Cassandra cluster by selecting a partition key with high cardinality. Enhance performance and limit partition size by keeping partitions between 10 and 100 MB, with bounds on the possible key values. And because reading many partitions at once is costly, it is ideal for each query to read a single partition.
It is important to the development process to ensure that partition keys have a bounded range of values, distribute data evenly across cluster nodes, and adhere to any restrictive search conditions that affect design.
Cassandra Data Model Examples
Cassandra data modeling focuses on the queries.
Consider crime statistics as an example of how to model the Cassandra table schema to handle specific queries. One basic query (Q1) for crime statistics is a list of murder rates by state, including each state’s name and the recorded murder rate. Each state’s murder rate is uniquely identified in the table, and data can be pulled based on simple queries. A related query (Q2) searches for all states within a category—say, those with murder rates higher than a certain level.
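A sketch of table schemas that could serve Q1 and Q2 in CQL; the table and column names are illustrative assumptions, not from the source:

```sql
-- Q1: look up a state's murder rate; the state name is the partition key
CREATE TABLE murder_rates_by_state (
    state        text,
    murder_rate  decimal,
    PRIMARY KEY (state)
);

-- Q2: find states within a rate category; rows cluster by rate,
-- so a range predicate on murder_rate stays inside one partition
CREATE TABLE murder_rates_by_category (
    category     text,     -- partition key (e.g. a rate bucket)
    murder_rate  decimal,  -- clustering column: default sort order
    state        text,     -- clustering column: makes each row unique
    PRIMARY KEY (category, murder_rate, state)
);
```

Two tables for two queries is the denormalized, query-first pattern this section describes.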
It is essential to consider entities and their relationships during table design. All entities involved in a relationship that a query touches on must be in a single table, since queries are designed to access just one table most effectively. Tables may involve a single entity and its attributes or multiple entities and their attributes.
Cassandra queries can be performed more rapidly because the database uses a single table approach, in contrast to the relational database strategy which stores data in multiple tables and relates it between them using foreign keys.
Cassandra Data Modeling Resources
Three top resources for learning more about Cassandra data modeling include:
Wide Column Store NoSQL vs SQL Data Modeling video: NoSQL schemas are designed with very different goals in mind than SQL schemas. Where SQL normalizes data, NoSQL denormalizes. Where SQL joins ad-hoc, NoSQL pre-joins. And where SQL tries to push performance to the runtime, NoSQL bakes performance into the schema. Join us for an exploration of the core concepts of NoSQL schema design, using ScyllaDB as an example to demonstrate the tradeoffs and rationale.
Data Modeling and Application Development training course: This is an intermediate level course that explains basic and advanced data modeling techniques including information on workflow application, query analysis, denormalization and other NoSQL data modeling topics. After completing this course, you will be able to perform workflow application and query analysis, explain commonly used data types, understand collections and UDTs, and understand denormalization.
Data Modeling Best Practices: Migrating SQL Schemas for Wide Column NoSQL: To maximize the benefits of Cassandra or ScyllaDB, you must adapt the structure of your data. Data modeling for wide column databases should be query-driven based on your access patterns– a very different approach than normalization for SQL tables. In this video, you will learn how tools can help you migrate your existing SQL structures to accelerate your digital transformation and application modernization.
Cassandra Data Modeling Tools
There are several tools that can help manage and design a Cassandra data modeling framework and build queries based on Cassandra data modeling best practices.
Hackolade is a Cassandra data modeling tool that supports schema design for many NoSQL databases. It supports multiple data types including UDTs and collections and unique CQL concepts such as clustering columns and partition keys. It also lets you capture the database schema with a Chebotko diagram.
Kashlev Data Modeler is a Cassandra data modeling tool that automates the Cassandra data modeling principles described in the Cassandra data model documentation, including schema generation, logical, conceptual, and physical data modeling, and identifying access patterns. It also includes model design patterns.
Various CQL plugins for several Integrated Development Environments (IDEs) exist, such as Apache NetBeans and IntelliJ IDEA. Generally, these plugins offer query execution, schema management, and other features. Some Cassandra tools and IDEs do not natively support CQL, and instead use a JDBC/ODBC driver to access and interact with Cassandra. When choosing Cassandra data model tools, ensure they support CQL and reinforce best practices for Cassandra data modeling techniques as presented in the Cassandra data model documentation.
Does ScyllaDB Support Cassandra Data Modeling?
ScyllaDB is a modern high-performance NoSQL wide column store database that is API-compatible with Apache Cassandra. ScyllaDB supports core Cassandra models, with the benefit of deep architectural advancements that increase performance while reducing maintenance, overhead, and costs.
Cassandra was revolutionary when it first debuted in 2008, leading to its broad adoption. However, more than a decade later, many companies have recognized its underlying limitations and have now moved on. Leading companies such as Discord, Comcast, Fanatics, Expedia, Samsung, and Rakuten have replaced Cassandra with ScyllaDB. ScyllaDB delivers on the original vision of NoSQL — without the architectural downsides associated with Apache Cassandra (or the costs at volume of databases like Amazon DynamoDB). ScyllaDB is built with deep knowledge of the underlying Linux operating system and architectural advancements that enable consistently high performance at extreme scale.
Access white papers, benchmarks, and engineer perspectives on ScyllaDB vs Apache Cassandra.
PostSharp is a development tool for C# and VB.NET that enables developers to achieve more with less code. Write clean, stable, efficient and concise code that needs less development time, produces fewer bugs and is easier to maintain. Deliver better separation of concerns and reduced code scattering, tangling and coupling via partially or fully executable design patterns. PostSharp's Aspect Framework enables you to build and inject custom design patterns into your .NET apps.
Key Features and Capabilities
Features will vary depending on the edition that you purchase.
Core Aspect Framework
Advanced Aspect Framework
- Exception handling
- Method interception
- Method decorator
- Property and field interception
- Build-time validation
- Attribute multicasting.
Diagnostics Pattern Library
- Intercept events
- Introduce methods, events, properties, interfaces
- Add custom attributes, managed resources
- Apply aspects using multicast custom attributes
- Apply aspects using multicast XML file and dynamic provider
- Robustly compose aspects.
Visual Studio integration
- Detailed tracing - add logging to your codebase and keep it in sync, automatically, with support for NLog, Log4Net, and Enterprise Library.
Enforce good design
- Code editor enhancements - see which aspects are applied to the code you're editing thanks to code adornments and enhanced tooltips
- Aspect browser - see all aspects present in your solution and which declarations have been affected
- File and line number of error messages.
Model pattern library
- Extended Reflection API - get what System.Reflection does not give you: programmatically browse uses/used-by, parent-child, or member-type relationships at high speed using PostSharp's internal indexes.
- Syntax Tree Decompiler - decompile methods to Abstract Syntax Trees and perform finer analysis.
- Built-In Architecture Constraints - have a finer control over visibility of types and members.
- Custom Architecture Constraints - enforce your own design rules.
Threading pattern library
- INotifyPropertyChanged - Implement the right property change notifications at the right time, automatically.
- Code Contracts - Add precondition checking to your codebase using custom attributes.
More information on pattern libraries
PostSharp Model Pattern Library
- Thread dispatching - simplify dispatching execution back and forth between background and foreground threads
- Exclusive threading model - prohibit multiple threads from concurrently accessing an object. Throws an exception instead of allowing data corruption
- Reader/writer synchronised threading model - safely share objects between several threads and declare lock level semantically, using custom attributes
- Actor threading model - use Erlang-like actor-based multithreading in C# 5.0
- Deadlock detection - simplify the diagnosis of deadlocks in your project and never allow your application to freeze without an error message.
PostSharp Model Pattern Library provides automation for the most ubiquitous design patterns - the ones typically used in Model or View-Model layers of modern apps.
The implementation of INotifyPropertyChanged does not stop at the obvious. It also analyses chains of dependencies between properties, methods and fields in your source code, and understands that property getters can access several fields and call different methods, or even depend on properties of other objects.
You will never forget to raise a property change notification again!
PostSharp Diagnostics Pattern Library
We've all been there. Sometimes things go wrong in production and you can't reproduce the issue on your machine. At such moments, you wish you could have a detailed trace of your app execution, including arguments and return values. But who has time to add detailed logging to thousands of methods?
This is exactly why PostSharp created the Diagnostics Pattern Library. With just a few clicks you can add tracing to your entire application or to selected parts of it, without a single line of code. PostSharp supports most popular back-ends, including NLog, Log4Net and Enterprise Library, and you can easily switch between them. And because PostSharp takes care of performance, runtime configurability and re-entrance, you can focus on business value.
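As a sketch of what such a tracing aspect looks like in code, based on PostSharp's public OnMethodBoundaryAspect API (the Console logger here stands in for NLog/Log4Net, and the namespace pattern is a placeholder):

```csharp
using System;
using PostSharp.Aspects;

[Serializable]
public class TraceAspect : OnMethodBoundaryAspect
{
    // Runs before the decorated method's body executes.
    public override void OnEntry(MethodExecutionArgs args)
    {
        Console.WriteLine($"Entering {args.Method.Name}");
    }

    // Runs after the method returns normally.
    public override void OnExit(MethodExecutionArgs args)
    {
        Console.WriteLine($"Leaving {args.Method.Name}");
    }
}

// Applied per method with [TraceAspect], or multicast over many
// methods at once, e.g.:
// [assembly: TraceAspect(AttributeTargetTypes = "MyApp.Services.*")]
```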
PostSharp Threading Pattern Library
During the early days of computing, programmers were so busy managing memory and CPU registers they could barely focus on business features. Generations of compilers raised the level of abstraction so well that, today, memory management is no longer a preoccupation. Now, you can repeat this success story with multithreading.
Instead of using locks and other low-level synchronization mechanisms, you can work at a more conceptual level, follow design patterns that are known to work, and rely on the compiler to generate low-level code and validate design rules. There's no need to switch to another language. With the Threading Pattern Library, you can follow multithreading best practices in C# or Visual Basic apps.
Dependent on the edition - please see edition tab for more information.
- Unlimited number of build servers
- Perpetual license
- Includes 1 year of support and updates
- Code Contracts
- Aspect Framework
- Architecture Framework
- Dependency Property
- Aggregatable (parent/child/visitor)
- Thread Affine
- Reader-writer synchronised
- Deadlock detection
- Thread dispatching
- Custom License Agreement
- License Server
- Source Code Blueprint Subscription
I frequently get asked by non-technical solo founders if I know any potential hacker cofounders they should talk to. These people give a passionate pitch for the idea and a long list of all the hustling they've done, customers they've spoken to, models they've built, provisional patents they've filed, etc. Most of the time, they are thoughtful and hardworking. But they've often been searching for their technical cofounder for many months, and things have stalled during that process.
When people like this say "I'll do whatever it takes to make this business successful" (which they almost always say), I say something like "Why not learn to hack? Although it takes many, many years to become a great hacker, you can learn to be good enough to build your site or app in a few months. And even if you're not going to build the next version, if you're going to run a software company, it seems like a good idea to know a little bit about it."
Usually the response is something like "That wouldn't be the best use of my time", "I don't like it", or "I don't have that kind of brain". (Earlier today it was "You don't understand, I'm the idea guy. If I'm hacking, who will be talking to investors?", which is what prompted this post.) But every once in awhile people think about it and decide to learn to hack, and it usually works out.
They’re often surprised how easy it is. Many hackers love to help people who are just starting. There are tutorials for pretty much everything and great libraries and frameworks.
As an important aside, if you try to learn on your own, it can be really hard. You’ll hit some weird ruby error and give up. It’s important to have someone—a friend, a teacher at a coding bootcamp, etc.—who can get you through these frustrating blocks.
When hackers have to learn business stuff for their startups, they are willing to do it. Business people should do the same with code. If you're not willing to, you should remember that there are far greater challenges coming in the course of a startup than learning how to code. You should also remember that you can probably learn to code in less time than it will take to find the right cofounder.
Speaking of cofounders, a word of warning: meeting a stranger for the express purpose of becoming cofounders hardly ever works. You want someone you've known for a while and have already worked with. This is another good reason to learn to hack yourself instead of bringing on a cofounder.
You can build the first version of your product, and even if it's terrible (we had a non-technical founder in YC that learned to hack with Codecademy and was still able to learn enough to build a prototype), you'll actually be able to get real user feedback, iterate on something other than mockups, and perhaps impress a great hacker enough to join you. Although you may never win a Turing Award, if you're smart and determined, you can certainly get good enough to build a meaningful version 1.
If you're a solo founder and you can't hack, learn.
M: 24 years of Windows package design - zeynel1
http://www.techradar.com/news/software/operating-systems/24-years-of-windows-package-design-643034
R: kajecounterhack
Wow these boxes make me nostalgic... I don't remember anything before 3.1 but
when I was about four that was the first time I used a computer. Guess what
for? Tetris. That's right.
It was the best.
Windows 95 was the first time I used the internet.
Windows 98 was when I spilled milk over my dad's expensive block that he
called a "lap top" and when I downloaded my first virus.
Windows ME...the first time a computer made me cry? jk I was pretty happy
upgrading to XP though
The rest aren't so nostalgic. It was pretty much XP until I discovered Linux.
R: spyrosk
I don't know why (except maybe for the first ones) but this is what I thought
when I saw them together:
IBM (v1 to v3)
Clipart collection (v3.1)
Active desktop has stopped working (95 to ME)
Apple (windows 7)
R: sjs
First they're too boring, then too busy, then too boring again? This guy is
never happy! Personally I think of all the pre-XP boxes Win2k's is by far the
nicest. They finally got rid of all the extra crap he complained about and
then he derides it for being boring. Yeesh.
R: skoob
Wow, 24 years of ugly. The only one that's even half decent is the 3.1 box --
despite the ugly "New!" badge and the ridiculously condensed font...
It's hard to tell from those small images, but what are those strange boils on
the Windows 7 logo supposed to be?
R: aaronbrethorst
Leaves, flowers, lens flares, a pine tree (I think). Snatching defeat from the
jaws of victory again, in any case.
R: allenbrunson
for me, windows nt workstation 4.0 was where the product peaked. we finally
got a consumer version of a real 32-bit os, rather than a flimsy shell sitting
on top of dos. its gui was attractive, by microsoft standards. it wasn't yet
loaded down with crap.
windows 2000 was the one that made me jump ship. to me it looked like "baby's
first computer" or something.
after that i spent several years in the wilderness, trying various
alternatives. linux didn't appeal to me for very long. i eventually settled on
beos, including a short stint working for be. when they went out of business,
i switched to macs, and here i still am.
Meta Full Stack Developer Resume Examples
Published 8 min read
This article will provide an overview of how to craft a resume for Meta as a Full Stack Developer. It will discuss the key components of a successful resume, such as highlighting relevant experience, education, and technical skills, as well as providing tips on how to focus on achievements and showcase your unique abilities. Additionally, it will provide examples of how to effectively communicate these points in order to make your resume stand out.
Meta Full Stack Developer Resume Created Using Our Resume Builder
Meta Full Stack Developer Resume Example
Janean Osner, Full Stack Developer
Grand Rapids, MI
Lead Full Stack Developer at Quicken Loans, MI
Jan 2023 - Present
- Developed a complex web application that significantly reduced the time required to process loan applications by 35%, resulting in a 10% increase in customer satisfaction.
- Implemented a new API that enabled Quicken Loans to securely process customer data in half the time, resulting in a 15% decrease in processing time.
- Led a team of 5 developers in creating a mobile application that improved the customer experience by providing real-time updates on loan status, resulting in an 8% increase in user engagement.
Senior Full Stack Developer at Ford Motor Company, MI
Aug 2020 - Nov 2022
- Successfully developed a customer-facing web application for Ford Motor Company’s customer service portal, resulting in a 25% increase in customer satisfaction ratings and a 15% reduction in customer service inquiries.
- Developed a new vehicle maintenance system that improved dealer service efficiency by 20%, resulting in an estimated $1.2 million in cost savings annually.
- Implemented a new inventory tracking system that allowed the company to reduce its inventory costs by 10%, resulting in an estimated $500,000 in annual savings.
Bachelor of Science in Computer Science and Software Engineering at Michigan State University, East Lansing, MI
Aug 2015 - May 2020
Relevant Coursework: Algorithms and Data Structures, Operating Systems, Computer Architecture, Computer Networks, Software Engineering, Database Management Systems.
- Database (SQL, MongoDB)
- REST API
- Certified Full Stack Developer (CFSD)
- Certified Professional in Web Technologies and Applications (CPWTA)
Tips for Writing a Better Meta Full Stack Developer Resume
2. Highlight Your Achievements: Employers want to know what you have achieved in the past and how it can benefit their business. Showcase any projects or applications you have built with a link to them if possible.
3. Quantify Your Experience: Whenever possible, add numbers to your experience section to make it more powerful. For example, if you wrote code for an e-commerce website say how many lines of code or how much time it took you to complete the project.
4. Include Other Skills: Show that you have other technical skills outside of full stack development like database management or DevOps experience. This will prove that you’re a well-rounded developer who is capable of taking on multiple roles in a project team.
5. Customize Your Resume: Don’t use generic resume templates; customize your resume for each job application so that it demonstrates why you’re the perfect candidate for the job at hand.
Related: Full Stack Developer Resume Examples
Key Skills Hiring Managers Look for on Meta Full Stack Developer Resumes
Using keywords from the job description when applying for a Full Stack Developer opportunity at Meta is essential. This is because Meta uses Applicant Tracking Systems (ATS) to filter and rank applicants. ATS are programmed to look for specific keywords in resumes and cover letters that match the job requirements, so including them will increase the chances of your application being noticed. Additionally, incorporating the right keywords in your resume and cover letter will help you stand out from other candidates and show that you have the skills necessary for the position.
When applying for a full stack developer position at Meta, you may encounter common skills and key terms such as:
| Key Skills and Proficiencies | |
|---|---|
| Nginx | AWS/Azure/Google Cloud Platform |
| Responsive Design | Object-Oriented Programming (OOP) |
| Agile Methodology | Security and Authentication |
Common Action Verbs for Meta Full Stack Developer Resumes
Finding varied action verbs to use on a resume can be difficult. It's important to use different verbs that accurately reflect your experience, as this will help your resume stand out from the competition and make you appear more qualified. By using varied action verbs, you will also be able to create a Meta Full Stack Developer Resume that effectively highlights all of the skills and accomplishments that make you an ideal candidate for the job.
Gain a competitive advantage and make your resume stand out with this list of powerful action verbs. Use them to highlight your accomplishments and increase your chances of landing that next interview:
Related: What does a Full Stack Developer do?
Q: How long shall the RSA key be in order to be secure against practical attacks?
A: Impractically large. This does not imply that RSA is unsafe against practical attacks; only that some of these attacks must be prevented by ways other than increasing the key size.
That's because key size is not a parameter with a major impact on the efficiency of many attacks against RSA, with the exception of factorization of the public modulus. For example, in (Simple) Power Analysis of RSA (without CRT), the secret exponent $d$ is directly recovered from observation of the power trace of one execution of the private-key function, thus any increase in public modulus length making attack impractical also makes usage impractical.
For less severe signal leakage (like in the sound pattern attack linked to in the question), the attack may require a number of executions of the private-key function growing with the key size (all things being equal), perhaps linearly or per some other slow-growing function; but again increasing the key size will make the system impractical before it gets safe.
Q: Is there any new advancement in factorization?
A: Publicly, no experimental progress. The state of the academic art remains close to where it was when RSA-768 was factored in late 2009. However, there have been claims of a breakthrough by the NSA which could be explained by factoring 1024-bit RSA moduli; quoting that Wired article quoting an unnamed former senior intelligence official
“They made a big breakthrough” (..) “They were thinking that this computing breakthrough was going to give them the ability to crack current public encryption.”
One of the few notable theoretical progress that I'm aware of is: Daniel J. Bernstein and Tanja Lange, Batch NFS, in proceedings of SAC 2014, free on ePrint in revised version.
Independently of NSA breakthrough claims, because both technical and theoretical progress has not stopped, it would be prudent to assume that 1024-bit RSA is vulnerable to factorization these days, if even a small fraction of the funding of the NSA is poured into trying that; see this answer.
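As a rough illustration of why modulus size matters against factoring specifically, one can plug modulus sizes into the heuristic GNFS complexity $L_n[1/3,\,(64/9)^{1/3}]$. This is a back-of-the-envelope sketch (constant factors and memory costs ignored), not part of the answer above:

```python
import math

def gnfs_bits(modulus_bits):
    """Heuristic GNFS work factor, in bits, for a modulus of the given size.

    Uses L_n[1/3] = exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3));
    constants are ignored, so treat results as rough orders of magnitude.
    """
    ln_n = modulus_bits * math.log(2)
    exponent = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return exponent / math.log(2)  # convert ln(work) to log2(work)

for k in (768, 1024, 2048, 3072):
    print(k, round(gnfs_bits(k)))
```

For 1024-, 2048-, and 3072-bit moduli this gives on the order of 2^87, 2^117, and 2^139 operations, which is consistent with the recommendation gap between 2048 and 3072 bits.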
Q: What is your recommendation?
A: With respect to public modulus size: French authorities recently vetted 2048-bit RSA as good enough for civilian use up to at least year 2030 (rather than year 2020 formerly), and ask for 3072-bit RSA afterwards. When there is incentive (like: efficiency, compatibility, availability, or cost) not to use something wider, or/and good reason to fear side-channel or fault attacks (e.g. in a Smart Card), I find 2048-bit a reasonable choice for many systems.
A: With respect to choice of implementation: know precisely what you trust and why, or delegate that to competent parties that you have reasons to trust. That's what Common Criteria security certification aims at. Microsoft RSA Implementation is not something well defined enough that advice can be given about it.
|
OPCFW_CODE
|
In our previous installment on this tutorial, we demonstrated how to load data into R from existing sources like an Excel spreadsheet by saving it as a Comma Separated Value file and using R’s utility functions to load and check the data. The same approach also works fairly well when you’re trying to get a file of data from your IT department; most business intelligence and querying tools can dump out their results as a CSV and these files can be easily emailed between departments. I used this approach for years before seeking better methods.
However, there are certain types of projects where the “download and save to CSV trick” gets either tedious or impractical.
So what if we could… perhaps… connect R directly to the corporate database in question and pull the data directly?
It turns out this sort of process is indeed "a thing" and, depending on how liberal your corporate system administrators are, can significantly simplify your work. It's also fairly simple to implement using R – for this next example, we will share how you can use two technologies called ODBC and SQL to pull data directly into R from a larger corporate system.
What is ODBC?
ODBC is a common protocol that allows computer software to talk to many common databases (Oracle, SQL Server, etc.). SQL stands for Structured Query Language and is the primary language used to extract information from relational databases; it's fairly simple (good intro here, including a code playground) and a "good thing" for analysts to know.
Introducing The R ODBC Package
In this example, we will use an R library called RODBC to connect to a database and run a simple query. The results of this query will be pulled back into R and available for us to use as a standard data frame.
Incidentally, your local installation of R likely doesn't already have the RODBC package installed, so we will need to download and install it. This is quite simple:
# r odbc - odbc package installation
install.packages("RODBC")
The system will take it from there (it may prompt you to pick a download site from a list); congratulations, you’ve likely just installed your first R package. These little libraries can handle a wide range of tasks and are one of the best things about the R community. No matter what you’re trying to do, someone else has probably already attempted a similar problem and published a helpful library for the technique.
Connecting to a Database using R ODBC
Once installed, we will load the library into the R environment and connect it to a database.
# r odbc example - loading library
library(RODBC)
# r odbc connect - odbcconnect
myconn <- odbcConnect('my corporate datasource')
Rodbc SqlQuery – First Query Example
Boom, we’re connected. And now for a little query….
# r sql query
customers <- sqlQuery(myconn, "select cust_id as cust,
count(sales) as num_sales,
sum(sales) as tot_sales
from invoices
group by cust_id")
That particularly epic example of an SQL query just aggregated sales data from my invoice table by customer, returning the result as a neat little table of customers and sale amounts which we can use for future analysis. For example, we could examine the distribution of sales by customer size to understand if our business was driven by a handful of accounts or broadly distributed across a base of small customers.
Broader Use of R ODBC
SQL is an incredibly flexible language – you can join different tables together, summarize results, and filter down to the items you want. This approach also has the advantage of allowing you to run heavy processing on a remote machine (usually more powerful than your own computer) and only bring back the summarized results to your own machine.
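The join/filter/aggregate pattern described above is easy to try locally before pointing it at a corporate source. As a hedged illustration (using Python's built-in sqlite3 in place of an ODBC data source; the `customers` and `invoices` table names and columns are made up for the example), the following sketch joins an invoice table to a customer table, filters, and summarizes sales per customer:

```python
import sqlite3

# An in-memory database stands in for the corporate data source.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (cust_id INTEGER, region TEXT);
    CREATE TABLE invoices  (cust_id INTEGER, sales REAL);
    INSERT INTO customers VALUES (1, 'east'), (2, 'west');
    INSERT INTO invoices  VALUES (1, 100.0), (1, 50.0), (2, 75.0);
""")

# Join, filter, and aggregate -- the same shape of query you would
# send through sqlQuery() in RODBC.
rows = conn.execute("""
    SELECT c.cust_id, c.region,
           COUNT(i.sales) AS num_sales,
           SUM(i.sales)   AS tot_sales
    FROM customers c
    JOIN invoices  i ON i.cust_id = c.cust_id
    WHERE c.region = 'east'
    GROUP BY c.cust_id, c.region
""").fetchall()

print(rows)  # one summarized row per matching customer
```

The heavy lifting (join and aggregation) happens inside the database engine; only the summarized rows come back to your session, which is exactly the benefit described above.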
|
OPCFW_CODE
|
Open source Gamecube bootrom
!!! For educational purposes only !!!
- Reverse engineering of the retail GameCube bootrom (USA / NTSC version used as reference)
- Write own bootrom, based on IPL reversing
- Use bootrom in gamecube emulators to have fun :-)
Toolchain (mostly official):
- CodeWarrior IDE for Nintendo
- Dolphin SDK
- Dolwin debugger
- Bootstrap 1 stage disassembly DONE
- Way to compile it back in binary file DONE
- Bootrom fonts investigated DONE
- IPL listing DONE
- Identify all library calls
- IPL intro
- IPL menus
- Utility to merge all pieces in single binary ROM file
Gamecube Bootrom details:
The same chip also contains non-volatile memory (SRAM) and a real-time clock (RTC).
Bootrom size is 2 MB.
The first logical part of the bootrom (the reset vector) is called Bootstrap 1 (BS1). This small procedure is written in assembly and starts at physical address 0xfff00100. It prepares the GameCube hardware, checks memory, initializes virtual addressing, and loads the second logical part, known as Bootstrap 2 (BS2) or IPL (Initial Program Loader).
The IPL is written in C. It is compiled as a DOL executable, using an early version of the Dolphin SDK as the system API.
The entry point of the start routine is placed at location 0x81300000 (virtual address) by the link script.
Almost 50% of IPL binary payload is occupied by Dolphin SDK library calls.
Important Note: the bootrom itself is encrypted. Decryption is done by the MX chip during block reads of bootrom data. In the early stage (BS1), decryption happens on-the-fly as the Gekko loads 32-byte bursts into the instruction cache. Later it is decrypted by EXI DMA during the BS2 copy. It is very important that the scrambler does not go out of sync, otherwise garbage appears as output. The symmetric (XOR-based) encryption algorithm was reversed by segher : Descrambler
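To illustrate why a desynchronized scrambler produces garbage, here is a toy symmetric XOR-stream cipher. This is only a sketch of the general XOR principle; the keystream generator below is an arbitrary stand-in and is NOT segher's actual GameCube descrambler algorithm:

```python
def keystream(seed, length):
    """Deterministic pseudo-random byte stream (arbitrary toy generator)."""
    state = seed & 0xFFFFFFFF
    out = bytearray()
    for _ in range(length):
        # Simple LCG step -- not the real GameCube scrambler.
        state = (state * 1103515245 + 12345) & 0xFFFFFFFF
        out.append((state >> 16) & 0xFF)
    return bytes(out)

def xor_crypt(data, seed):
    """XOR-based symmetric scrambling: the same call encrypts and decrypts."""
    ks = keystream(seed, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

plaintext = b"BS2 bootstrap code"
scrambled = xor_crypt(plaintext, seed=0xCAFE)
# Decrypting with the synchronized keystream recovers the data...
assert xor_crypt(scrambled, seed=0xCAFE) == plaintext
# ...but a desynchronized stream (wrong state) yields garbage.
assert xor_crypt(scrambled, seed=0xBEEF) != plaintext
```

The same symmetry is what lets BS1 read scrambled bootrom data transparently: XOR with the keystream once to scramble, XOR with the identical keystream again to recover the original bytes.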
The bootrom also contains two sets of raster fonts: one for the ANSI charset and another for SJIS:
These fonts are occasionally used by some games. Font data is not encrypted (BS1 disables the bootrom scrambler after BS2 has been copied to RAM, so subsequent font reads are effectively done over a 0x00000000 XOR stream. The same applies to the first 0x100 bytes of the bootrom, which hold the copyright strings).
When the IPL starts, the following sequence appears:
First is the rotating cube intro:
Next, the IPL menu appears, looking like a rotating glass cube:
Each side of the cube represents a different menu (memory card manager, calendar settings etc.)
Deep inside the cube float small cubes, appearing in different patterns.
YouTube video :
Credits: Credits go to GameCube scene members and my good friends groepaz and tmbinc :=)
Thanks to Nintendo and ArtX team for such sexy console ^^
|
OPCFW_CODE
|
Self-introduction of a new vimpulse user
ch.lange at jacobs-university.de
Sun Oct 31 19:20:27 CET 2010
Hi Vegard, hi all,
thanks a lot for your quick help!
Sunday 2010-10-31 18:38 Vegard Øye:
> We no longer use Trac, so the mailing list is indeed the preferred
> medium for bug reports. By the way, which Emacs version are you using?
GNU Emacs 23.2 on Linux (Gtk/X.org interface)
> Note that Vimpulse is now hosted at Gitorious, not Assembla.
Sorry, it was easy to misunderstand that from my comments, but I was
aware of that. The only reason why I referred to Assembla was the
Trac, for which I had not noticed a replacement.
> To get the latest Git commits, use "git pull"; to see a list of
> updates, use "git log". (If something stinks, you can always revert
> to an earlier commit with "git checkout <SHA-1>" -- e.g.,
> "git checkout a007716". To go back to the newest commit, use
> "git checkout master".)
But thanks for these git hints – all that is good to know.
> PS: Your e-mails are not wrapped properly. Could you please
> toggle autowrapping in KMail or use "M-q" in Emacs? Thanks!
I should have realized that reading non-wrapped mails is particularly
inconvenient in the online list archive. And of course I should
respect the preferences of the recipients of my mails, i.e. you. So I
hope I won't forget wrapping mails to this list any more.
Other than that, and I'm aware that on a list related to text editors,
I am opening a can of worms, my opinion is: Most contemporary mail
clients wrap mails with long lines to the width of the window. 70
characters on a widescreen is IMHO not always user-friendly. I, as the
reader of a mail, would like to have full control over the display of
a mail, and when the mail contains long lines, I have full control in
that I can resize the window as preferred. I know few mail clients
that give me that much control if the lines of a mail are wrapped,
i.e. mail clients that would re-wrap such mails from, say, 70 to 90
columns, if my window is 90 columns wide. (Maybe mutt can do that, I
don't recall.) For that reason I intentionally switched to non-wrapped
lines in e-mails some time ago.
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype duke4701
|
OPCFW_CODE
|
// Copyright (c) Microsoft. All rights reserved.
// Licensed under the MIT license. See LICENSE file in the project root for full license information.
using System.Buffers;
namespace System.Collections.Sequences
{
public static class SpanSequenceExtensions
{
[Obsolete("we will use multiple Memory<T> instances to represent sequences of buffers")]
public static Span<T> Flatten<T>(this ISpanSequence<T> sequence)
{
var position = Position.First;
Span<T> firstSpan;
// if sequence length == 0
if (!sequence.TryGet(ref position, out firstSpan, advance: true)) {
return Span<T>.Empty;
}
Span<T> secondSpan;
// if sequence length == 1
if (!sequence.TryGet(ref position, out secondSpan, advance: true)) {
return firstSpan;
}
// allocate and copy
Span<T> result;
// if we know the total size of the sequence
if (sequence.TotalLength != null) {
result = new T[sequence.TotalLength.Value];
result.Set(firstSpan);
result.Slice(firstSpan.Length).Set(secondSpan);
int copied = firstSpan.Length + secondSpan.Length;
Span<T> nextSpan;
while (sequence.TryGet(ref position, out nextSpan, advance: true)) {
nextSpan.CopyTo(result.Slice(copied));
copied += nextSpan.Length;
}
return result;
}
else {
var capacity = (firstSpan.Length + secondSpan.Length) * 2;
var resizableArray = new ResizableArray<T>(capacity);
firstSpan.CopyTo(ref resizableArray);
secondSpan.CopyTo(ref resizableArray);
Span<T> nextSpan;
int copied = firstSpan.Length + secondSpan.Length;
while (sequence.TryGet(ref position, out nextSpan, advance: true)) {
while (copied + nextSpan.Length > resizableArray.Capacity) {
var newLength = resizableArray.Capacity * 2;
resizableArray.Resize(newLength);
}
nextSpan.CopyTo(ref resizableArray);
copied += nextSpan.Length;
}
return resizableArray._array.Slice(0, copied);
}
}
[Obsolete("we will use multiple Memory<T> instances to represent sequences of buffers")]
public static Span<T> First<T>(this ISpanSequence<T> sequence)
{
Span<T> result;
var first = Position.First;
if(!sequence.TryGet(ref first, out result)) {
ThrowHelper.ThrowInvalidOperationException();
}
return result;
}
/// <summary>
/// Copies the items of the sequence into the destination span.
/// </summary>
/// <typeparam name="T">The element type of the sequence.</typeparam>
/// <param name="sequence">The sequence to copy items from.</param>
/// <param name="destination">The parameter is updated if it's longer than the items in the sequence</param>
/// <param name="skip">number of items from the begining of sequence to skip, i.e. not copy.</param>
/// <returns>True if all items did fit in the destination, false otherwise.</returns>
/// <remarks>If the destination is too short, up to destination.Length items are copied in, even if the function returns false.</remarks>
[Obsolete("we will use multiple Memory<T> instances to represent sequences of buffers")]
public static bool TryCopyTo<T>(this ISpanSequence<T> sequence, ref Span<T> destination, int skip = 0)
{
var position = Position.First;
Span<T> next;
int copied = 0;
while(sequence.TryGet(ref position, out next, advance : true)) {
var free = destination.Slice(copied);
if(skip >= next.Length) {
skip -= next.Length;
continue;
}
if(skip > 0) {
next = next.Slice(skip);
skip = 0; // only skip once; copy subsequent spans whole
}
if(free.Length >= next.Length) {
free.Set(next);
copied += next.Length;
}
else {
free.Set(next.Slice(0, free.Length));
return false;
}
}
destination = destination.Slice(0, copied);
return true;
}
}
}
|
STACK_EDU
|
The sinkhole sings a stirring song,
The avalanche avails,
Lava lurks in lumps along
The Pyromancers' trails.
Splinter - Burning Lands, the Fire Splinter
Set - Reward Edition
Class - Epic Ranged Blaster
Habitat - Pyromancer is one of the only callings in the Burning Lands that requires years of travel throughout the Splinters. They must live without possessions as vagabonds, at the charity of others. They are also called in this time to perform lava purifications, a complex ritual of the Torch religion in the furthest reaches of the Planet. After this period of ten years, called the Wandering, when they finally return to the Burning Lands, a Pyromancer is given their crowning orb, the pinnacle of Pyromancer growth and success.
Size - Pyromancers almost always come from high class Efreet, for it is considered a high privilege to become a protector of the flame. Efreet, or fire elves, are generally slightly smaller than Humans, with a tougher skin of red and long, pointed ears. When a Pyromancer returns from the Wandering, he is always stronger than when he left.
Lifespan - The reason that every rich Efreet wants their child to be chosen as a Pyromancer is simple: Pyromancers live longer. Every Pyromancer throughout history has been blessed with extraordinarily long life, often 3 to 4 times the average Efreet lifespan. This seems to be some kind of side effect to lava purification, a skill which only 1 in 100 Efreet youngsters even have the fortitude and intelligence to learn.
Weapon - The staff of the Pyromancer speaks to the core of the Planet like nothing else in the world. When struck upon the ground, a message is delivered faster than light, and if the proper incantation has been uttered, the lava will come. The staves are called poza rods, and only Pyromancers have the skills necessary to hold them. The worth of a Pyromancer is determined by how many orbs he can magnetically balance around his poza rod. Twelve on the horizon and the crowning orb above will one day bring the ultimate lava purification that changes the world forever.
Diet - During the time of the Wandering, each Pyromancer develops unique tastes, with one thing in common: They are not picky. The religion requires them to accept no money for work, only food, and they must often work for scraps. Since their journey takes them all over the Splinterlands, they must accept many different types of food as commonplace, and they quickly learn to not complain.
Allies - The Pyromancers of the Burning Lands work for the Torch, but in an entirely different way than its many military bodies. They represent an ancient religion with strong roots in the Planet. This religion has been around since before the Torch, even before the Thousand Year War. Pyromancers travel the world as missionary zealots, spreading the good news of fire and awakening the lava flows that have slept for thousands of years. Their actions align with the interests of the Ferexia Torch.
Enemies - Of all the people they have met in their travels, the Pyromancers have found the people of Lyveria in the Earth Splinter the least hospitable. They refuse to let the Pyromancers into their walled city, claiming they are afraid the fire wizards will burn it down. Ironically, they have no problems with the Pyromancers settling in the forest, which is much more flammable than the walled kingdom.
Pastimes - Pyromancers are usually contemplative types who enjoy long bits of time to themselves. They are always ready to jump into combat with ferocity, but most of them would prefer to spend their days strolling through nature and admiring the beauty of the Planet. Many Pyromancers continue wandering the Splinters for many years after their required ten. When asked why they didn’t come home earlier they always say something like “there were too many places to see.”
The True Story of Splinterlands
Once upon a time your game purchase meant something. You could go to the store and purchase a game, after which you would simply own that game. You could play as often as you'd like, because it was your game. As the game companies were one by one swallowed up by larger and larger game companies, a terrible thing happened to the gaming world. While the games themselves were always making improvements, the players were constantly throwing more and more of their hard-earned money into a corporate black hole from which they reaped no rewards.
How did the corporations convince the players to pay this money? Loot. They showered the players with in-game riches designed to create a sense of accomplishment, but with no real value. Not only are these in-game "assets" entirely subject to the whims of corporate overlords who rarely (if ever) have the player's interests at heart, but they never really belong to the player at all. They belong exclusively to the game for which they were created. If a player wants to quit playing the game, they must also abandon their in-game treasures.
Not anymore. In the last couple years, Play-to-Earn has rushed to the scene. Blockchains are giving power and ownership back to the players, and it's about time. In this incredible and rapidly expanding world of technology it seems like such an outdated argument to be making, but the players (not the company) should own their gaming rewards. Blockchain, non-fungible tokens and games like Splinterlands are now making that possible.
In 2021, Splinterlands has more exciting things in store than ever before, including leaps and bounds of growth in cooperative guild play, boss fights, a massive Land Expansion and more! Come join our incredible community and experience the power of P2E for yourself! Tell them Splinterbard sent you!
|
OPCFW_CODE
|
import {
ApplyForce,
BatchedRenderer, Bezier,
ConstantColor,
ConstantValue,
ParticleSystem, PiecewiseBezier,
RenderMode, SizeOverLife,
SphereEmitter
} from "../src";
import {MeshBasicMaterial, NormalBlending, Scene, Texture, Vector3, Vector4} from "three";
describe("BatchedRenderer", () => {
test("update", () => {
const scene = new Scene();
const renderer = new BatchedRenderer();
const texture = new Texture();
const glowBeam = new ParticleSystem({
duration: 1,
looping: true,
startLife: new ConstantValue(2.0),
startSpeed: new ConstantValue(0),
startSize: new ConstantValue(3.5),
startColor: new ConstantColor(new Vector4(1, 0.1509503, 0.07352942, .5)),
rendererEmitterSettings: {startLength: new ConstantValue(40)},
worldSpace: true,
emissionOverTime: new ConstantValue(100),
shape: new SphereEmitter({
radius: .0001,
thickness: 1,
arc: Math.PI * 2,
}),
material: new MeshBasicMaterial({map: texture, blending: NormalBlending}),
startTileIndex: new ConstantValue(1),
uTileCount: 10,
vTileCount: 10,
renderOrder: 2,
renderMode: RenderMode.Trail
});
glowBeam.addBehavior(new SizeOverLife(new PiecewiseBezier([[new Bezier(1, 0.95, 0.75, 0), 0]])));
glowBeam.addBehavior(new ApplyForce(new Vector3(0, 1, 0), new ConstantValue(10)));
glowBeam.emitter.name = 'glowBeam';
renderer.addSystem(glowBeam);
scene.add(glowBeam.emitter);
scene.add(renderer);
renderer.update(1 / 60);
expect(glowBeam.particleNum).toBeGreaterThan(0);
const previousCount = glowBeam.particleNum;
scene.remove(glowBeam.emitter);
renderer.update(1);
expect(glowBeam.particleNum).toEqual(previousCount);
expect(renderer.batches[0].systems.size).toEqual(0);
});
});
|
STACK_EDU
|
As stated in the last blog all mathematical symbols possess both quantitative and qualitative aspects.
Though in isolation it is indeed possible to seek to interpret numbers with respect merely to their quantitative properties, when using interdependent frames (implying a dynamic interactive approach), numbers are necessarily quantitative and qualitative (and qualitative and quantitative) with respect to each other.
So applying this again to prime numbers, one can indeed attempt to interpret the individual primes and their general distribution (in isolation) using a merely quantitative frame of reference.
However when we seek to understand the interdependence of specific primes and their general distribution, we must incorporate both quantitative and qualitative aspects of appreciation. And this poses insuperable difficulties for the conventional (Type 1) mathematical approach, which in formal terms is solely based on mere quantitative interpretation.
And the interdependence of primes, with respect to their individual identity and overall distribution, is clearly manifested in the relation of the non-trivial zeros on the one hand to the general prime number distribution (and the corresponding relationship as between the individual primes and the general distribution of the non-trivial zeros).
The non-trivial zeros themselves represent the unlimited possible solutions for s (where s represents a complex dimensional number of the form a + it). And as discussed in the last blog, the relationship of (base) numbers to their dimensional powers likewise is as quantitative to qualitative (and qualitative to quantitative).
Though this understanding is absolutely central to true appreciation of the nature of the Riemann Hypothesis, it completely eludes conventional (Type 1) analysis, which once again totally lacks, in formal terms, any distinctive qualitative aspect of interpretation.
As we have already seen, the non-trivial zeros can be used in an ingenious manner (after a couple of other small adjustments) to gradually correct any remaining deviations arising from Riemann's general function for the prediction of prime number frequency. So in principle, through using this approach we should be gradually able to zone in on the precise location of each individual prime, while ultimately correctly predicting the overall frequency of primes (up to a given number).
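The correction procedure referred to here can be stated concretely. A standard statement is the Riemann–von Mangoldt explicit formula for the Chebyshev function ψ(x) (which encodes the primes and their powers), valid for x > 1:

ψ(x) = x − Σ_ρ x^ρ/ρ − log 2π − (1/2)·log(1 − x⁻²)

where the sum runs over the non-trivial zeros ρ = a + it (taken in conjugate pairs, in order of increasing |t|). Each zero contributes an oscillating correction term, and only when all infinitely many zeros are summed do the discrete jumps at the prime powers emerge exactly.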
Now this is based on acceptance of the Riemann Hypothesis.
It is often stated in this manner: given the truth of the Riemann Hypothesis (i.e. that the real part of all these zeros s lies on the line Re(s) = 1/2), then in principle we can exactly predict the prime numbers (from their general frequency).
However much greater subtlety is required in this statement, which indeed is required to reveal the true nature of the Riemann Hypothesis.
As befits the proper distinction as between quantitative and qualitative, we need likewise to carefully distinguish as between (actual) finite and (potential) infinite meaning. By its very nature Conventional Mathematics inevitably reduces in any context (potential) infinite to (actual) finite notions of meaning!
So it is true as we progressively add in the corrections based on the non-trivial zeros that we move ever closer to the integer values of the primes.
However in actual terms this process can never be completed for no matter how many non-trivial zeros we seek to consider, an unlimited set of non-trivial zeros will remain. So there is an inherent uncertainty attached to this process whereby the successive approximation through determination of non-trivial zeros of the location of the primes is always based on an unlimited set of non-trivial zeros, which must remain indeterminate.
So in actual finite terms (which is the proper domain of quantitative interpretation), we can never exactly pinpoint the location of the primes, with an uncertainty thereby necessarily attached to their precise values.
However when we switch to a potential infinite context (which is the proper domain of qualitative meaning) we can indeed say that in potential terms, if the infinite set of non-trivial zeros is included, then we would indeed exactly obtain the discrete integer values of the primes. And in doing this our reconciliation of the primes with the non-trivial zeros would be complete.
However a purely potential state equally implies that no phenomenal identity can remain to the primes (in actual terms).
In other words the full reconciliation of the primes with the non-trivial zeros (both of which are mutually encoded in each other) points to an ineffable state with no phenomenal existence.
And as this process is based on the assumption that the Riemann Hypothesis is true, it thereby is pointing to this ineffable state.
So the Riemann Hypothesis is directly concerned with the ultimate reconciliation of quantitative and qualitative meaning (where finite and infinite can at last become identical).
So in this respect Hilbert was indeed correct. The implications of the zeros of the Zeta function in the context of Riemann's Hypothesis could not be more important.
The famous Buddhist heart sutra states this identity of finite and infinite in the following manner:
"Form is not other than Void;
Void is not other than Form"
The Riemann Hypothesis in fact is simply a restatement of this sutra related to the ultimate nature of mathematical meaning:
"The Quantitative is not other than the Qualitative;
The Qualitative is not other than the Quantitative"
The deeper implications of the true nature of the primes are awe inspiring.
There are two capacities that reveal themselves in nature, one for independence and the other for interdependence i.e quantitative and qualitative aspects (which ultimately relates to the nature of the prime numbers). In an original state - as mere potential for physical phenomenal existence - these two capacities are identical.
Then when operating through the veils of phenomenal reality, they become separated with full understanding of their identical nature again ultimately taking place in an ineffable spiritual manner.
So the task of understanding the mathematical (objective) nature of the primes cannot be ultimately divorced from the psychological nature of their (subjective) interpretation.
The mystery of the primes can therefore be validly seen as embracing the entire course of created evolution.
Once again it is all about the reconciliation of quantitative and qualitative notions of meaning.
And Mathematics would make enormous strides in simply grasping this key fact!
|
OPCFW_CODE
|
import requests
import logging
RED = '\033[0;31m'
GREEN = '\033[0;32m'
ORANGE = '\033[0;33m'
NO_COLOR = '\033[0m'
def website_checker(server, session_id, machine):
""" :account,machine """
for website in server.list_websites(session_id):
protocol = 'https' if website['https'] else 'http'
for subd in website['subdomains']:
for app in website['website_apps']:
appurl = "{}://{}{}".format(
protocol,
subd,
app[1]
)
print('Checking: {}'.format(appurl))
ssl_msg = ''
color = GREEN
try:
r = requests.get(appurl)
except requests.exceptions.SSLError:
color = ORANGE
r = requests.get(appurl, verify=False)
ssl_msg = '{}Invalid Certificate{}'.format(RED, NO_COLOR)
except requests.exceptions.ConnectionError:
r = False
ssl_msg = '{}Connection Error{}'.format(RED, NO_COLOR)
if r is False:
print("{}Error: {} not available{}".format(
RED, appurl, NO_COLOR
))
elif r.status_code != 200:
print("{}Error {}: {} {}{}".format(
RED, r.status_code, appurl, ssl_msg, NO_COLOR
))
try:
r.raise_for_status()
except Exception as e:
logging.error(e)
else:
print("{}Available: {} {}{}".format(
color, appurl, ssl_msg, NO_COLOR
))
|
STACK_EDU
|
(PUP-10844) Continue when server_list server fails
When a server from server_list is available at the beginning of the run,
it will be cached, and the cached server will be used even if it is no
longer reachable.
Now, when a server from server_list becomes unavailable, the cached
server is invalidated and a new reachable server from server_list will
be selected.
In case of a server outage in the middle of a puppet run, I think the current flow is like this:
puppet agent -t
...
1.
Finding functional server -> no cached server -> functional server found and cached
HTTP GET https://functional_server_found_1.puppetlabs.net:8140/status/v1/simple/master returned 200 OK
Info: Using configured environment 'production'
2.
Info: Retrieving pluginfacts
Finding functional server -> using cached server
Debug: HTTP GET https://functional_server_found_1:8140/puppet/v3/file_metadatas/pluginfacts returned 200 OK
3.
Info: Retrieving plugin
Finding functional server -> using cached server
Debug: HTTP GET https://functional_server_found_1:8140/puppet/v3/file_metadatas/plugins returned 500 FAIL
Server unreachable, clear cached server
4.
Info: Retrieving locales
Finding functional server -> no cached server -> functional server found and cached
Debug: HTTP GET https://functional_server_found_2:8140/puppet/v3/file_metadatas/locales returned 200 OK
5.
Info: Loading facts
Finding functional server -> using cached server
Debug: HTTP POST https://master:8140/puppet/v3/catalog/thorny-terror.delivery.puppetlabs.net returned 200 OK
...
If a server becomes unavailable right after it was checked (https://github.com/puppetlabs/puppet/blob/6.x/lib/puppet/configurer.rb#L225), puppet will fail on https://github.com/puppetlabs/puppet/blob/6.x/lib/puppet/transaction/additional_resource_generator.rb#L52, and I am not sure if it should recover from here.
The current behavior is intended to match how puppet 5.x and 6.x resolve the server_list. So once we've processed the server_list and found an available puppetserver, then we use that host and port for the duration of the agent run, even if that server goes down or there are networking issues along the way. See the comment in: https://github.com/puppetlabs/puppet/blob/06ad255754a38f22fb3a22c7c4f1e2ce453d01cb/lib/puppet/util/connection.rb#L11-L12.
Recovering from an error after that is a big change, as currently Session#route_to expects to resolve everything up front and never try again. Instead it might be better to have each resolver return a list of URLs to connect to, pass the list to the Client#connect method:
def route_to(name, ...)
...
urls = @resolvers.get_urls
resolved_url = @client.connect(urls)
raise Puppet::HTTP::RouteError, "No more routes to #{name}" unless resolved_url
Puppet::HTTP::Service.create_service(@client, self, name, resolved_url.host, resolved_url.port)
end
And then in the Client#execute_request method, if a networking error occurs, close the current connection, get the list of URLs from the resolvers, and try to connect again. But you have to be careful to only retry errors like Puppet::HTTP::ConnectionError and Puppet::SSL::SSLError, but not Puppet::HTTP::ResponseError.
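Language aside, the failover policy described here (retry connection-level failures against the next URL, but surface response errors immediately) can be sketched generically. This Python sketch uses hypothetical exception classes standing in for Puppet::HTTP::ConnectionError and Puppet::HTTP::ResponseError; it is an illustration of the policy, not Puppet's implementation:

```python
class ConnError(Exception):
    """Stand-in for Puppet::HTTP::ConnectionError / SSL errors (retryable)."""

class ResponseError(Exception):
    """Stand-in for Puppet::HTTP::ResponseError (the server answered: not retryable)."""

def request_with_failover(urls, do_request):
    """Try each URL in order; fail over only on connection-level errors."""
    last_error = None
    for url in urls:
        try:
            return do_request(url)
        except ConnError as e:
            last_error = e  # server unreachable: try the next one
        # ResponseError propagates: the server answered, so do not fail over
    raise last_error or ConnError("no routes available")

# Usage: the first server is down, the second answers.
calls = []
def fake_request(url):
    calls.append(url)
    if url == "https://server1:8140":
        raise ConnError("connection refused")
    return "200 OK"

result = request_with_failover(
    ["https://server1:8140", "https://server2:8140"], fake_request)
assert result == "200 OK"
```

Note that an HTTP 500 would raise the non-retryable ResponseError and escape the loop, matching the caveat above about only retrying Puppet::HTTP::ConnectionError and Puppet::SSL::SSLError.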
More information here on why I am closing this PR: https://tickets.puppetlabs.com/browse/PUP-10844.
|
GITHUB_ARCHIVE
|
Why won't my system now show the directories and files in /media/sda7?
Summary:
Could you tell me please if there is a way to restore the directories and files
beneath /media/sda7 that are no longer being shown by my file manager or ls?
My system is Debian.
Full Details:
Please could you help me out of this fix. I have a spare partition on /dev/sda7
(428 Gb), and have been storing files and directories on it (mainly video
files). "mount" shows the setup as
/dev/sda7 on /media/sda7 type reiserfs (rw,nosuid,nodev,relatime)
I access /media/sda7 using my file manager (pcmanfm). Until earlier today,
pcmanfm has happily let me create new directories and files beneath
/media/sda7, such as /media/sda7/vids/horizon.mp4. But now, when I specify for
example the directory /media/sda7/vids in pcmanfm, I get the message:
The specified directory is not valid
Now, none of the directories or files beneath /media/sda7 are displayed (the file list window is blank). Also ls in a terminal shows no
directories or files beneath /media/sda7.
I have an idea that the reason might be because shortly before this started
happening, I had launched the "GParted" disc partitioning utility. I did this
to try and see how much disc space I'd used up on /media/sda7. I used GParted
because du and df were showing zero and very little used disc space respectively, but I reckon the usage should be about 10 Gb.
I didn't carry out any actions in GParted - I just looked at what it was
displaying, then exited. The thing is, I've got about 200 video files (mostly mp4) on /media/sda7, and I dearly don't want to lose them.
Could you tell me please if there is a way to restore the directories and files?
NOTE: I'm leaving the laptop powered on in case I'll lose everything if I shut down.
It may be that GParted simply unmounted the partition. Does sudo mount /media/sda7 help?
grep sda7 /proc/mounts: does that show an entry? If yes, please look through the output of dmesg (or in /var/log/kern.log) for errors. If no entry from that grep, @JosephR's suggestion sounds good.
The simplest way to know whether a filesystem is mounted on a directory is to compare the reported free space. If the free space inside that directory matches the free space of the filesystem where it should be mounted, chances are that your sda7 partition is not mounted there, and you only need to mount it again:
# mount /dev/sda7
But if you can access the mountpoint, the reported free space differs from that of the "parent" filesystem, and there is indeed nothing there, chances are the filesystem got corrupted and your best bet is to run a filesystem check:
# fsck -fyv /dev/sda7
Alternatively, you can use the gnome-disks tool to view all of your drive's partitions and see whether they are mounted, whether they actually hold a filesystem or are just free space, and whether a filesystem is present but reported as empty (a sign of corruption).
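A scripted variant of the mount check (a sketch; the `is_mounted` helper name is mine): a directory is a mount point exactly when its device ID differs from its parent directory's, which `stat` can report:

```shell
#!/bin/sh
# is_mounted: succeed if a filesystem is mounted on the given directory.
# A mount point's device ID (stat %d) differs from its parent directory's;
# a plain directory shares its parent's device ID.
is_mounted() {
    dir=$1
    parent=$(dirname "$dir")
    [ "$(stat -c %d "$dir")" != "$(stat -c %d "$parent")" ]
}

if is_mounted /media/sda7; then
    echo "/media/sda7 is a mount point"
else
    echo "/media/sda7 is NOT mounted (it is just a directory)"
fi
```

If the check says the directory is not mounted, `mount /dev/sda7` (as root) is the next step; only reach for fsck if the partition mounts but still looks corrupted.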
You need to make sure the filesystem is unmounted before running a fsck. And fsck -y isn't a good idea, really.
|
STACK_EXCHANGE
|
package mseffner.twitchnotifier.networking;
import com.android.volley.Response;
import mseffner.twitchnotifier.data.Database;
import mseffner.twitchnotifier.data.ThreadManager;
/**
* BaseListener should be subclassed by all request listeners. It will automatically
* decrement the appropriate counter in UpdateCoordinator depending on the type of the
* child listener.
*
* @param <T> a class defined in Containers
*/
public abstract class BaseListener<T> implements Response.Listener<T> {
/**
* This method provides a hook for custom response handling. Database insertions
* and UpdateCoordinator decrementing will be done automatically after this method
* returns. It is called on a background thread.
*
* @param response the request response object
*/
protected void handleResponse(T response) {}
@Override
public final void onResponse(T response) {
ThreadManager.post(() -> {
handleResponse(response);
insertDecrement(response);
});
}
private void insertDecrement(T response) {
/* The inserts must happen before the corresponding decrement so that
the process will be fully complete before UpdateCoordinator is notified. */
if (response instanceof Containers.Streams) {
Database.insertStreamsData((Containers.Streams) response);
UpdateCoordinator.decrementStreams();
} else if (response instanceof Containers.StreamsLegacy) {
Database.insertStreamsLegacyData((Containers.StreamsLegacy) response);
UpdateCoordinator.decrementStreams();
} else if (response instanceof Containers.Games) {
Database.insertGamesData((Containers.Games) response);
UpdateCoordinator.decrementGames();
} else if (response instanceof Containers.Follows) {
Database.insertFollowsData((Containers.Follows) response);
UpdateCoordinator.decrementFollows();
} else if (response instanceof Containers.Users) {
Database.insertUsersData((Containers.Users) response);
UpdateCoordinator.decrementUsers();
}
}
}
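For context, a minimal, self-contained sketch of the same template-method pattern (all names below are illustrative stand-ins, not the app's real ThreadManager/Database/UpdateCoordinator classes) shows how the final onResponse calls the subclass hook before the type-dispatched bookkeeping:

```java
// Self-contained sketch of the BaseListener template-method pattern.
// Names here are illustrative stand-ins for the app's classes.
abstract class ListenerSketch<T> {
    // Hook for subclasses. The real class runs the whole sequence on a
    // background thread via ThreadManager.post; done inline here for brevity.
    protected void handleResponse(T response) {}

    public final void onResponse(T response) {
        handleResponse(response);   // custom handling first
        insertDecrement(response);  // then insert + counter decrement
    }

    private void insertDecrement(T response) {
        // Type-dispatched bookkeeping, as in the original insertDecrement.
        if (response instanceof Streams) {
            System.out.println("insert streams, decrement streams counter");
        } else if (response instanceof Games) {
            System.out.println("insert games, decrement games counter");
        }
    }
}

class Streams {}
class Games {}

public class Demo {
    public static void main(String[] args) {
        ListenerSketch<Streams> listener = new ListenerSketch<Streams>() {
            @Override
            protected void handleResponse(Streams r) {
                System.out.println("custom handling");
            }
        };
        listener.onResponse(new Streams());
    }
}
```

Making onResponse final keeps the insert-then-decrement ordering invariant: subclasses can only customize the hook, never skip the bookkeeping.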
|
STACK_EDU
|