Screenshot testing with React and Storybook
Visual regression testing for fun, profit, and peace of mind
A friend of mine recently related a scary story about the lack of automated visual regression testing where he works—a huge international tech firm that you’ve heard of and probably use. He’d added a CSS class to an element to hide it from the user. Unbeknownst to him, and despite using BEM-style naming for CSS classes, this class name was already being used for an important checkbox in the user settings screen of his company’s web app. The result? Users could no longer change the setting represented by the checkbox, because it was invisible!
I asked him why that hadn’t been caught by automated tests. He explained that, although they did have end-to-end tests in place that went through the UI and tested its functionality, the tests didn’t catch the bug. Selenium was still able to check the original checkbox via its selector, so its visibility had no effect on the outcome of the test. The only way this could have been caught is via visual regression testing — or, as it is sometimes called, screenshot testing.
💅 Switching to styled-components
I recalled this story recently while working on a sizable project at Clue: converting all of our “native” CSS to use styled-components. The new helloclue.com website consists of dozens of components. An article page, for example, contains at least ten different React components, all of which have accompanying CSS files. Converting all of this CSS to styled-components means there’s an enormous risk of bugs exactly like the one my friend experienced. So before the project began, I investigated setting up screenshot testing for the site.
📕 Enter Storybook
(Note: if you’re already familiar with Storybook and @storybook/react, feel free to skip to the next section.)
We use Storybook when developing simple presentational components. This way, we can test them in every possible state, without having to reproduce all the logic and so forth required to get them to that state.
For example, on our author pages (the AuthorPage component), there are four possible states:
- Both the author and the author’s articles are loading.
- The author is loaded, but the articles are still loading.
- The articles are loaded, but the author is still loading.
- Both the author and the articles are loaded.
The loading states are reflected in the UI via placeholder elements. The problem is, how do I make sure these elements look right? Using our office’s speedy internet connection, I only get a split second to check the loading state of the author page before the placeholders are replaced with real content.
That’s where Storybook comes in. With Storybook, you can render a component in the specific state you want to test it in, and keep it in that state for as long as you need¹:
// stories/AuthorPage.tsx
// Note that the code samples in this article are written in TypeScript.

import { storiesOf } from "@storybook/react"
import AuthorPage from "../components/AuthorPage/AuthorPage"

storiesOf("AuthorPage", module)
  .add("loading", () => (
    <AuthorPage
      isLoadingArticles={true}
      isLoadingAuthor={true}
    />
  ))
In the example above, I’ve created a “story” for the author page in which both the articles and the author are still loading. I can then run yarn run storybook (or npm run storybook), and it starts a Storybook server where I can view the AuthorPage component in its loading state. (Of course, I created additional stories for each of the other states, as well.)
While viewing the component, I can use Chrome Dev Tools to inspect the elements on the page, debug UI issues, and generally have a stable environment in which to develop the loading state of the component.
📸 But what about screenshots?
Since each component can be isolated and locked into a specific state in Storybook, and I wanted to screenshot every possible state of many of the site’s components, our Storybook setup seemed like a great place to do screenshot testing.
A bit of searching turned up an excellent library called storybook-chrome-screenshot, a Storybook addon. Using Storybook Chrome Screenshot, I can add a decorator to my Storybook stories:
// stories/AuthorPage.tsx

import { initScreenshot, withScreenshot } from "storybook-chrome-screenshot/lib"
import { addDecorator, storiesOf } from "@storybook/react"
import AuthorPage from "../components/AuthorPage/AuthorPage"

addDecorator(initScreenshot())

storiesOf("AuthorPage", module)
  .add("loading", withScreenshot()(() => (
    <AuthorPage
      isLoadingArticles={true}
      isLoadingAuthor={true}
    />
  )))
When I run storybook-chrome-screenshot -p 9001 -c .storybook, a Storybook server is once again spun up. But this time, the Storybook Chrome Screenshot addon goes through each story that I’ve added the withScreenshot decorator to and takes a screenshot of it. (The decorator can also be added to the entire Storybook setup to automatically screenshot every story.²)
I can then add the storybook-chrome-screenshot command to the scripts key in my package.json, and automatically run it as a part of my CI process.
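For reference, the relevant scripts section of package.json might look something like this (the script names and flags here are illustrative, not taken from the project itself):

```json
{
  "scripts": {
    "storybook": "start-storybook -p 9001 -c .storybook",
    "screenshot": "storybook-chrome-screenshot -p 9001 -c .storybook"
  }
}
```

CI can then invoke the screenshot run with yarn run screenshot (or npm run screenshot).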
🔬 Comparing screenshots: before and after
OK, so I have screenshots of every component in the site in every possible state. But the whole point of taking screenshots is to compare them against master, to make sure nothing unintentionally changed.
That’s where reg-suit comes in. It’s an NPM package that does the actual work of pixel-by-pixel visual regression testing, and then generates an HTML report of the results.
First, I created regconfig.json in our app’s root directory (with a few non-JSON-compliant comments added for your benefit):
{
"core": {
// The directory where storybook-chrome-screenshot dumps its screenshots, which reg-suit will use to compare against screenshots from master.
"actualDir": "__screenshots__",
// The directory where reg-suit will dump its HTML report, as well as images showing the visual differences if there are any.
"workingDir": "__screenshots_tmp__",
// This determines how forgiving reg-suit should be of differences between screenshots.
"threshold": 0,
"addIgnore": true,
"thresholdRate": 0,
"ximgdiff": {
"invocationType": "client"
}
},
"plugins": {
// Use CI environment variables to determine which commits to compare screenshots from (in this case, master vs. HEAD).
"reg-simple-keygen-plugin": {
"expectedKey": "${TARGET_GIT_COMMIT}",
"actualKey": "${GIT_COMMIT}"
},
// Notify GitHub of the results of the visual regression test.
"reg-notify-github-plugin": true,
// Publish screenshots and test results to S3. Note that your CI will need to have AWS credentials configured for this to work.
"reg-publish-s3-plugin": {
"bucketName": "<BUCKET_NAME>"
}
}
}
Given the above config, reg-suit stores the screenshots for each commit in an S3 bucket, under a directory named for the Git SHA of that commit. It also stores master screenshots in the same bucket under their own Git SHA directory. It then uses the reg-simple-keygen-plugin to identify those directories and run comparisons between the screenshots in each.
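To make the mechanics concrete, here is a rough sketch in Python of the kind of pixel-by-pixel comparison a tool like reg-suit performs. This is purely illustrative: reg-suit's actual implementation and threshold semantics differ, and the function names here are made up for this example.

```python
# A minimal sketch of pixel-by-pixel visual comparison. Images are
# represented here as 2D lists of RGB tuples; a real tool would load
# PNGs from the "expected" (master) and "actual" (current commit)
# screenshot directories.

def diff_ratio(expected, actual):
    """Return the fraction of pixels that differ between two same-sized images."""
    total = 0
    changed = 0
    for row_e, row_a in zip(expected, actual):
        for px_e, px_a in zip(row_e, row_a):
            total += 1
            if px_e != px_a:
                changed += 1
    return changed / total if total else 0.0

def passes(expected, actual, threshold_rate=0.0):
    # Loosely mirrors a "thresholdRate" setting: the comparison fails only
    # if the ratio of changed pixels exceeds the configured tolerance.
    return diff_ratio(expected, actual) <= threshold_rate

white = (255, 255, 255)
gray = (204, 204, 204)
expected = [[white, white], [white, gray]]
actual = [[white, white], [white, white]]   # the gray placeholder bar went missing

print(diff_ratio(expected, actual))  # prints 0.25 (one of four pixels changed)
print(passes(expected, actual))      # prints False with threshold_rate=0
```

With threshold_rate set to 0, as in the config above, any changed pixel fails the comparison; raising the tolerance lets small rendering differences pass.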
🍱 Putting it all together
I mentioned earlier that the impetus for screenshot testing of helloclue.com was the conversion of CSS files to styled-components. Once we got the setup described above working on helloclue.com, I started the conversion process.
What was so cool about having automated screenshot tests during this process was that I could make tons of changes to the styling of components without worrying too much about making mistakes, since I knew they’d be caught.
And of course, I did make mistakes! While converting the author page to styled-components, for example, I mistakenly left out the styling that made the placeholder for the author’s title appear as a gray bar. Since our visual regression tests are integrated with GitHub, reg-suit commented on my PR to inform me that some visual comparisons had failed³: they didn’t match what was on master. My visual regression testing was actually working!
🎁 Fin
There are two things that I hope are clear from this article. First, visual regression testing is important! It can help you catch major UI bugs that your end-to-end tests missed. And second, there are tools available to make this easy in React. It’s just a matter of putting them together to make them work for you!
I’d love to hear from you in the comments:
- Was there anything that could be made clearer about the setup steps in this article? I’m happy to explain in the comments, or even to edit this article to make it easier to understand.
- What other tools, if any, are you using right now to do visual regression testing—particularly for React?
- Any other thoughts or feedback?
¹ Note that this works best with presentational components. Our presentational components simply take properties (such as isLoading) and output DOM. API calls, timeouts, etc. are all handled in container components. If you’d like to read more about separating business logic from presentation logic in React, I highly recommend Dan Abramov’s excellent article on the topic.
² Our actual screenshot configuration (with comments added for your benefit) looks like this:
// stories/index.tsx

import { addDecorator } from "@storybook/react"
import { initScreenshot, withScreenshot } from "storybook-chrome-screenshot/lib"

addDecorator(initScreenshot())

// Rather than wrapping an individual story in `withScreenshot()(...)`, we'll add a
// decorator to the entire Storybook instance. This way, it'll take screenshots of
// every single story.
addDecorator(withScreenshot({
  // A one-second delay ensures that fonts load before screenshots are taken.
  delay: 1000,
  // We take screenshots at multiple viewport sizes, to ensure that various media
  // queries are covered.
  viewport: [
    { width: 320, height: 568, isMobile: true, hasTouch: true },
    { width: 768, height: 1024, isMobile: true, hasTouch: true },
    { width: 1024, height: 768, isMobile: true, hasTouch: true },
    { width: 1280, height: 800 },
    { width: 1440, height: 900 },
  ],
}))

import "./AuthorPage.tsx"
// [import all other stories...]
³ It’s worth noting that we use a custom GitHub integration which doesn’t actually fail the build when screenshot comparisons “fail.” This is because changes to the UI are often intentional. Instead of failing the build, our GitHub integration simply comments on the PR with a count of how many screenshots changed from master to the PR. If that count is greater than 0, I can manually review the visual regression report and determine whether or not all the changes in the PR were intentional.
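The counting step such an integration performs can be sketched as follows. This is an illustrative Python sketch only, with made-up names; the real integration compares reg-suit's screenshot directories and posts the count via the GitHub API, both of which are omitted here.

```python
import filecmp
import os
import tempfile

def changed_screenshots(expected_dir, actual_dir):
    """Count files that differ in content or exist on only one side."""
    names = sorted(set(os.listdir(expected_dir)) | set(os.listdir(actual_dir)))
    changed = 0
    for name in names:
        expected_path = os.path.join(expected_dir, name)
        actual_path = os.path.join(actual_dir, name)
        if not (os.path.exists(expected_path) and os.path.exists(actual_path)):
            changed += 1  # an added or removed screenshot counts as a change
        elif not filecmp.cmp(expected_path, actual_path, shallow=False):
            changed += 1  # same name, different contents
    return changed

# Tiny demo with temp dirs standing in for the master and PR screenshot sets.
expected_dir = tempfile.mkdtemp()
actual_dir = tempfile.mkdtemp()
for name, content in [("a.png", b"same"), ("b.png", b"old")]:
    with open(os.path.join(expected_dir, name), "wb") as f:
        f.write(content)
for name, content in [("a.png", b"same"), ("b.png", b"new")]:
    with open(os.path.join(actual_dir, name), "wb") as f:
        f.write(content)

count = changed_screenshots(expected_dir, actual_dir)
print(f"{count} screenshot(s) changed")  # prints "1 screenshot(s) changed"
```

If the count is nonzero, the integration would leave a PR comment so a human can review the visual report.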
Cross-Platform Game Development for C++ Developers
Do you dream of writing the next hit game title but aren't sure how to get started? Are you interested in game development just for the fun of it? Take a close look at a versatile cross-platform gaming engine that's freely available for the beginning game developer.
A (Very) Brief History of 3D Gaming Engines
In gaming, more so than any other programming discipline, it is important to specify the platform correctly from the start. Will you support Windows, Linux, and OS X? Isn't using OpenGL enough to get you there? OpenGL was designed in the early 1990's for $25,000 Unix CAD workstations and later adapted to Windows and low-end platforms as the gaming industry drove the cost of graphics accelerators down from $2,000 a pop to the $150 mass-market price point you see today.
Simple DirectMedia Layer for C++, Java, and More
Well, that's all very interesting history, but it doesn't really address the question of where fragging coders should start: not every game is going to be a Quake clone. One option that has been touted for its many virtues is Simple DirectMedia Layer (SDL). This cross-platform multimedia library provides low-level access to audio, keyboard, mouse, joystick, OpenGL, and the 2D video framebuffer. SDL supports almost every platform I can think of, including Linux, Windows, all MacOS variants, WinCE, Dreamcast, and others. It shows up in MPEG players, hardware emulators, and many popular games, including the award-winning Linux port of Civilization: Call to Power.
SDL is written in C, but works with C++ natively, and has bindings to several other languages, including Ada, Eiffel, Java, Lua, ML, Perl, PHP, Pike, Python, and Ruby. The sky is the limit with SDL, which happens to be the engine for my favorite open source flight simulator, GL-117 (see Figure 1). In fact, 513 games currently are built on top of the SDL engine and registered on the SDL homepage.
Figure 1. The View from GL-117
Tunnel Vision Demo Program
The best way to get inside a game engine is to look at some sample code. Take a brief look at a 2D tunnel-type display in SDL (see Figure 2) to see what you can do in just a few lines of code. This example might be something you use for a screen-saver, music visualization, and so forth. I've trimmed the actual drawing code for brevity. Follow my comments for a description of how SDL works:
#include "Tunnel.h"

// SDL Stuff
SDL_Surface *screen;
SDL_Surface *bBuffer;
SDL_Surface *Image;
SDL_Rect rScreen;
SDL_Rect rBuffer;

// --------------------------------------------------------------
int main (int argc, char **argv)
{
    int flag = SDL_SWSURFACE;  // Requests a software surface. Software
                               // surfaces are in system memory, and are not
                               // generally as fast as hardware surfaces

#ifdef WIN32
    int fullscreen = MessageBox(NULL, "Show Full Screen (y/n):",
                                "Screen Setting", MB_YESNO);
    if (fullscreen==IDYES)
    {
        flag |= SDL_FULLSCREEN;  // Take over whole screen, if user desires
    }
#endif

    Tunnel_Timer();              // Read initial system clock
    SDL_Init( SDL_INIT_VIDEO );  // Initialize just the video subsystem

    // Set screen to 320x240 with 32-bit color
    screen = SDL_SetVideoMode( 320, 240, 32, flag);

    // Request hardware buffers for the screen surface, if available
    bBuffer = SDL_CreateRGBSurface( SDL_HWSURFACE, screen->w, screen->h,
                                    screen->format->BitsPerPixel,
                                    screen->format->Rmask,
                                    screen->format->Gmask,
                                    screen->format->Bmask,
                                    screen->format->Amask);

    // This is the seed image that you will convolute when you get going
    Image = SDL_LoadBMP( "tunnel_map.bmp" );
    Image = SDL_ConvertSurface(Image, screen->format, SDL_HWSURFACE);

    rBuffer.x = 0;
    rBuffer.y = 0;
    rBuffer.w = bBuffer->w;
    rBuffer.h = bBuffer->h;

    // Ignore most events, including mouse, and disable the cursor
    SDL_EventState(SDL_ACTIVEEVENT, SDL_IGNORE);
    SDL_EventState(SDL_MOUSEMOTION, SDL_IGNORE);
    SDL_ShowCursor( SDL_DISABLE );

    Tunnel.Set( 320, 240 );  // Tunnel will fill the whole buffer
    Tunnel.Precalc( 16 );    // Inner circle diameter

    while (SDL_PollEvent(NULL)==0)
    {
        float fTime = Tunnel_GetTime();

        // Surfaces must be locked before modification, especially
        // if the buffer resides in the graphics hardware memory
        SDL_LockSurface(bBuffer);
        SDL_LockSurface(Image);

        Tunnel.Draw(bBuffer, Image, 180*sin(fTime), fTime*100);

        SDL_UnlockSurface(bBuffer);  // After updating, you may unlock
        SDL_UnlockSurface(Image);

        // Push the back buffer out to the screen draw area and force
        // a repaint
        SDL_BlitSurface( bBuffer, NULL, screen, &rBuffer );
        SDL_UpdateRect( screen, 0, 0, 0, 0 );
    }

    Tunnel.Free();
}
Figure 2. Spinning and Twisting 2D Tunnel Demo
Data Analysis using Spark, Pandas, and Matplotlib in Jupyter Notebook for data in S3 (Minio)
By Prashant Shahi - 13 minute read - 2709 words.
Apache Spark is a unified analytics engine for large-scale data processing, and Pandas is a Python library that provides fast, flexible data structures for data analysis. Similarly, Matplotlib is another Python library, used for 2D plotting, which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms.
Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text.
Minio is an open-source cloud object storage server which follows the Amazon S3 protocol and is at times referred to as an open-source Amazon S3 alternative; it is freely available for anyone on the internet to deploy on their own machines.
Setup
System configuration (VM/Instance)
The system configuration selected for the task is as mentioned below :
- 6 CPU cores
- 10 GB memory (RAM)
- 200 GB disk space
It should be fine even if your machine's configuration is lower than the one used here.
Downloading Minio Server and Client
Minio Server
Follow the steps below to set up the Minio Server:
# Downloading the Minio binary and copying it to /opt
sudo wget -O /opt/minio
# Changing the file permission of the binary to mark it as executable
sudo chmod +x /opt/minio
# Creating a symbolic link in /usr/local/bin to make the binary executable from any path
sudo ln -s /opt/minio /usr/local/bin/
# Making the data directory for storing the objects/data for the Minio server
mkdir ./data
# Running the Minio server with the data directory parameter
minio server ./data
Minio Client
Follow the steps below to set up the Minio Client:
# Downloading the mc binary and copying it to /opt
sudo wget -O /opt/mc
# Changing the file permission of the binary to mark it as executable
sudo chmod +x /opt/mc
# Creating a symbolic link in /usr/local/bin to make the binary executable from any path
sudo ln -s /opt/mc /usr/local/bin
# Executing the command with the help parameter to ensure that the installation was a success
mc --help
Follow the steps below to load some sample data into S3 using the Minio Client:
# Downloading the sample data TotalPopulationBySex.csv from the UN
wget -O TotalPopulation.csv ""
# Compressing the csv file
gzip TotalPopulation.csv
# Creating a new bucket
mc mb data/mycsvbucket
# Copying the compressed file into the bucket
mc cp TotalPopulation.csv.gz data/mycsvbucket/
Setting up Java Environment for Spark Shell
# Adding the ppa to the local repository
sudo add-apt-repository ppa:webupd8team/java
# Updating repository archives
sudo apt update
# Installing Oracle Java8
sudo apt install -y oracle-java8-installer
# Verifying the java installation
javac -version
# Setting Oracle Java8 as default (in case of multiple java versions)
sudo apt install -y oracle-java8-set-default
# Setting up environment variables (also add these to the `~/.bashrc` file to apply on next boot)
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export PATH=$PATH:$JAVA_HOME/bin
Installation of Apache Spark and Hadoop
The steps to install Apache Spark are as follows:
# Download Spark v2.3.0 without Hadoop
wget
# Extracting the compressed file
sudo tar -C /opt/ -xvf spark-2.3.0-bin-without-hadoop.tgz
# Setting up environment variables (also add these to the `~/.bashrc` file to apply on next boot)
export SPARK_HOME=/opt/spark-2.3.0-bin-without-hadoop
export PATH=$PATH:$SPARK_HOME/bin
The steps to install Apache Hadoop are as follows:
# Download Hadoop v2.8.2
wget
# Extracting the compressed file
sudo tar -C /opt/ -xvf hadoop-2.8.2.tar.gz
# Setting up the environment for Hadoop
export HADOOP_HOME=/opt/hadoop-2.8.2
export PATH=$PATH:$HADOOP_HOME/bin
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
Setting up Minio Server endpoint and credentials
Open the file $HADOOP_HOME/etc/hadoop/core-site.xml for editing. In the example XML file below, the Minio server is running at http://127.0.0.1:9000 (Minio's default local endpoint) with access key minio and secret key minio123. Make sure to update the relevant sections with a valid Minio server endpoint and credentials.
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>fs.s3a.endpoint</name>
    <description>AWS S3 endpoint to connect to.</description>
    <value>http://127.0.0.1:9000</value>
  </property>
  <property>
    <name>fs.s3a.access.key</name>
    <description>AWS access key ID.</description>
    <value>minio</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <description>AWS secret key.</description>
    <value>minio123</value>
  </property>
</configuration>
Spark Shell on CSV in Minio (S3)
Note: Make sure JAVA_HOME has been set before setting up Spark Shell.
Spark-Select can be integrated with Spark via
spark-shell,
pyspark,
spark-submit, etc. You can also add it as Maven dependency, sbt-spark-package or a jar import.
Let’s go through the steps below to use
spark-shell in an example.
Start Minio server and configure mc to interact with this server.
Create a bucket and upload a sample file :
# Downloading sample csv
wget ""
# Creating a bucket named sjm-airlines
mc mb data/sjm-airlines
# Copying the csv to the created bucket using minio client
mc cp people.csv data/sjm-airlines
- Download the sample scala code:
wget ""
- Downloading dependencies and adding them to spark:
# Creating a jars folder
mkdir jars
# Changing current directory to ./jars
cd jars
# Downloading jar dependencies
wget
wget
wget
wget
wget
wget
wget
# Copying all the jars to $SPARK_HOME/jars/
cp *.jar $SPARK_HOME/jars/
Configure Apache Spark with Minio. Detailed steps are available in this document.
Let’s start spark-shell with the following command. To load some additional packages later on, you can use the --packages flag as well.
$SPARK_HOME/bin/spark-shell --master local[4]
- After spark-shell is successfully invoked, load csv.scala and display the data:
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.0
      /_/

Using Scala version 2.XX.XX (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_XXX)
Type in expressions to have them evaluated.
Type :help for more information.

scala> :load csv.scala
Loading examples/csv.scala...
import org.apache.spark.sql._
import org.apache.spark.sql.types._
defined object app

scala> app.main(Array())
+-------+---+
|   name|age|
+-------+---+
|Michael| 31|
|   Andy| 30|
| Justin| 19|
+-------+---+
+-------+---+
|   name|age|
+-------+---+
|Michael| 31|
|   Andy| 30|
+-------+---+

scala>
You can see that out of 3 entries, we could use SQL-like query to only select those entries with age > 19.
Awesome, you have successfully set up Spark! Let’s proceed futher.
Spark-Shell using PySpark and Minio
Make sure all of the aws-java-sdk jars are present under $SPARK_HOME/jars/ or added to spark.jars.packages in the spark-defaults.conf file before proceeding.
# Running pyspark from the $SPARK_HOME binary
$SPARK_HOME/bin/pyspark

You should see the following screen:

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.3.0
      /_/

Using Python version 2.7.12 (default, Nov 12 2018 14:36:49)
SparkSession available as 'spark'.
>>>
Let’s execute the following lines to verify that the same result as in the scala-shell can be achieved in PySpark:
>>> from pyspark.sql.types import *
>>> df = spark.read.format("csv").option("header", "true").load("s3a://sjm-airlines/people.csv")
>>> df.show()
+-------+---+
|   name|age|
+-------+---+
|Michael| 31|
|   Andy| 30|
| Justin| 19|
+-------+---+
>>> df.select("*").filter("age > 19").show()
+-------+---+
|   name|age|
+-------+---+
|Michael| 31|
|   Andy| 30|
+-------+---+
Connect Minio and Spark with Jupyter Notebook
Follow the steps below to set up Jupyter:
# Downloading the shell script to install Jupyter using Anaconda
wget
# Making the shell script executable
chmod +x ./Anaconda3-2018.12-Linux-x86_64.sh
# Running the shell script with bash
bash Anaconda3-2018.12-Linux-x86_64.sh
# Create a new minimal conda environment with only python installed
conda create -n myconda python=3
# Put yourself inside this environment
conda activate myconda
# Verify the Anaconda Python v3.x terminal inside the environment (to exit, press CTRL+D or run `exit()`)
python
# Install Jupyter Notebook inside the environment
conda install jupyter
# Install findspark inside the environment using the conda-forge channel
conda install -c conda-forge findspark
# (Optional) Set a jupyter notebook password; enter the desired password (if not set, you have to use randomly generated tokens each time)
jupyter notebook password
# Running Jupyter Notebook and making it publicly available at port 8888
jupyter notebook --ip 0.0.0.0 --port 8888
You should be seeing the following, if everything goes well :
[I 06:50:01.156 NotebookApp] JupyterLab extension loaded from /home/prashant/anaconda3/lib/python3.7/site-packages/jupyterlab
[I 06:50:01.157 NotebookApp] JupyterLab application directory is /home/prashant/anaconda3/share/jupyter/lab
[I 06:50:01.158 NotebookApp] Serving notebooks from local directory: /home/prashant
[I 06:50:01.158 NotebookApp] The Jupyter Notebook is running at:
[I 06:50:01.158 NotebookApp] http://(instance-1 or 127.0.0.1):8888/
[I 06:50:01.158 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
Deactivate Conda virtual environment
Example:
# Deactivating the current environment (note that `conda deactivate` takes no environment name)
conda deactivate
Converting a python script (.py) file to a Jupyter notebook (.ipynb) file
# Installing p2j using python-pip
pip install p2j
Example:
# Generating a .ipynb file out of some sample script.py using p2j
p2j script.py
Converting a Jupyter notebook (.ipynb) file to a python script (.py) file
You can make use of nbconvert, which comes along with Jupyter. Example:
# Generating a script.py file out of some sample .ipynb file using nbconvert
jupyter nbconvert --to script script.ipynb
Creating a sample python file
Let’s create a python file spark-minio.py with the codes below :
# Import sys and print the python environment
import sys
print(sys.executable)

# Import findspark to locate spark and make it accessible at run time
import findspark
findspark.init()

# Import pyspark and its components
import pyspark
from pyspark.sql.types import *
from pyspark.sql import SparkSession

# Creating the SparkSession
spark = SparkSession.builder.getOrCreate()

# Creating the schema of the CSV fields
schema = StructType([StructField('name', StringType(), True), StructField('age', IntegerType(), True)])

# Creating a dataframe from a csv in S3, applying the schema defined above
df = spark.read.format("csv").option("header", "true").schema(schema).load("s3a://sjm-airlines/people.csv")

# Displaying all data in the CSV
df.show()

# Displaying all the data in the csv for which age is greater than 19
df.select("*").filter("age > 19").show()
Now, converting the python code (spark-minio.py) to a Jupyter-notebook-compatible file (.ipynb):
# Generating the spark-minio.ipynb file out of spark-minio.py
p2j spark-minio.py
Running .ipynb file from the Jupyter notebook UI
Let’s open the UI running at port 8888. Enter the jupyter notebook password (or the token), and you should be seeing something like this:
Select spark-minio.ipynb file and click on run, if everything went right, you should be getting the screen below :
Running Some Live Examples
Before running the example, let’s compress the sample csv file with gzip.
# Generating a gzip file named people.csv.gz out of people.csv, keeping the original file with the -k flag
gzip -k people.csv
# Copying the csv.gz file to the bucket using minio client
mc cp people.csv.gz data/sjm-airlines
In Jupyter Notebook, go to File Tab > New Notebook > Python 3 (Or any other kernel). Try the following pyspark example on the data present in Minio. Note that the gzip compression is automatically detected with the .gz extension and handled when loading it with Spark’s native csv format.
import findspark
findspark.init()
import pyspark
from pyspark.sql.types import *
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
schema = StructType([StructField('name', StringType(), True), StructField('age', IntegerType(), True)])
df = spark.read.format("csv").option("header", "true").schema(schema).load("s3a://sjm-airlines/people.csv.gz")
df.createOrReplaceTempView("people")

print("List of all people :")
df.show()

print("People with age greater than 20 :")
df2 = spark.sql("SELECT * FROM people where age > 20")
df2.show()

If the steps are properly followed, you should be seeing the following in the jupyter notebook:
List of all people :
+-------+---+
|   name|age|
+-------+---+
|Michael| 31|
|   Andy| 30|
| Justin| 19|
+-------+---+

People with age greater than 20 :
+-------+---+
|   name|age|
+-------+---+
|Michael| 31|
|   Andy| 30|
+-------+---+
For the next example, we are gonna use SQL query capability of Spark dataframe on comparatively big CSV with 13 header fields and 2000251 entries. For the task, at first, we are gonna download the CSV with gzipped compression from the following link.
wget
Schema of the CSV and the description of each field can be found HERE. Create a new bucket in Minio, here, we are naming the bucket spark-experiment and upload the downloaded file to that bucket. You can use Minio UI for the task. Or, you can use Minio Client - mc for the same.
# Go to the `data` folder which the Minio Server is pointing to
cd ~/data
# Creating a new bucket
mc mb spark-experiment
# Copying the compressed file inside the bucket
mc cp ../natality00.gz spark-experiment
Now, let’s try the following script in Jupyter notebook. You can either create a new cell in the same old notebook or create a new notebook for running the script.

import findspark
findspark.init()
import pyspark
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.read.format("csv").option("header", "true").load("s3a://spark-experiment/natality00.gz")
query = "SELECT is_male, count(*) as count, AVG(weight_pounds) AS avg_weight FROM natality GROUP BY is_male"
df.createOrReplaceTempView("natality")
df2 = spark.sql(query)
df2.show()
Upon running the script in the notebook, you should get the following output:
+-------+-------+-----------------+
|is_male|  count|       avg_weight|
+-------+-------+-----------------+
|  false| 975147| 7.17758067338709|
|   true|1025104|7.439839161360215|
+-------+-------+-----------------+
Visualization with charts and graphs using Pandas
Installation
Install Pandas using conda. PySpark dataframes require pandas >= 0.19.2 for executing any of the pandas-backed features.
# Installing pandas and matplotlib. Make sure you are inside the created conda virtual environment when running the following command
conda install pandas matplotlib
Reports and Observations
Report 1
Let’s display some charts on the report that we got in the previous example. Let’s create a new cell on the same notebook rather than integrating the following snippet in the above code, to reduce the time to plot multiple charts on the same report.
df3 = df2.toPandas()
df3.plot(x='is_male', y='count', kind='bar')
df3.plot(x='is_male', y='avg_weight', kind='bar')
Chart Graph of Is_Male Boolean VS Count and Average weight
Observation: From the generated chart, we can observe that gender of the child doesn’t have any significant role neither in the average weight of the child nor wide difference can be seen in a total count of the two gender divisions.
Report 2
Now, let us try another example. Let’s create a new notebook for this. If you don’t wish to create a new one, you can try it in a new cell of the previous notebook.

import findspark
findspark.init()
import pyspark
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.read.format("csv").option("header", "true").load("s3a://spark-experiment/natality00.gz")
query = "SELECT mother_age, count(*) as count, AVG(weight_pounds) AS avg_weight FROM natality GROUP BY mother_age"
df.createOrReplaceTempView("natality")
print("Based on mother_age, total count and average weight is as follow : ")
df2 = spark.sql(query)
df3 = df2.toPandas()
df4 = df3.sort_values('mother_age')
print("***DONE***")
After the program runs and prints DONE, create a new cell below and run the following snippet:
df4.plot(x='mother_age', y='count')
df4.plot(x='mother_age', y='avg_weight')
Chart Graph of Mother Age VS Count and Average weight
Observation : We can observe that most of the mothers are between 20–30 age range when they gave birth. While the average weight of the children shows some decline in case of mothers at a young age, it shows a significant decrease in children’s average weight in case of mothers at old age.
Report 3
This one will be an interesting one. We will plot a chart with a scatter graph.

import findspark
findspark.init()
import pyspark
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.read.format("csv").option("header", "true").load("s3a://spark-experiment/natality00.gz")
query = "SELECT INT(gestation_weeks), COUNT(*) AS count, AVG(weight_pounds) AS avg_weight FROM natality GROUP BY gestation_weeks"
df.createOrReplaceTempView("natality")
print("Based on gestation_weeks, total count and average weight is as follow : ")
df2 = spark.sql(query)
df3 = df2.toPandas()
df4 = df3.sort_values('gestation_weeks')
print("***DONE***")
Like we did before, after DONE is printed, create a new cell below with the following snippet. Here, we are introducing matplotlib’s axes object (ax) and dataframe.describe().
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
df4.plot(kind="scatter", x="gestation_weeks", y="avg_weight", s=100, c="count", cmap="RdYlGn", ax=ax)
df4.describe()
Scatter Graph and Data Frame Description of Gestation week VS Count and Average weight
Observation: From the scatter graph, it can be seen that the most common gestation period was 40 weeks, and children born around this period mostly weigh more than the rest. It can also be seen that there are around 100k entries for which gestation_weeks is 99, which is not possible in reality. So, it can be concluded that 99 is a dummy value used for records where the gestation period data wasn't available.
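Since 99 is a sentinel rather than a real gestation length, it is worth dropping those rows before trusting any averages. A toy sketch in plain Python (the numbers are invented; the columns follow the query's (gestation_weeks, count, avg_weight) shape):

```python
# Rows shaped like the query output: (gestation_weeks, count, avg_weight).
# 99 is the sentinel for "gestation period unknown" noted in the observation.
rows = [(38, 120, 7.1), (40, 500, 7.6), (99, 100, 7.2)]

clean = [r for r in rows if r[0] != 99]                     # drop sentinel rows
total = sum(count for _, count, _ in clean)                 # births with known gestation
weighted_avg = sum(count * w for _, count, w in clean) / total

print(total, round(weighted_avg, 3))  # 620 7.503
```

The same filter can of course be pushed into the SQL itself with a WHERE clause before grouping.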
Note: The list of possible cmap values, i.e. colormaps, can be found here.
February 12, 2007
Creating a Popup in a Cairngorm Architecture
Often I see this question coming up:
"How should I best create a popup in my Cairngorm application?"
I think of creating a popup control as something only views should be concerned about. It's like creating other view components, managing states of view components, or effects. Therefore this question isn't just about creating popups; instead, it's about how to invoke methods that belong to the view. (BTW: I often hear from our User Experience team that they try to prevent popups where possible.)
First and foremost, I’d try to let your views directly call a method that dispatches a popup.
However, most of the use cases that come up when questions like the above are asked are slightly more complex. Users, for example, want to create a popup from a completely different part of the application. Or they receive an asynchronous remote service response, want to alert the user with a popup only at that point in time, and work in a context where no view references are available, such as a typical Cairngorm Command.
Some users have created the code needed for creating a popup directly in the result method of a command or in a model object that’s been called by the result method of a command.
Sticking to my above rule, that only views should be concerned about popups, I think neither solution is the optimal approach. A command should just be an encapsulated request to your model, and a model itself should also not be concerned about any UI behavior. I'd argue we need to tell the model about the state change that has just occurred in your application, but in a model context. For example, you might set a loggedIn property on a Login model object to true in order to signal that the login process was successful after a positive remote service response.
So, how can our views react to that state change in our model?
In a typical Cairngorm application, we may bind UI controls to properties of our model objects. But for creating a Popup control, there’s no UI component where we can bind to. How do we best let a view invoke a view related method after a certain state change in our model occurred?
Let’s take a step by step approach using a very simple Cairngorm application, that retrieves a list of employees and displays them in a popup after successful retrieval.
Please do ask your UX team if something like this is really a good idea in a real world application. For the sake of this example we don’t care about the user experience here!
This step by step tutorial will walk you through the relevant parts. You can see and download (via right click > View Source) the complete application here. Let's go:
Dispatch a Cairngorm Event.
Let the view dispatch a user gesture via a Cairngorm Event in order to trigger the remote service:
On a mx:Button’s click event, the getEmployees method dispatches the Cairngorm event:
Excerpt from GetEmployeesCommand.as:
private function getEmployees() : void
{
    var event : GetEmployeesEvent = new GetEmployeesEvent();
    event.employees = model.employees;
    CairngormEventDispatcher.getInstance().dispatchEvent( event );
}
Call and handle the remote service and modify your model.
The handling GetEmployeesCommand's execute method calls a server-side method. Its result method modifies the model. Here, it retrieves the model object Employees via the ModelLocator.
Excerpt from GetEmployeesCommand.as:
private var employees : Employees;

public function execute( event : CairngormEvent ) : void
{
    employees = GetEmployeesEvent( event ).employees;
    employees.hasEmployees = false;

    var delegate : EmployeeDelegate = new EmployeeDelegate( this );
    delegate.getEmployees();
}

public function result( event : Object ) : void
{
    employees.employees = IList( event.result );
    employees.hasEmployees = true;
}
Employees.as:
package com.adobe.cairngorm.samples.popup.model
{
    import mx.collections.IList;

    public class Employees
    {
        [Bindable]
        public var employees : IList;

        [Bindable]
        public var hasEmployees : Boolean;
    }
}
Note that we modify the hasEmployees property of the Employees model object.
Let the view react for you!
And here comes the crux: You can use the mx:Binding tag or the Observe/ObserveValue tag to invoke a view method, once the hasEmployees Boolean value changes.
I’d recommend using the ObserveValue tag to listen to a specific value of a state change of your model. More precisely, you can bind the source property of it to the bindable hasEmployees property of your model object Employees.
<ac:ObserveValue
You could have also used the Observe tag to listen to all updates of the hasEmployees property. For more information, check this out.
The ObserveValue tag above will invoke the createEmployeeList method defined in a Script block of the same MXML file. This method will invoke the popup.
private function createEmployeeList() : void
{
    var application : DisplayObject = DisplayObject( Application.application );
    var popup : IFlexDisplayObject = PopUpManager.createPopUp( application, EmployeeList, true );
    PopUpManager.centerPopUp( popup );

    var concretePopup : EmployeeList = EmployeeList( popup );
    concretePopup.employees = model.employees;
}
That's it! Through a state change in your model, your view has reacted. Furthermore, you can now have many other objects observe this particular state of your model, and they can all act independently.
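The mechanism underneath this — many objects reacting independently to one model state change — is the classic observer pattern. A minimal, framework-free sketch in plain Python (the names and structure are mine, not Cairngorm's):

```python
class Model:
    """Holds application state and notifies observers when it changes."""

    def __init__(self):
        self._observers = []
        self._logged_in = False

    def observe(self, callback):
        self._observers.append(callback)

    @property
    def logged_in(self):
        return self._logged_in

    @logged_in.setter
    def logged_in(self, value):
        self._logged_in = value
        for notify in self._observers:   # every registered view reacts independently
            notify(value)


events = []
model = Model()
model.observe(lambda v: events.append(("popup", v)))        # one view opens a popup
model.observe(lambda v: events.append(("status_bar", v)))   # another updates a status bar
model.logged_in = True
print(events)  # [('popup', True), ('status_bar', True)]
```

The ObserveValue tag plays the role of the registered callback here: the view subscribes to one bindable property and fires its handler when the value changes.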
Cheers!
Posted by auhlmann at 02:02 PM | Comments (19)
September 28, 2006
Cairngorm Sample – How Business Logic Can Manage Views Part IV.
Posted by auhlmann at 11:15 AM | Comments (8)
July 20, 2006
Cairngorm Sample – How Business Logic Can Manage Views Part III
Posted by auhlmann at 11:47 AM | Comments (6)
July 05, 2006
Cairngorm 2 (for Flex 2) – Simple Sample Applications
Posted by auhlmann at 04:39 PM | Comments (29)
June 01, 2006
Cairngorm Sample – How Business Logic Can Manage Views
Posted by auhlmann at 12:20 PM | Comments (9)
I would like to use a constant in my ejb to be used for indicating a sort order. i.e.:
public final static String SORT_NAME = "name";
public final static String SORT_STATE = "state";
...
Where, perhaps, the strings are the column names in the database or something to that extent. I would then want users to be able to access this when doing, say, an ejbFindByWhatever(sort). Obviously, I know how to put a constant into a POJO, so I know how to place it into my EJB.
My question, however, is where it is appropriate to put the constant declaration. Should it go in the implementation bean, the home interface, the remote interface, or another class entirely? To make the question more general, what if the constant is to be used not only on the home interface (as it would be here), but on the remote interface as well?
EJB constants (3 messages)
- Posted by: Aaron Craven
- Posted on: March 17 2005 14:35 EST
Threaded Messages (3)
- EJB constants by Zhong ZHENG on March 17 2005 17:07 EST
- Constants in interfaces by Jonas Andersen on March 18 2005 14:14 EST
- Constants in interfaces by Aaron Craven on March 18 2005 11:36 EST
EJB constants[ Go to top ]
In my opinion, you may place the constants wherever you want. If you want those constants to be accessible via both remote interface and home interface, you may create an interface to hold the constants, then make your remote interface and home interface extend it, and your bean class implement it.
- Posted by: Zhong ZHENG
- Posted on: March 17 2005 17:07 EST
- in response to Aaron Craven
Constants in interfaces[ Go to top ]
Of course this is just a matter of personal taste, but I definitely prefer to refer to the constants in their interface instead of implementing that interface by classes using them.
- Posted by: Jonas Andersen
- Posted on: March 18 2005 14:14 EST
- in response to Zhong ZHENG
The primary reason is that (almost always) those interfaces with constants are not types. It makes little sense to implement the interface.
If you use a proper IDE, then implementing the interface will also pollute your code-completion popups.
Constants in interfaces[ Go to top ]
- Posted by: Aaron Craven
- Posted on: March 18 2005 23:36 EST
- in response to Jonas Andersen
...prefer to refer to the constants in their interface instead of implementing that interface by classes using them.
By this, you mean create an interface containing the constants and simply refer to that interface when wishing to use the constants? I.E.:
public interface SortColumns {
    public final static String NAME = "name";
    ...
}
public class SomeEntity extends EJBObject {
...
ejbFindByWhatever(String sort) {
...
}
...
}
and then use it something like this:
myObj = SomeEntityHome.findByWhatever(SortColumns.NAME);
This all makes sense, but it does kind of seem overkill to create a whole new interface to provide constants that are associated really with only one object/entity. But then, that's probably one of the trade-offs with EJBs, huh? That and a part of me is soooooo ready for the enumeration functionality in J2SE 5...
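For what it's worth, the enum route mentioned above makes the "where do the constants live" question mostly disappear, because the constants and their type travel together. A hedged sketch of the idea in Python (the finder and the query string are invented for illustration):

```python
from enum import Enum


class SortColumn(Enum):
    """Type-safe replacement for loose String sort constants."""
    NAME = "name"
    STATE = "state"


def find_by_whatever(sort: SortColumn) -> str:
    # Hypothetical finder: embeds the validated column name in an ORDER BY clause.
    return f"SELECT * FROM some_entity ORDER BY {sort.value}"


print(find_by_whatever(SortColumn.NAME))  # SELECT * FROM some_entity ORDER BY name
```

A caller can only pass one of the declared members, so there is no separate constants interface to place, implement, or import.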
[V8] Play sound on server on button event.
I would like to play a sound every time a custom button is pressed. The button is calling a server.action to update a custom field on an order and will be use to inform a user that the order is ready to be delivered. I would like to use pyglet with this simple script:
#!/usr/bin/env python
import pyglet
wavfile = '/home/effe/theetone.wav'
sound = pyglet.media.load(wavfile)
core = pyglet.media.Player()
core.queue(sound)
core.play()
But it is not working when I use a python expression on server.actions.
ValueError: opcode IMPORT_NAME not allowed (u"import pyglet\n\nwavfile = '/home/effe/theetone.wav'\nsound = pyglet.media.load(wavfile)\ncore = pyglet.media.Player()\ncore.queue(sound)\ncore.play()")
Seems I can't declare an import (import pyglet) in the python expression field, and even if I put the import line in my module it seems not to work. Where am I wrong?
Hello,
I think import will not work in the code section of the server action ...
you can make a new function in your model and call it from the action, model.py :
def play_sound(self, cr, uid, context=None):
myfile = '/home/ahmed/Music/bell.ogg'
sound = pyglet.media.load(myfile)
core = pyglet.media.Player()
core.queue(sound)
core.play()
return True
Then from the action you can call this function:
<record model="ir.actions.server" id="play_sound_action">
<field name="name">Play Server Action</field>
<field name="model_id" ref="model_test_test"/>
<field name="code">
#
# you code here
#
self.play_sound(cr, uid, context)
</field>
</record>
Regards ..
I've added on my modified point_of_sale.py the import and inherit the pos.order model:
I've added the import on my modified point_of_sale.py and inherited the pos.order model:

class pos_order(osv.Model):
    _inherit = 'pos.order'

    def play_sound(self, cr, uid, context=None):
        myfile = '/home/effe/KDE_Beep_Digital_1.ogg'
        sound = pyglet.media.load(myfile)
        core = pyglet.media.Player()
        core.queue(sound)
        core.play()
        return True

The update of pos.order worked well, but I can't figure out how to implement the python code directly from the GUI... I've added the line
self.play_sound(cr, uid, context)

but obviously it didn't work:
ValueError: "'pos.order' object has no attribute 'play_sound'" while evaluating u'self.play_sound(cr, uid, context)'

Where am I wrong?
Hi, where did you add the self.play_sound(cr, uid, context) ?
Ahmed,
since I got some issues even in PyCharm, I've updated my script to use pygame instead of pyglet, because I need to separate the audio channels:
import pygame
import time
pygame.init()
sound = pygame.mixer.Sound("/home/effe/theetone.wav")
channel = sound.play()
#depending of the sound use left or right channel to mute
channel.set_volume(1,1)
time.sleep(2)
Works fine in the IDE, but I'm still unable to correctly create a function in point_of_sale.py and link it to the server.action via the GUI in Odoo.
Great! I'll try it on point_of_sale.py whenever I have spare time ... It will be helpful if you can tell me the scenario, because I think you're working on POS ... and maybe we will need to call the function from js
Amhed, thanks for your time.
Scenario:
I'm working on a new module for my restaurant. Since we have just one thermal printer and sincerely I don't like how Odoo manages printers, I started to create a new flow using a server, a client on the kitchen and a tablet to make the orders.
The server is located between the kitchen and the front desk and has one speaker for each area. When I send the order to the kitchen, the server will play a sound on the right speaker:

channel.set_volume(0,1)

and this function should be implemented in the POS js as you propose (how is another story).
When an order is done and the button linked to the actions.server in my form view is pressed, another, different sound will be sent to the left speaker:

channel.set_volume(1,0)

That will be the def I need in my .py.
I'm already using this method (without sounds linked) and it works fine for my needs, but having an alert will grab attention when there are many customers. When everything is ready I'll pack a module and share it with the community, because there are many missing pieces for a restaurant implementation in stock Odoo. Think of the pos_restaurant module, for example: if you don't have an ESC/POS-compatible printer it is pretty unhelpful, because the module can't generate PDFs or interactive views.
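One way to keep the left/right routing out of the playback code is to map speaker labels to the (left, right) volume pairs that pygame's Channel.set_volume expects. A small sketch (the function and label names are my own invention, matching the kitchen/front-desk setup above):

```python
def speaker_volumes(speaker):
    """Map a speaker label to the (left, right) pair for pygame's Channel.set_volume."""
    volumes = {
        "kitchen": (0.0, 1.0),   # right speaker only, as in set_volume(0, 1)
        "front":   (1.0, 0.0),   # left speaker only, as in set_volume(1, 0)
        "both":    (1.0, 1.0),
    }
    if speaker not in volumes:
        raise ValueError(f"unknown speaker: {speaker}")
    return volumes[speaker]


print(speaker_volumes("kitchen"))  # (0.0, 1.0)
```

The playback function then only needs the label, e.g. channel.set_volume(*speaker_volumes("kitchen")), which keeps the routing logic testable without any audio hardware.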
Hi, I think in Odoo 9 there will be a new POS for restaurants : anyway I think the second part is working with the script in the answer right ?
Yes, a new pos with (finally) tables management. What second part are you talking about? I'm still stuck, but for now I'm not working on pos 9 because it is unstable. In production we use V8.
Hi, the second part is : when the order is done "and the button linked to the actions.server" ... - the first part is when you send the order to the kitchen ...
The script that sends the order to the kitchen works, but not the sound.
FEDERICO LEONI: My question is whether you have any idea how to play the sound on a single system rather than on the server. I have 10 scanner machines; when a scanned product is not found on one of those systems, that system should play the sound. How can I achieve this?
In all the new IDEs when using open and close curly brackets the auto-formatting usually indents, and uses the extendable block (with the + sign) even if it is not necessary.
For example, I like when all my functions have their own little extendable block, but when I place them all in a namespace it makes an unnecessary indent and my functions no longer have the block, i.e.:
namespace my {
    int myfunc() {
        code
        more code
    }
}
instead of the preferred version --
namespace my {
int myfunc()
{
    code
    more code
}
}
It always bothered me, but now I randomly thought of a solution, using defines!
#define OPENBRACKET {
#define CLOSEBRACKET }

namespace my OPENBRACKET
int myfunc()
{
    code
    more code
}
CLOSEBRACKET
I was so happy, I had to share it with someone :D.
P.S. I love the C-styled #defines :icon_evil:
Check If a String Contains All Binary Codes of Size K — Day 95 (Python)
We are given a string that contains two characters i.e. “1” and “0”. We are also given a number “K”. We need to check if all the binary codes of size K can be formed using the substrings from the input string.
To solve this problem, as a first step we would need to make a list of all binary codes that can be formed using the substrings from the input string. We need to keep in mind that the substrings can be repeated, hence we will be taking only the distinct substrings. Let us look at how we can do it programmatically.
all_substring = set()
for i in range(len(s)-k+1):
    all_substring.add(s[i:i+k])
What do we do next? We know our substrings will contain only two characters, either 1 or 0. And the size of substrings is “K”. This means we need to have 2^K substrings. Therefore we would just check if the number of substrings in the set of substrings is equal to 2^K. If yes, we return True else False.
class Solution:
    def hasAllCodes(self, s: str, k: int) -> bool:
        all_substring = set()
        for i in range(len(s)-k+1):
            all_substring.add(s[i:i+k])
        return True if len(all_substring) == pow(2,k) else False
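A small optimization not shown in the article's version: the scan can stop as soon as all 2^K codes have been seen, instead of always building the full set. A sketch with a couple of hand-checked inputs:

```python
def has_all_codes(s: str, k: int) -> bool:
    need = 1 << k                 # 2**k distinct codes must appear
    seen = set()
    for i in range(len(s) - k + 1):
        seen.add(s[i:i + k])
        if len(seen) == need:     # early exit: every code already found
            return True
    return False


print(has_all_codes("00110110", 2))  # True: 00, 01, 11, 10 all appear
print(has_all_codes("0110", 2))      # False: "00" never appears
```

The worst-case complexity is unchanged, but strings that contain all codes early are rejected or accepted without scanning to the end.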
Complexity analysis.
Time Complexity
The time required to create substrings is O(NK). Hence time complexity is O(NK).
Space Complexity
The space complexity is O(2^K · K), since the set can hold up to 2^K distinct substrings, each of length K.
Build a data-bound grid with C# and ADO.NET
Takeaway: Irina Medvinskaya examines how to access SQL Server-based data using C# and ADO.NET and display the data in a data-bound grid control.
Data access is the basis of any application. In this article, I will show you how to access SQL Server-based data using C# and ADO.NET, as well as how to display the data in a data-bound grid control. I use a simple C# application as an example.
ADO.NET architecture
ADO.NET allows you to work without needing to maintain a connection. Additionally, it allows you to switch from one data source to another with just a few lines of code.
The core objects in ADO.NET are Command, Connection, DataReader, and DataAdapter. They are the basis of all data operations in .NET.
Core ADO.NET namespaces
- System.Data: Serves as a basis for other namespaces and makes up objects such as DataTable, DataColumn, DataView, and Constraints.
- System.Data.Common: Defines generic objects shared by the different data providers, which include DataAdapter, DataColumnMapping, and DataTableMapping. It is used by the data providers and contains the collections that are useful for accessing data sources.
- System.Data.OleDb: Defines objects that you can use to connect to data sources and to modify the data in the various data sources. It is written as the generic data provider, and the implementation provided by the .NET Framework contains the drivers for SQL Server, the Microsoft OLE DB Provider for Oracle, and the Microsoft Provider for Jet 4.0. The namespace is useful when you need to connect to many different data sources and you want better performance than a more generic provider.
- System.Data.SqlClient: Takes advantage of the SQL Server APIs directly and offers a better performance than the more generic System.Data.OleDb namespace. It is a data provider namespace created specifically for SQL Server version 7.0 and up.
- System.Data.SqlTypes: Provides classes for data types specific to SQL Server. The namespace is designed specifically for SQL Server and offers better performance than other namespaces but only when dealing with the SQL Server backend.
- System.Data.Odbc: Works with all compliant ODBC drivers. This namespace is supported only in version 1.1 of the .NET Framework, so installing the new Framework is the way to get it.
Data grid example
Add a data grid control to the form, dataGrid1, as shown in Figure 1. In order to get the sample code in Listing A to work, you need to utilize the following namespaces:
using System.Data;
using System.Data.OleDb;
The code defines two variables: strConn and strSQL. strConn is set to the required connection string for utilizing the JET database using OLEDB and pointing to a location of the Northwind.mdb database on the local machine. strSQL specifies the query I want to run on the Access database (Northwind.mdb).
Next, I define the OleDbDataAdapter object da and pass it the query statement (strSQL) and the connection string (strConn). Notice that I am not creating a Connection object in the example.
Then, I define the dataset ds, which is used to get the actual data from the Customers table onto the grid control. I specify the DataMember property of the data grid control dataGrid1 to point to the table where I am getting the data and set the control's DataSource property to the DataSet ds. (The DataMember property gets/sets a table in the DataSource used to bind to the control, and the DataSource property gets/sets the data source used to populate the control.) When you run the code in Listing A, it looks like Figure 2.
I display the data from the C:\DataAccess\Northwind.mdb database, and see only the columns I chose in the select statement. If the number of rows or columns is larger than what can fit on the page, the grid control will automatically show the scrollbars.
Now you know the basics of using ADO.NET in a C# application and creating a data grid control to display the data returned by the query.
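The overall flow — run a query, detach the results into an in-memory structure, then bind them to a display — can be sketched outside of .NET as well. Here is a rough Python analogue using the standard-library sqlite3 module as a stand-in for the Access/OLE DB stack (the table contents are invented):

```python
import sqlite3

# In-memory stand-in for the Northwind Customers table used in the article.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (CustomerID TEXT, CompanyName TEXT, City TEXT)")
conn.executemany(
    "INSERT INTO Customers VALUES (?, ?, ?)",
    [("ALFKI", "Alfreds Futterkiste", "Berlin"),
     ("ANATR", "Ana Trujillo Emparedados", "Mexico D.F.")],
)

# "Fill" a disconnected result set from a query, like a DataAdapter filling a DataSet.
rows = conn.execute(
    "SELECT CustomerID, CompanyName FROM Customers ORDER BY CustomerID"
).fetchall()
conn.close()  # connection is gone; the detached rows remain usable

# The grid-binding step is simulated by rendering the detached rows.
for customer_id, company in rows:
    print(customer_id, company)
```

The key idea mirrors ADO.NET's disconnected model: after the fetch, the rows live entirely in memory and the display layer never touches the connection.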
Irina Medvinskaya has been involved in technology since 1996. She has an MBA from Pace University and works as a Project Manager at Citigroup.
How to auto switch browser tabs
Imagine you have a big monitor and you would like to display something from multiple web links, would it be nice if there is a way to auto switch between the multiple browser tabs in a fixed period? In this article, I will be sharing with you how to auto switch browser tabs via selenium, an automated testing tool.
There is very detailed documentation on the python selenium library; you may want to check this document as the starting point. For this article, I will just walk through the complete code for this automation, so that you can use it as a reference in case you are trying to implement something similar.
Let’s get started!
To auto launch the browser, we need to first download the web driver for the browser. For instance, if you are using chrome browser, you may download the driver file here. Do check your browser version to make sure you download the driver for the correct version.
As the prerequisite, you will also need to run the below command to install the selenium package in your working environment.
pip install selenium
Launch the browser
Then import all the necessary modules into your script. For this article, we will need to use the below modules:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.common.exceptions import SessionNotCreatedException
import time
import os, sys
Let’s assume we want to display the below 3 links in your browser and make them auto switching between each other:
url_1 = "
url_2 = "
url_3 = "
Assuming you’ve already downloaded the chrome driver file and put it into the current script folder. Then let’s start to initiate the web driver to launch the browser:
options = Options()
options.add_experimental_option('useAutomationExtension', False)

try:
    driver = webdriver.Chrome(executable_path=os.getcwd() + "\\chromedriver.exe", options=options)
except SessionNotCreatedException as e:
    print(e)
    print("please upgrade the chromedriver.exe from ...")
    sys.exit(1)
You may wonder why we need a options parameter here? It’s actually optional, but you may see the “Loading of unpacked extensions is disabled by the administrator” warning without setting useAutomationExtension to False. There are plenty of other options to control the browser behavior, check here for the documentation.
As frequently you will see there is a new version of chrome, and it may not work with old driver file anymore. So, it’s better we catch this exception and show some error message to guide users to upgrade the driver.
You can set the chrome window position by doing the below, but it does not matter if you wish to maximize the window later.
driver.set_window_position(2000, 1)
Let’s open the first link and maximize our window (This also can be done by
options.addArguments("start-maximized")). And we want to execute some JavaScript to zoom out a bit so that we can see clearly.
#open window 1 driver.get(url_1) driver.maximize_window() driver.execute_script("document.body.style.zoom='120%'") time.sleep(1)
To open the second tab, we need to use JavaScript to open a blank tab, and then switch the active tab to the second one. The driver.window_handles attribute keeps a list of handles for the opened windows, so window_handles[1] refers to the second tab.
driver.execute_script("window.open('');")
driver.switch_to.window(driver.window_handles[1])
Next, we will open the second link. And for this tab, let's scroll down 300px to skip the ads section at the page header.
# open second link
driver.get(url_2)
driver.execute_script("document.body.style.zoom='90%'")
driver.execute_script("window.scrollBy(0,300);")
time.sleep(1)
Similarly, we can open the third tab with the below code:
# open window 3
driver.execute_script("window.open('');")
driver.switch_to.window(driver.window_handles[2])
driver.get(url_3)
driver.execute_script("document.body.style.zoom='90%'")
driver.execute_script("window.scrollBy(0,200);")
time.sleep(1)
Auto switch between tabs
Once everything is ready, we shall write the logic to auto switch between the different tabs at a certain interval. To do that, we need to know how to perform the below 3 things:
- Identify what is the active link showing now
We can use driver.title attribute to check if the page title contains certain keyword for the particular website, so that we know which page is active now
- Switch to a new tab
We can continue to use driver.switch_to.window to switch the tab, but we need to have logic to determine which is the next tab we want to switch to
- Refresh the page (in case there is any updates)
We can use driver.refresh() to refresh the page, but we will lose the setting such as zooming in/out, so we need to set it again
So let’s take a look at the complete code:
nextIndex = 2
start = time.time()

while True:
    # stop running after 5 minutes
    if (time.time() - start >= 5*60):
        break
    if "Google Maps" in driver.title:
        driver.refresh()
        driver.execute_script("document.body.style.zoom='120%'")
        time.sleep(3)
        nextIndex = 0 if nextIndex + 1 > 2 else nextIndex + 1
    elif "CNN" in driver.title:
        driver.refresh()
        driver.execute_script("document.body.style.zoom='90%'")
        time.sleep(5)
        nextIndex = 0 if nextIndex + 1 > 2 else nextIndex + 1
    elif "Weather" in driver.title:
        driver.refresh()
        driver.execute_script("document.body.style.zoom='90%'")
        time.sleep(2)
        nextIndex = 0 if nextIndex + 1 > 2 else nextIndex + 1
    driver.switch_to.window(driver.window_handles[nextIndex])
So each tab will be active for a few seconds before switching to the next one. And after 5 minutes, this loop will stop.
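As a side note, the nextIndex bookkeeping repeated in each branch of the loop can be collapsed into modular arithmetic. A tiny refactor sketch (testable without launching a browser):

```python
def next_tab(index, tab_count):
    """Index of the tab to activate after `index`, wrapping around."""
    return (index + 1) % tab_count


order = []
i = 2                       # start on the third tab, like nextIndex = 2 above
for _ in range(6):          # simulate six switches across three tabs
    i = next_tab(i, 3)
    order.append(i)
print(order)  # [0, 1, 2, 0, 1, 2]
```

This also makes it trivial to add a fourth tab later: only tab_count changes, not the branch logic.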
If we wish to close all tabs at the end of the script, we can perform the below:
for window in driver.window_handles:
    driver.switch_to.window(window)
    driver.close()
So that’s it and congratulations that you have completed a new automation project to auto switch browser tabs for Chrome. As per always, welcome any comments or questions. | https://www.codeforests.com/2020/07/03/how-to-auto-switch-browser-tabs/ | CC-MAIN-2022-21 | refinedweb | 990 | 53.1 |
Pros and Cons of Convertible Bonds
Some corporate bond issuers sell bonds that can be converted into a fixed number of shares of common stock. With a convertible bond, a lender (bondholder) can become a part owner (stockholder) of the company by converting the bond into company stock.
Having this conversion option is a desirable thing (options are always desirable, no?), and so convertible bonds generally pay lower interest rates than do similar bonds that are not convertible.
If the stock performs poorly, then there is no conversion. You are stuck with your bond's lower return (lower than what a nonconvertible corporate bond would get). If the stock performs well, there is a conversion.
While convertible bonds are not necessarily horrible investments, they may not deserve a very sizeable allotment in most individuals' portfolios.
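To make the conversion decision concrete, here is a toy calculation (all numbers invented): a $1,000 bond convertible into 20 shares is worth converting only once the shares trade above $50.

```python
def conversion_value(ratio, share_price):
    """Market value of the shares received if the bond is converted."""
    return ratio * share_price


BOND_FACE = 1000   # face value of the bond, in dollars
RATIO = 20         # shares received per bond on conversion

for price in (40, 50, 60):
    value = conversion_value(RATIO, price)
    decision = "convert" if value > BOND_FACE else "hold"
    print(price, value, decision)
```

At $40 the shares are worth only $800, so the holder keeps the bond; at $60 they are worth $1,200, and conversion captures the upside, which is exactly the option the lower coupon pays for.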
YOW! 2011: Joe Albahari - LINQ, LINQPad, and .NET Async (and a little Rx, too)
- Posted: Dec 22, 2011 at 3:47 PM
- 42,640 Views
- 9 Comments
Joe Albahari is the creator of LINQPad, an application that many of you use in your daily development of .NET applications/services, especially those that employ LINQ in some fashion. It's just a fantastic developer tool for C#; one that C9 celebrity genius and avid LINQPad user Brian Beckman calls "the app I wish I wrote". Erik Meijer, the creator of LINQ, uses LINQPad daily. If you haven't played around with LINQPad, then you need to! [End advertisement for LINQPad. Hey, great work deserves praise, don't you think?]
Joe's also the author of a few C# books (targeting both pro developers and novices) and two books covering WPF. Joe lives in Perth, Australia and works for himself (right on!). Great to meet you, Joe.
Thanks again for creating and continuing to evolve LINQPad, Joe!!
I use LINQPad for so many things, it's easy to forget how valuable it really is. It's like a writer's favorite text editor, only for a C# developer. I don't know what I'd do without it. From debugging, optimizing, and creating nice database queries, to writing 100s of plain C# management scripts to administer SharePoint farms. Not sure what I'd do without it. Agreed! Thanks for this awesome little (big) tool.
Hands up, I haven't been using LinqPad but it looks interesting! My first thought was to use it in an existing project where I am looking at querying RSS-style feeds. Is there an existing generic-style XML driver (or an RSS-specific one?) or do I need to be looking at creating one for LinqPad? With apologies if this is a bit of a "noob" question.
OK, I get it now (slaps hand on forehead)
Cool!
LINQPad and Joe rocks. Bought an autocomplete license for work.
Watching...
.NET has some pretty amazing projects, with LINQPad being a project "that goes to eleven"
It's great to hear from Joseph ... LINQPad is one of my favorite tools, and the premium version is well worth it. I use it as much as I use Visual Studio for all those things that VS is too heavyweight for. I write a function and test it in LINQPad before I consider adding it to a project some place in VS. It's like a brainstorming tool, because it just takes so much less time to use than VS. So, I hope that it stays light weight and responsive and it sounds like it will as new features rely on asynchrony features of C# 5.
Roslyn will certainly be a game changer for this kind of application. I was wondering if and when you'd ask about that Charles ... you left it until the end, and I was biting my tongue the whole time waiting to post, "why didn't you ask about Roslyn?".
Joseph did mention that it will make things simpler for him, but it will afford more opportunity for competitors to develop IDEs.
One of my colleagues said "With ordinary tools, you code and then test; with LinqPad, you test and then code." Use it in concert with the Visual-Studio Unit Test Framework (VSUTF) and you will be writing bulletproof code with unbelievable speed.
Technique: get your code working in LinqPad where the code-test-look cycle is fast and frictionless (LinqPad's Dump() + the charting tools in System.Windows.Forms.DataVisualization + plus the Sho-viz libraries are worth their weight in diamonds for code-test-look speed!). Copy all your test stuff into VSUTF and the target code into your VS projects and BAKE 'EM. Really great!
Quite. It is so much faster to prototype and test in LinqPad. I wish I'd started with this approach to begin with.
I do miss NuGet integration though. It would be much easier to quickly toss together some new script using existing open-source libraries. It just has to be done in the usual light-weight LinqPad way.
@exoteric:Even without NuGet, it still pays off bigtime. I have some LinqPad scripts where i have integrated OpenCv (via PInvoke), QuickGraph (see codeplex), the SQL server Geomety and Geography data types, LINQ / Objects, LINQ / XML, LINQ / Rx, Task lib, all in a single script (not just to use them for sophomoric show-off fun, but because they were the most economical way to get the job done!). I took the effort to pull this all into Visual Studio (not really hard, mostly just rationalizing the namespaces), and by gum it all just works. <3 + + + +
Remove this comment
Remove this threadclose | http://channel9.msdn.com/Blogs/Charles/YOW-2011-Joe-Albahari-LINQ-LINQPad-and-NET-Async?format=html5 | CC-MAIN-2014-49 | refinedweb | 818 | 72.16 |
go to bug id or search bugs for
New/Additional Comment:
Description:
------------
A php programmer who use xdebug's remote debugging feature may affects RCE when he just access to attacker's website in modern browsers. The detail as follows.
As in the doc() of xdebug, if `xdebug.remote_connect_back` is enabled, the `xdebug.remote_host setting` is ignored and Xdebug will try to connect to the client that made the HTTP request. It checks the `$_SERVER['HTTP_X_FORWARDED_FOR']` and `$_SERVER['REMOTE_ADDR']` variables to find out which IP address to use.
If xdebug is configured as follows, we can use `$_SERVER['HTTP_X_FORWARDED_FOR']` as the connect back ip.
```
xdebug.remote_connect_back = 1
xdebug.remote_enable = 1
xdebug.remote_log = /tmp/test.log
```
For php programmers, they always setup a local environment and enable xdebug to debug php programs. If we can send a http request with `X-Forwarded-For` header(which points to attacker's server) to local url just like `` or `` then attacker can get a xdebug's connect back and then use DBGp commands to execute any php code.
As we all know, to send special request headers using ajax in browsers, ajax first send an `OPTIONS` request to the target url and we need `Access-Control-Allow-Headers: X-Forwarded-For` in the response headers, but there always no such a header in the local program.
There is a tech called dns rebinding. Attacker can setup a private dns server and make a domain first resolved to attacker's ip and then 127.0.0.1 . The victim first access attacker's domain in the browser, pull payloads from attacker's server and then DNS changed to 127.0.0.1 and the browser launch the exploit to 127.0.0.1 . As the domain do not change, the browser's security features are still obeyed, but the exploit is send to 127.0.0.1.
The urls referenced
Test script:
---------------
The payload used in attacker's website.
```js
<script type="text/javascript">
function exp(url, remote){
var invocation = new XMLHttpRequest();
invocation.open('GET', url, true);
invocation.setRequestHeader('X-Forwarded-For', remote);
invocation.onreadystatechange = function(){};
invocation.send();
}
url = '';
remote = '8.8.8.8' // attacker's server ip
exp(url, remote)
setInterval(function(){exp(url, remote)}, 1000)
</script>
```
The payload for attacker's index.php.
```php
<?php
header('Access-Control-Allow-Origin: *');
header('Access-Control-Allow-Headers: X-Forwarded-For');
```
The payload for xdebug's connect back.
```py
#!/usr/bin/python2
import socket
ip_port = ('0.0.0.0',9000)
sk = socket.socket()
sk.bind(ip_port)
sk.listen(10)
conn, addr = sk.accept()
while True:
client_data = conn.recv(1024)
print(client_data)
data = """system('whoami');"""
conn.sendall('eval -i 1 -- %s\x00' % data.encode('base64'))
```
Add a Patch
Add a Pull Request
I think the way to solve this is not to get connect back ip from X-Forwarded-For header.
the idea is just like
you can check that for reference
Please read
Not a security issue
* requires the use of debugging facilities - ex. xdebug, var_dump | https://bugs.php.net/bug.php?id=76149&edit=1 | CC-MAIN-2019-22 | refinedweb | 500 | 50.53 |
i
i
@ mPatterson557 - Not exactly secure but you could simply mark the member as a public static and init it outside of using the gadgeteer designer:
namespace StaticMember { public partial class Program { public static GTM.GHIElectronics.RS232 SerialObject; // This method is run when the mainboard is powered up or reset. void ProgramStarted() { /******************************************************************************************* Modules added in the Program.gadgeteer designer view are used by typing their name followed by a period, e.g. button. or camera. Many modules generate useful events. Type +=<tab><tab> to add a handler to an event, e.g.: button.ButtonPressed +=<tab><tab> If you want to do something periodically, use a GT.Timer and handle its Tick event, e.g.: GT.Timer timer = new GT.Timer(1000); // every second (1000ms) timer.Tick +=<tab><tab> timer.Start(); *******************************************************************************************/ // Use Debug.Print to show messages in Visual Studio's "Output" window during debugging. Debug.Print("Program Started"); SerialObject = new GTM.GHIElectronics.RS232(1); } } }
And then access will be as you expected it initially:
namespace StaticMember { class TestAccess { public void Init() { Program.SerialObject.serialPort } } }
Just remember to watch namespaces
Why would init out of designer changer how I access it?
How do I drop the program off of the second quoted line?
How would i implement this? Code sample generic is fine or some terms I can do research on thanks.
How would i implement this? What is an event delegate?
Passing as a parameter
public class MyClass{ void Function(RS232 rs232){ rs232.BaudRate = 9600; } }
Or use it as a property off your main program.
public partial class Program { void ProgramStarted() { // Use Debug.Print to show messages in Visual Studio's "Output" window during debugging. Debug.Print("Program Started"); } public RS232 RSPort { get { return rs232; } set { rs232 = value; } } } public class MyClass { void Function() { Program.RSPort.BaudRate = 9600; } }
Static property:
You dont. Program is an object not a namespace.
Passing the object as a parameter will work, however this will consume more memory.
The second example would not work without a reference passed to MyClass
why does it consume more mem?
I thought the objekt was given as referenz and not as copy of itself.
It is, but the reference itself consumes memory, just not as much as the object itself. In most cases, there will be no need to optimize memory consumption to that extent. I only offered it as an option as I do not know the full extent of his project nor the hardware he is using.
Thanks for the Help. This is what I went with eventually: If glaring faults please let me know.
Implemented in code as:
public ILogger logger { get; set; } public IProgram program { get; set; } public virtual bool GetAFI(out byte AFI){ try{ byte[] tid = null; if (RetrySend(GetUIDCommand, 17, ref tid)) { AFI = tid[10]; return true; } } catch (Exception exception){ logger.LogError("RFID", "GetTID()", exception.Message, exception.StackTrace, exception.InnerException); } AFI = 0x00; return false; }
Interface to expose methods:
public interface IProgram { bool rs232IsOpen { get; } void rs232Open(); void rs232Close(); void rs232Send(byte[] bytes); byte[] rs232Send(byte[] bytes, int expectedBytes); }
Example Method exposed from program:
private void rs232Open() { try { if (!rs232IsOpen) { rs232.serialPort.Open(); Thread.Sleep(10); } } catch (Exception exception) { Logger.LogError("rs232.serialport", "Open()", exception.Message, exception.StackTrace, exception.InnerException); } }
;D ;D ;D :dance: | https://forums.ghielectronics.com/t/referring-to-modules-from-separate-classes/15792 | CC-MAIN-2019-22 | refinedweb | 539 | 51.24 |
Battery level don't show up
- cheesepower last edited by cheesepower
Hello all,
I encounter an issue with my battery powered sensor.
It's based on an arduino pro mini 3,3 V, a DHT 22 and powered via a LiPo 3,7V battery.
It runs on a Vera 3 with the 1.4 library.
After inclusion, my vera see 1 node, 1 humidity sensor and 1 temperature sensor. The temperature and humidity are working fine, The problem is that the battery level don't show up
On th serial connection i can see my battery level.
Here is my code:
''''
#include <SPI.h>
#include <MySensor.h>
#include <DHT.h>
#define CHILD_ID_HUM 0
#define CHILD_ID_TEMP 1
#define HUMIDITY_SENSOR_DIGITAL_PIN 3
unsigned long SLEEP_TIME = 300000; // Sleep time between reads (in milliseconds)
int BATTERY_SENSE_PIN = A0; // select the input pin for the battery sense point
MySensor gw;
DHT dht;
float lastTemp;
float lastHum;
boolean metric = true;
MyMessage msgHum(CHILD_ID_HUM, V_HUM);
MyMessage msgTemp(CHILD_ID_TEMP, V_TEMP);
int oldBatteryPcnt = 0;
void setup()
{
gw.begin();
dht.setup(HUMIDITY_SENSOR_DIGITAL_PIN);
// Send the Sketch Version Information to the Gateway
gw.sendSketchInfo("Humidity", "1.0");
gw.sendSketchInfo("Battery Meter", "1.0");
// Register all sensors to gw (they will be created as child devices)
gw.present(CHILD_ID_HUM, S_HUM);
gw.present(CHILD_ID_TEMP, S_TEMP);
//metric = gw.getConfig().isMetric;
}
void loop()
{);
}
// get the battery Voltage
int sensorValue = analogRead(BATTERY_SENSE_PIN);41;
int batteryPcnt = sensorValue / 10;
Serial.print("Input Value: ");
Serial.println(sensorValue);
Serial.print("Battery Voltage: ");
Serial.print(batteryV);
Serial.println(" V");
Serial.print("Battery percent: ");
Serial.print(batteryPcnt);
Serial.println(" %");
if (oldBatteryPcnt != batteryPcnt) {
// Power up radio after sleep
gw.sendBatteryLevel(batteryPcnt);
oldBatteryPcnt = batteryPcnt;
}
gw.sleep(SLEEP_TIME); //sleep a bit
}
''''
@cheesepower I've notice that sometimes it take a while for the indicator to show up in Vera.
- cheesepower last edited by
My sensor runs for 2 weeks now.
One more question : where did the battery level is supposed to be ? on the node under a variable or elsewhere ?
Looks like this.....
- cheesepower last edited by
I have this:
- m26872 Hardware Contributor last edited by m26872
I assume you've done extensive resets, reloads, reboots of sensor, vera, luup, and gateway in all diffent ways?
And a little dare to use custom device names before you get it going if you ask me.
And yes, the battery level as a number is also visible under variables after first receive..
@cheesepower as @m26872 suggested, you should try a power cycle your equipment.. if that doesn't work...... you have a problem. I can't see any issue in the sketch.
- cheesepower last edited by cheesepower
Yes i made several reset, exclusions, inclusions, vera reset and clear EPROM but nothing worked...
I made a new sensor and it works perfectly
Strange... I will try again when i will have more time.
Thanks for the help | https://forum.mysensors.org/topic/706/battery-level-don-t-show-up/8 | CC-MAIN-2019-22 | refinedweb | 466 | 51.95 |
I have found that "tuple-driven" code is a nice way to coerce
what would otherwise be a long string of special cases into
a short processing loop. Just collect all of the special case
information into a list or tuple of tuples, and loop over that
list.
This technique is usable in other languages, but I find I use
it frequently in Python because of the ease of creating the
initial tuple, and the fact that the dynamic nature of python
makes dynamic assignment possible. ( in other words, this sort
of thing is quite difficult in C, but quite easy also in Lisp.)
The example below is going to be part of a class library to
read tar files. (It is work in progress, and there is a bug in
it somewhere, but I'm using it as an example.) The _map tuple
are triples of ( fieldname, size-in-chars, interpret-as )
where interpret-as is 's'-string, 'o'-octal number, 'd'-decimal-number.
( and probably the bug is that I've got a decimal marked as
an octal, or something. )
Note: 'string.stringstrip' is string.strip renamed before trying
to import strop, in my (modified) version of string.py. Changing
string.whitespace ( or even strop.whitespace ) doesn't make
strop.strip "do the right thing". ( I didn't even BOTHER to try
to find a 'portable' way to do it this time! )
For each tuple in the set, a character field is extracted,
stripped, possibly eval-ed into an integer, and made into
an attribute ( instance variable ) of the class instance.
[ I have used this technique before to create dictionaries and
lists before, but the use of this with 'setattr' to create
a set instance variables is "new trick" that made me think
to post this example. ]
- Steve Majewski
import string
string.whitespace = string.whitespace + '\000'
_map = ( ( 'name', _NAMSIZ, 's' ),
( 'mode',8, 'o'), ('uid',8, 'o' ), ('gid',8, 'o' ),
( 'size', 12, 'd' ), ( 'mtime',12, 's'), ('chksum',8, 's'),
('linkflag',1, 's'),
( 'linkname', _NAMSIZ, 's' ),
( 'magic',8, 's'),
('uname',32,'s'), ('gname',32,'s'),
( 'devmaj',8,'o'), ('devmin',8,'o'))
class TarHeader:
def parse( self, h ):
i = j = 0
for item in _map:
i,j = j, j+item[1]
tmp = string.stringstrip( h[i:j] )
print item, tmp
if item[-1] == 'o' and tmp :
tmp = eval( '0'+tmp )
elif item[-1] == 'd' and tmp :
try: tmp = eval( tmp )
except OverflowError: tmp = eval( tmp+'L' )
if tmp or type(tmp) in ( type(1L), type(1) ) :
setattr( self, item[0], tmp )
return self.__dict__ # for debug | http://www.python.org/search/hypermail/python-1993/0510.html | CC-MAIN-2013-48 | refinedweb | 425 | 69.41 |
base case => node is none
recursive case => Left child is / isn't Leave
class Solution(object): def sumOfLeftLeaves(self, root): if not root: return 0 if root.left and not root.left.left and not root.left.right: return root.left.val + self.sumOfLeftLeaves(root.right) return self.sumOfLeftLeaves(root.left) + self.sumOfLeftLeaves(root.right) # isn't leave
EDIT:
Could be 3 Lines, but L2 would be too long.
thanks @tototo's advise!
@YJL1228 Thanks for sharing. You can remove the 'else:' line to get a 5-line solution :) Besides, as a best practice, comparison to None should always be done with is or is not. So, if you compare root with None explicitly, we'd better write in this way 'if root is None:', or just simply 'if not root:'.
Similar, but somewhat clearer ... (probably should not indent that deep)
def sumOfLeftLeaves(self, root): def helper(node, isLeft): if node: if isLeft and not node.left and not node.right: return node.val # Only returns when isLeft and isLeaf return helper(node.left, True) + helper (node.right, False) return 0 return helper(root, False) # Seems that the OJ doesn't treat bare root as a left leaf
Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect. | https://discuss.leetcode.com/topic/60395/4-lines-python-recursive-ac-solution | CC-MAIN-2017-43 | refinedweb | 214 | 77.64 |
Stephan Ewen created FLINK-4245:
-----------------------------------
Summary: Metric naming improvements
Key: FLINK-4245
URL:
Project: Flink
Issue Type: Improvement
Reporter: Stephan Ewen
A metric currently has two parts to it:
- The name of that particular metric
- The "scope" (or namespace), defined by the group that contains the metric.
A metric group actually always implicitly has a map of naming "tags", like:
- taskmanager_host : <some-hostname>
- taskmanager_id : <id>
- task_name : "map() -> filter()"
We derive the scope from that map, following the defined scope formats.
For JMX (and some users that use JMX), it would be natural to expose that map of tags. Some
users reconstruct that map by parsing the metric scope. JMX, we can expose a metric like:
- domain: "taskmanager.task.operator.io"
- name: "numRecordsIn"
- tags: { "hostname" -> "localhost", "operator_name" -> "map() at X.java:123", ...
}
For many other reporters, the formatted scope makes a lot of sense, since they think only
in terms of (scope, metric-name).
We may even have the formatted scope in JMX as well (in the domain), if we want to go that
route.
[~jgrier] and [~Zentol] - what do you think about that?
[~mdaxini] Does that match your use of the metrics?
--
This message was sent by Atlassian JIRA
(v6.3.4#6332) | http://mail-archives.us.apache.org/mod_mbox/flink-issues/201607.mbox/%3CJIRA.12991484.1469111039000.95831.1469111060791@Atlassian.JIRA%3E | CC-MAIN-2019-43 | refinedweb | 203 | 62.27 |
Functions
Please see the Functions Cheat Sheet for the most updated version:
Why functions?
Reduce code tasks into simples tasks Can easier split up the code between developers Elimination of duplicate code Reuse code Get a good structure of the code Easier debugging.
How do I create a function?
Must be defined before it's used. The function blocks begin with the keyword def followed by the function name and parentheses. The function has to be named plus specify what parameter it has. A function can use a number of arguments. Every argument is responding to a parameter in the function. The function often ends by returning a value using return.
Small Example
Let's demonstrate how this works. Here we create a function by using the keyword "def" followed by the functions name "name" and the parentheses (). We indent the code and write what we want to accomplish in the function. This function will just ask for the users name using the raw_input function and then return the value. The script is "called" by simple typing the name of the function >> name()
def name(): # Get the user's name. name = raw_input('Enter your name: ') # Return the name. return name name(): | https://www.pythonforbeginners.com/functions/python-functions/ | CC-MAIN-2019-18 | refinedweb | 201 | 67.45 |
CFD Online Discussion Forums
(
)
-
FLUENT
(
)
- -
Cannot preview the dynamic mesh
(
)
sam
May 25, 2005 05:27
Cannot preview the dynamic mesh
Hi, i am new to this dynamic mesh udf feature and i have a simple problem like i want to move a piston in a larged cylinder using a dynamci mesh feature.I am using FLUENT 6.2.16. when i compile it gives no error and successfully loaded.I have even tried to preview the mesh with a small time step but in vain.The dynamic mesh parameters i have used in my problem is that i used remeshing and smoothing algorithm. The dynamic zone i have used is that i defined the piston as rigid body and hooked my udf with this boundary. Can anyone help me in this matter. I will appreciate any help and associated examples with this features.
My code is like that:
#include "udf.h" #include "dynamesh_tools.h" DEFINE_CG_MOTION(piston, dt, vel, omega, time, dtime) { /* reset velocities */ NV_S (vel, =, 0.0); NV_S (omega, =, 0.0); if (!Data_Valid_P ()) return; vel[0]= 0.4 ; }
Saad bin Mansoor
May 25, 2005 07:52
Re: Cannot preview the dynamic mesh
If you are using quadrilateral meshes then remeshing won't work. Instead you have to use layering. Further you can use smoothing instead of layering but smoothing has to be activated for a qaudrilateral mesh by the command-line statement
def/mod/dmc/smp/spr, Enter a yes to activate smoothing for quadrilateral cells.
Sincerely, Saad
sam
May 25, 2005 12:54
Re: Cannot preview the dynamic mesh
well dear thanks for replying, further in addition to the last message, i am using the triangular mesh in the domain and near the object i use boundary layer and its only 2d problem.please help me in this matter and give me some samples of such kind of problem if you can. I am really thankful to u Mr. Saad bin Mansoor .
antun
May 26, 2005 05:12
Re: Cannot preview the dynamic mesh
Hi
Check that the adjacent wall/axis is defined as 'deforming'.
sam
May 26, 2005 06:58
Re: Cannot preview the dynamic mesh
in my case there is no axis, it is rigid body motion problem having linear velocity in x-direction.
All times are GMT -4. The time now is
19:12
. | http://www.cfd-online.com/Forums/fluent/36763-cannot-preview-dynamic-mesh-print.html | CC-MAIN-2016-40 | refinedweb | 389 | 64.61 |
Introduction
Food brings people together, on many different levels!
From ‘Chole Bhature’ and ‘Paneer Masala’ of North to ‘Idli’, ‘Dosa’, and ‘Rassam’ of South, from ‘Dal Bati’ and ‘Dal Dhokli’ of Gujarat, Rajasthan to Bengali sweets and Spicy non-vegetarian food of Assam, Maharashtra’s Zunka, bhaji to Bihar’s Litti Chokha! Indian cuisine is all about mouth-watering dishes. It’s not just the food but the emotion!
Here’s an Indian cuisine analysis using various Data Analysis techniques. So, are you ready to experience the sweetness of the east, a bit spicy meal from the north, mouth-watering dishes from the east, and some delicious cuisine of the South?
Here’s the link to the dataset used: Dataset
Introduction to the dataset
This dataset is about the Indian Cuisine Variety. It tells us about various dishes in various states and regions. Besides, it tells us the course of these food dishes and their flavor profiles. So let’s do some analysis of this data! First, importing required libraries :
import geopandas as gpd import plotly.express as px from plotly.offline import init_notebook_mode import matplotlib.pyplot as plt %matplotlib inline from wordcloud import WordCloud , ImageColorGenerator from PIL import Imagess 'pandas.core.frame.DataFrame'>
First, let’s see how many sweet and spicy dishes are included in our dataset.
pie_df = cuisine.flavor_profile.value_counts().reset_index() pie_df.columns = ['flavor_profile', 'count'] fig = px.pie(pie_df, values='count', names='flavor_profile', title='Sweet or Spicy?', color_discrete_sequence=['blue', 'light green']) fig.show()
Time analysis: How much time does it take to prepare and cook these dishes?
Here, some graphs are plotted which will tell us how much preparation and cooking time is required for different dishes. To get to know more about these graphs I’m providing the link of the whole code at the end! Here’s is one sample code for Bar graph :
reg_df = cuisine.flavor_profile.value_counts().reset_index() reg_df.columns = ['flavor_profile', 'prep_time'] reg_df = reg_df.sample(frac=1) fig = px.bar(reg_df,x='flavor_profile',y='prep_time',title='Okay!variety in spicy food items is more, but it takes more time to get prepared! Are you ready to wait?', color_discrete_sequence=['purple']) fig.show()
Okay! The variety of spicy food items is more, but it also takes more time to prepare them! Are you ready to wait?
It took more time to prepare, but wait! sweet dishes also take much time to get cooked, not more than spicy dishes though! 😉
Let’s order the main course, as it takes more time to get prepared! Well, don’t forget to order your favorite dessert!
Snacks are here! Umm, the main course may take some more time, and dessert too!
Different states, different tastes: Statewise analysis
Now, let’s see if we can guess the names of states from the names of dishes! Here, I’ve created word clouds for various states. First, have a look at the code.
mh_cuisine = cuisine[cuisine['state']=='Maharashtra'].reset_index() name = [] for i in range(0,len(mh_cuisine)): text = mh()
g_cuisine = cuisine[cuisine['state']=='Gujarat'].reset_index() name = [] for i in range(0,len(g_cuisine)): text =()
r_cuisine = cuisine[cuisine['state']=='Rajasthan'].reset_index() name = [] for i in range(0,len(r_cuisine)): text =()
Maharashtra’s Amti, Gujarat’s Dal Dhokli, and Rajasthan’s Dal Bati!
Maharashtra
Gujrat
Rajasthan
Here are the ingredients used in West Indian Food :
Assam’s spicy food, Bengal’s sweetness, and Odisha’s variety!
Assam
West Bengal
Odisha
Here are the ingredients used in East Indian Food :
Let’s see dishes from some other states :
Aloo tikki, paneer masala, chole bhature….list goes on!
Punjab
Jammu and Kashmir
Here are the ingredients used in North Indian Food :
South Indian sambar with idli, dosa!
Kerala
Tamil Nadu
Telengana
Here are the ingredients for South Indian Food
Cuisine Analysis w.r.t. Ingredients
We’ve seen all the ingredients used in each part of India. Have you observed that few ingredients are common in 2 or 3 regions? Let’s do some Cuisine analysis with respect to ingredients!
So, here are the ingredients which are used in most of the dishes.
ingredients = pd.Series(cuisine.ingredients.str.split(',').sum()).value_counts() ingredients = ingredients[ingredients>12] px.bar(ingredients, y=ingredients.values, x=ingredients.index, color=ingredients.values, title= 'Indian cuisine is nothing without these top ingredients!', labels={ 'index': 'Inngredients', 'y': 'count' })
Indian cuisine is nothing without these top ingredients :
Though the following ingredients are used less, they help in making the food more tasty and yummy!
Conclusion
So, this was the Indian Cuisine analysis. Conclusions drawn are as follows :
Flavour_profile: In India, while ordering food, one can get many options in Spicy dishes as compared to sweet dishes.
Time: Mostly, Spicy dishes take more time in preparation as well as cooking. Some of the sweet dishes also need more cooking time.
Ingredients: Indian Cuisine has much variety from North to South, and also from East to West! However, few ingredients are common in many dishes.
Links
Here’s the link to code from Kaggle:
Here’s the Github repository link :
– Arya TalathiYou can also read this article on our Mobile APP
3 Comments
The results are only as accurate as the raw datasets. There’s almost no data on tribal food from the NE states besides a few dishes from Assam. Let alone more granular regional variations from lower-caste kitchens. The skew toward vegetarian dishes also represents biases inherent in the researchers teams.
Profoundly excellent information
Looks like the blog page is corrupted…images not loading… Programming commands all visible in place of text…Please correct it… really enjoying the content… | https://www.analyticsvidhya.com/blog/2020/10/sweet-spicy-north-or-south-indian-cuisine-analysis/ | CC-MAIN-2021-31 | refinedweb | 926 | 58.28 |
Related
Tutorial
Control DOM Outside Your Vue.js App with portal-v Vue.js’ greatest strengths is it’s ability to be used to enhance or replace parts of older apps or even static pages. This progressive nature allows Vue to be incrementally adopted or just used to improve a pre-existing app. portal-vue by Linus Borg extends this flexibility by allowing Vue components to be rendered to anywhere in the DOM, outside of their parent components, or even the whole Vue app!
Installation
Install portal-vue via. Yarn or NPM.
# Yarn $ yarn add portal-vue # NPM $ npm install portal-vue --save
Now, enable the PortalVue plugin.
import Vue from 'vue'; import PortalVue from 'portal-vue'; import App from 'App.vue'; Vue.use(PortalVue); new Vue({ el: '#app', render: h => h(App) });
Targeting Components
So, let’s create the source component that creates the original portal. It can have it’s own content as well, only stuff inside the portal component gets moved.
<template> <div> <portal to="other-component"> <p>{{message}}</p> </portal> <p>Other stuff stays here.</p> </div> </template> <script> export default { data() { return { message: 'I get rendered in OtherComponent!' } } } </script>
Now, as long as both AComponent and OtherComponent are rendered, the content from AComponent’s portal will end up rendered in OtherComponent. You can even have multiple portals in a single component, going different places!
<template> <div> <portal-target </portal-target> <p>I have my own stuff too!</p> </div> </template>
Targeting Anywhere in the DOM
With a teeny-tiny change, we can have the portal content output to anywhere in the DOM of the entire webpage!
<template> <div> <portal target- <p>{{message}}</p> </portal> <p>Other stuff stays here.</p> </div> </template> <script> export default { data() { return { message: 'I get rendered in the element with the id #place-on-the-page!' } } } </script>
Now, as long as both AComponent and OtherComponent are rendered, the content from AComponent’s portal will end up rendered in OtherComponent. You can even have multiple portals in a single component, going different places!
<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>Vue-Portal Example</title> </head> <body> <!-- Vue app --> <div id="app"> ... </div> <script src="/dist/build.js"></script> <!-- Other random stuff on the page --> <section class="something-else"> <h4>What is going on here! Who let the Vue app out?</h4> <!-- Contents of the portal replace the div here --> <div id="place-on-the-page"> </div> </section> </body> </html>
Neat, huh?
Options
- You can toggle whether or not content should go “through” the portal with the disabled prop. If set to false, it will render the content inside the portal component instead of the portal-target or target-el. This prop is reactive, so you can change it at will.
- If the portal is disabled, you can also add the slim prop to avoid adding an extra wrapper element.
- You can also use the tag prop to determine which element the portal component renders as when disabled.
Potential Issues
See an in-depth explanation here.
- Portal and PortalTarget components can behave oddly, as they are abstract components, so don’t try to manipulate or access them like normal components.
- When using SSR, the portal-target component must appear after the portal component in the DOM. Otherwise Vue will get confused and re-render the whole app.
- Also when using SSR, you should probably make sure any targetted elements outside the DOM are not actual HTML Elements. Use custom (even fake) ones, like <my-target>.
- refs on portal'ed content are currently asynchronous and aren’t available on the first or second tick of your component. You’ll have to wait a bit using setTimeout or call this.nextTick twice. It’s probably a good idea to avoid using refs on portal'ed content. | https://www.digitalocean.com/community/tutorials/vuejs-portal-vue | CC-MAIN-2020-34 | refinedweb | 636 | 57.37 |
Yesterday I needed to check the style keys in our main app.xaml file and see which ones are no longer needed. As there are currently 66 style keys in that file and the number is growing, I didn't feel like taking each key and searching through our source code by hand. Time to build a small tool. This article describes how I built this tool.
The tool needs to be able to search a directory tree for files with a certain extension (.xaml*) for a pattern or literal string. Before it does this, it also needs to be able to open a .xaml file and retrieve any style elements so it can then read their keys. To achieve this, two classes are needed. One class will read a .xaml file and get all keys from style elements, and the other class will search through the file system for files containing these keys.
I'll spare the obvious details and dive right into the highlights. To read the keys from style elements in basically any XML document, I used LinqToXml. Here is the code I used:
private void LoadStyleKeysFromDocument()
{
    XNamespace winFxNamespace = "http://schemas.microsoft.com/winfx/2006/xaml";
    XName keyAttributeName = winFxNamespace + "Key";
    var result = from node in _document.Descendants()
                 where node.Name.LocalName.Equals("Style")
                 select node;
    var distinctResult = result.Distinct();
    StyleKeys.Clear();
    foreach (XElement styleElement in distinctResult)
    {
        StyleKeys.Add(styleElement.Attributes(keyAttributeName).First().Value);
    }
}
The first two lines make an XName object that is needed to include the XML namespace when retrieving the x:Key from the element. Note that this works independently from the prefix (x) as it was assigned in the document. This means that this code will still work if someone would decide to change the prefix on this namespace.
XName
x:Key
x
Next, a LINQ query is used to retrieve any nodes in the document that have the name Style. The query is followed by a statement to make sure I only get unique results.
Finally I fill the StyleKeys collection with any key attributes value found inside an element in the query result.
Searching for a particular pattern in the file system is done in the following method:
public void Search(string pattern, string rootFolder, string fileFilter)
{
    // Get all files matching the filter
    string[] fileNames = Directory.GetFiles(
        rootFolder, fileFilter, SearchOption.AllDirectories);

    // For each file
    foreach (string fileName in fileNames)
    {
        // Open file
        string fileData = File.ReadAllText(fileName);

        // Match pattern
        MatchCollection matches = Regex.Matches(fileData, pattern);

        // Register count
        PatternSearchResultEntry resultEntry = new PatternSearchResultEntry()
        {
            FileName = fileName,
            HitCount = matches.Count,
            Pattern = pattern
        };
        Results.Add(resultEntry);
    }
}
As you can see, the first line gets all the filenames that are anywhere in the directory hierarchy below the supplied root folder. Looping through the filenames, I simply load all the text from each file and use the Regex class to count the number of hits. By doing so, this code is also very useful for finding hit counts for other patterns. All the results are added to a collection of a struct called PatternSearchResultEntry.
So that's the business end of things. Obviously we need a user interface of some sort. I chose a WPF interface, because I like data binding. To retrieve user input for the style file and the folder to look in, I built a class called BindableString, which contains a Name and a Value and implements the INotifyPropertyChanged interface. It allows me to create instances of these and bind them to my UI. This way I have a central point to access this information without having to worry about updates, etc.
To do the actual work I wrote the following Click event for a button:
private void analyseStyleUsageButton_Click(object sender, RoutedEventArgs e)
{
    XamlStyleKeyReader reader = new XamlStyleKeyReader();
    reader.ReadXamlFile(_stylesFilePath.Value);

    PatternSearch patternSearch = new PatternSearch();
    foreach (string styleKey in reader.StyleKeys)
    {
        patternSearch.Search(styleKey, _searchRootDirectory.Value, "*.xaml");
    }

    CollectionView view = (CollectionView)CollectionViewSource.GetDefaultView(
        patternSearch.Results);
    if (view.CanGroup)
    {
        view.GroupDescriptions.Add(new PropertyGroupDescription("Pattern"));
    }
    analyseStyleUsageDataGrid.ItemsSource = view.Groups;
}
It basically instantiates the XamlStyleKeyReader class and loads the style file into it. Next it instantiates the PatternSearch class and kicks off a search for each style key available in the XamlStyleKeyReader.
The code after that groups the results based on the search pattern. The reason I did it this way is because it is not very transparent to bind to the result of a group in LINQ. Binding to this is easy once you know how. As you can see, the items source for the datagrid that displays my results is actually the collection of groups. This collection is declared as containing objects, which isn't very helpful; however, diving into the API documentation reveals that this collection contains instances of the CollectionViewGroup class. From that class, I need the name (obviously) and a hit count, which of course it doesn't have. To get a hit count, I bound to the Items property of the group, which contains all the items that belong to that group, and then I use a value converter to get the total hit count for that group.
I've uploaded the complete source for this tool here.
Be aware that this tool is far from finished. I would like to save the last settings and have some progress indication, which means moving the search code to its own thread. Styling of the UI can be improved, etc.
I do hope you find this code useful and you've learned something along the way.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
Published by Cecily McDowell; modified over 5 years ago.
based on slides at buildingjavaprograms.com Objects and Classes, take 2 Managing Complexity with Programmer-Defined Types Classes are programmer-defined types. All data types other than primitive types (char, int, double, boolean) are written/defined by programmers, including classes from the Java Standard Library Classes help us manage complexity, since they: tie together related data and operations decompose an application into some number of objects and their interactions can be re-used in different applications
2 A programming problem Given a file of cities' (x, y) coordinates, which begins with the number of cities: 6 50 20 90 60 10 72 74 98 5 136 150 91 Write a program to draw the cities on a DrawingPanel, then drop a "bomb" that turns all cities red that are within a given radius: Blast site x/y? 100 100 Blast radius? 75
3 A bad solution
Scanner input = new Scanner(new File("cities.txt"));
int cityCount = input.nextInt();
int[] xCoords = new int[cityCount];
int[] yCoords = new int[cityCount];
for (int i = 0; i < cityCount; i++) {
    xCoords[i] = input.nextInt();   // read each city
    yCoords[i] = input.nextInt();
}
...
parallel arrays: 2+ arrays with related data at same indexes.
Poor solution - as the programmer, you must remember the connection between entries in the int arrays - the program does not tie them together.
4 Observations This problem would be easier to solve if there were such a thing as a Point object. A Point would store a city's x,y data. We could compare distances between Point s to see whether the bomb hit a given city. Each Point would know how to draw itself. The overall program would be shorter and cleaner. A city’s coordinates would be logically connected in the program.
5 Clients of objects client program: A program that uses objects. Example: Circles is a client of DrawingPanel and Graphics. Circles.java (client program) public class Circles { main(String[] args) { new DrawingPanel(...)... } DrawingPanel.java (class) public class DrawingPanel {... }
6 Classes and objects class: A program entity that represents either: 1.A program / module, or 2.A template for a new type of objects. A blueprint for a collection of similar objects, that have similar attributes and behavior The DrawingPanel class is a template for creating DrawingPanel objects. object: An entity that combines state and behavior. object-oriented programming (OOP): Programs that perform their behavior as interactions between objects.
7 Blueprint analogy iPod blueprint state: current song volume battery life behavior: power on/off change station/song change volume choose random song iPod #1 state: song = " 1,000,000 Miles " volume = 17 battery life = 2.5 hrs behavior: power on/off change station/song change volume choose random song iPod #2 state: song = "Letting You" volume = 9 battery life = 3.41 hrs behavior: power on/off change station/song change volume choose random song iPod #3 state: song = "Discipline" volume = 24 battery life = 1.8 hrs behavior: power on/off change station/song change volume choose random song creates
8 Abstraction abstraction: A distancing between ideas and details. We can use objects without knowing how they work. abstraction in an iPod: You understand its external behavior (buttons, screen). You don't understand its inner details, and you don't need to, in order to use an iPod.
9 Class Design In the following slides, we will implement a Point class as a way of learning about classes. We will define a type of objects named Point. Each Point object will contain x,y data called fields. Each Point object will contain behavior called methods. Client programs will use the Point objects. To create a new class, think about the objects that will be created of this new class type: what the object knows what the object does
10 Class Design
For a Car class:
- What does a Car object know? (What attributes does it have?)
- What can a Car object do? (What actions can it carry out?)
Instance variables are an object's data, i.e., the things the object knows about itself. Methods are the things an object can do.
Note: Often our classes contain methods that read new values to store in instance variables, or write the data stored in the instance variables.
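The Car design questions above can be sketched as a minimal class. This is a hypothetical example: the field names (make, fuelLevel) and methods are invented for illustration, not taken from the slides.

```java
// Hypothetical sketch of a Car class: fields record what a Car knows,
// methods record what a Car does.
public class Car {
    // what a Car knows (instance variables)
    private String make;
    private double fuelLevel;   // in liters (illustrative unit)

    public Car(String make, double fuelLevel) {
        this.make = make;
        this.fuelLevel = fuelLevel;
    }

    // what a Car does (methods)
    public void drive(double litersUsed) {
        // never let the fuel level go below zero
        fuelLevel = Math.max(0.0, fuelLevel - litersUsed);
    }

    public double getFuelLevel() {   // getter: reads stored data
        return fuelLevel;
    }

    public String getMake() {        // getter: reads stored data
        return make;
    }
}
```

A client would create a Car with new and call its methods, exactly like the Point examples in the later slides.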
11 Point objects
Point p1 = new Point(5, -2);
Point p2 = new Point();   // origin, (0, 0)
Data in each Point object:
- x: the point's x-coordinate
- y: the point's y-coordinate
Methods in each Point object:
- setLocation(x, y): sets the point's x and y to the given values
- translate(dx, dy): adjusts the point's x and y by the given amounts
- distance(p): how far away the point is from point p
- draw(g): displays the point on a drawing panel
12 Point class as blueprint The class (blueprint) describes how to create objects. Each object contains its own data and methods. Point class state: int x, y behavior: setLocation(int x, int y) translate(int dx, int dy) distance(Point p) draw(Graphics g) Point object #1 state: x = 5, y = -2 behavior: setLocation(int x, int y) translate(int dx, int dy) distance(Point p) draw(Graphics g) Point object #2 state: x = -245, y = 1897 behavior: setLocation(int x, int y) translate(int dx, int dy) distance(Point p) draw(Graphics g) Point object #3 state: x = 18, y = 42 behavior: setLocation(int x, int y) translate(int dx, int dy) distance(Point p) draw(Graphics g)
13 Point class, version 1: State
public class Point {
    private int x;
    private int y;
}
Save this code in file Point.java. The above code creates a new type named Point. Each Point object contains two pieces of data: an int named x, and an int named y. Point objects do not contain any behavior (yet).
14 Fields
field: A variable inside an object that is part of its state. Called an instance variable or instance field. Each object has its own copy of each field.
Declaration syntax: <type> <name>;
Example:
public class Student {
    private String name;   // each Student object has
    private double gpa;    // a name and gpa field
}
15 Accessing fields
Other classes can access/modify an object's fields, if permitted by the access specifier for the field.
access: variable.field
modify: variable.field = value;
Example:
Point p1 = new Point();
Point p2 = new Point();
System.out.println("the x-coord is " + p1.x);   // access
p2.y = 13;   // modify - usually disallowed (encapsulation)
16 A class and its client
Point.java is not, by itself, a runnable program. A class can be used by client programs.
PointMain.java (client program):
public class PointMain {
    ... main(args) {
        Point p1 = new Point();
        p1.x = 7;
        p1.y = 2;
        Point p2 = new Point();
        p2.x = 4;
        p2.y = 3;
        ...
    }
}
Point.java (class of objects):
public class Point {
    int x;
    int y;
}
(Diagram: p1 refers to an object with x = 7, y = 2; p2 refers to an object with x = 4, y = 3.)
17 PointMain client example
public class PointMain {
    public static void main(String[] args) {
        // create two Point objects
        Point p1 = new Point();
        p1.y = 2;
        Point p2 = new Point();
        p2.x = 4;
        System.out.println(p1.x + "," + p1.y);   // 0,2

        // move p2 and then print it
        p2.x += 2;
        p2.y++;
        System.out.println(p2.x + "," + p2.y);   // 6,1
    }
}
18 Methods: An Object's Behavior
Methods of objects (methods that are non-static) define the behavior of the object.
public class Point {
    private int x;
    private int y;

    public void setLocation(int newX, int newY) {
        x = newX;
        y = newY;
    }
}
19 More on Methods + this keyword
The keyword this allows an object to refer to itself:
public void setLocation(int x, int y) {
    this.x = x;   // this.x is the instance variable for this object
    this.y = y;
}
this is used to distinguish between the instance variable and the parameter of the same name.
An instance method is executed from the context or perspective of a particular object; this lets you refer to the object on which the method is running. this is called the "implicit parameter".
20 Constructors
Method that executes when a new object is created
Same name as the class
No return type - implicitly returns the new object
Used to initialize the object's data fields
public class Point {
    private int x;
    private int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }
}
21 Accessors or Getters
Many classes include methods that get, or return, the value of an instance variable. This is needed if clients need access to variable values, since we protect our instance variables (by declaring them private). These methods are called accessors or getters, and usually have names beginning with get.
Getter for the x coordinate, in the Point class:
public class Point {
    private int x;   // x cannot be modified from outside the class
    private int y;

    public int getX() {
        return x;
    }
}
22 Mutators or Setters
Many classes contain methods that allow a user to modify the value of an instance variable, often with restrictions on the type of modifications allowed. These methods are called mutator or setter methods.
public class Point {
    private int x;

    public void setX(int newX) {
        x = newX;
    }
}
23 Point class - Exercise
Write a complete Point class that contains:
- a constructor that takes as arguments the coordinates of the point
- a constructor that takes no arguments, and initializes the point to represent the origin (0, 0)
- mutator methods for both the x and y coordinates
- accessor methods for both the x and y coordinates
- a distanceToOrigin method that computes the distance to (0, 0) from the current point
- a distance method that takes a Point p as its argument, and returns the distance from this point to p
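One possible solution sketch for this exercise (method names follow the slide's wording; it is not the only valid answer):

```java
// A possible solution to the Point exercise.
public class Point {
    private int x;
    private int y;

    // constructor that takes the coordinates
    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    // no-argument constructor: the origin (0, 0)
    public Point() {
        this(0, 0);
    }

    // accessors (getters)
    public int getX() { return x; }
    public int getY() { return y; }

    // mutators (setters)
    public void setX(int newX) { x = newX; }
    public void setY(int newY) { y = newY; }

    // distance from this point to (0, 0)
    public double distanceToOrigin() {
        return Math.sqrt(x * x + y * y);
    }

    // distance from this point to another point p
    public double distance(Point p) {
        int dx = x - p.x;
        int dy = y - p.y;
        return Math.sqrt(dx * dx + dy * dy);
    }
}
```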
24 Point Objects
Instance method: a method that exists inside each object
Point p = new Point(2, 3);
p.setLocation(4, 1);
Point q = new Point(5, 5);
q.setLocation(0, 0);
(Diagram: p and q each hold their own x and y fields and their own setLocation(int x, int y) method.)
25 More on Constructors
Initializes the state of a new object
Runs when the client creates a new object using the new keyword + the class/constructor name
If a class contains no constructor, Java provides the class with a default constructor that takes no arguments
Syntax: public <ClassName>(<type> <name>, ...) { <statements>; }
26 Common Constructor Errors
Including a return type:
public void Point(int newX, int newY) {... }
Remember that a constructor automatically returns the new Point object.
Assigning values to local variables (the constructor's parameters) rather than instance variables:
public void Point(int newX, int newY) {
    newX = x;       // assigns the parameter the value of the instance variable
    int y = newY;   // creates a local variable y
}
27 Arrays of objects
null: A reference that does not refer to any object. The elements of an array of objects are initialized to null.
String[] words = new String[5];
DrawingPanel[] windows = new DrawingPanel[3];
(Diagram: words holds five null references at indexes 0-4; windows holds three null references at indexes 0-2.)
28 Things you can do w/ null
store null in a variable or an array element:
String s = null;
words[2] = null;
print a null reference:
System.out.println(s);   // output: null
ask whether a variable or array element is null:
if (words[i] == null) {...
pass null as a parameter to a method
return null from a method (often to indicate failure)
29 Null pointer exception
dereference: To access data or methods of an object with the dot notation, such as s.length(). It is illegal to dereference null (causes an exception). null is not any object, so it has no methods or data.
String[] words = new String[5];
System.out.println("word is: " + words[0]);
words[0] = words[0].toUpperCase();
Output:
word is: null
Exception in thread "main" java.lang.NullPointerException
at Example.main(Example.java:8)
30 Looking before you leap
You can check for null before calling an object's methods.
String[] words = new String[5];
words[0] = "hello";
words[2] = "goodbye";   // words[1], [3], [4] are null
for (int i = 0; i < words.length; i++) {
    if (words[i] != null) {
        words[i] = words[i].toUpperCase();
    }
}
(Diagram: before the loop, words holds "hello", null, "goodbye", null, null.)
31 Two-phase initialization
1) initialize the array itself (each element is initially null)
2) initialize each element of the array to be a new object
String[] words = new String[4];   // phase 1
for (int i = 0; i < words.length; i++) {
    words[i] = "word " + i;       // phase 2
}
(Diagram: words now holds "word 0", "word 1", "word 2", "word 3".)
© 2020 SlidePlayer.com Inc.
- Can Java leak memory? Programmers can.
- How Plentiful Is Memory?
- Conclusion
Sometime around 1997, a programmer colleague of mine was wrestling with what seemed like an intractable C++ bug. When he asked me for advice, I suggested, "You've probably exceeded the boundary of an array." This was (and still is) one of the most common C/C++ errors. He was amazed when a code check revealed that this was indeed the problem! Far from displaying god-like omniscience, this was just a case of the programming languages of the day requiring abstract rules and guidelines such as the one described. In fact, this conversational exchange was probably repeated all over the world by C++ developers! If that suggestion hadn't worked, I'd have suggested checking for other errors such as null pointer access, erroneous file I/O access, and so on. If none of those worked, I'd have suggested running the code with a debugger. It's all about rules!
Times and technologies have changed. The Java Runtime Environment now throws an exception if you exceed the boundary of an array. So, if you're guilty of this particular sin (as we all have been), you'll get to hear about it quickly enough! If you forget to handle the exception, your program is aborted. The reality is this: Each technology provides its own fertile ground for error, and Java is no exception. In this article, I look at a few issues that that can cause serious problems in Java code, and outline a few handy techniques for avoiding such problems.
Can Java leak memory? Programmers can.
A common Java misconception is that you don't need to worry about memory at all—the garbage collector takes care of all that stuff! Not necessarily. It's relatively easy to write Java code that allocates large amounts of memory and then forget to make that code go out of scope. This is a type of inadvertent memory leak, and is illustrated in Listing 1.
Listing 1 A Java Memory Leak
import java.util.Scanner;

public class MemoryLeak {
    public static void main(String[] args) {
        Scanner keyboard = new Scanner(System.in);
        int keepGoing = 0;
        System.out.println("Please enter a value for keepGoing " + keepGoing);
        keepGoing = keyboard.nextInt();
        System.out.println("New value for keepGoing is " + keepGoing);
        if (keepGoing != 0) {
            System.out.println("Continuing program. Value of keepGoing " + keepGoing);
            int[] aBiggishArray = new int[5000];
            System.out.println("Allocated an array of size " + aBiggishArray.length);
            // LOTS MORE CODE HERE
            // DON'T NEED aBiggishArray AFTER THIS
            // BUT, MEMORY FOR aBiggishArray IS STILL ALLOCATED
        } else {
            System.out.println("Exiting program. Value of keepGoing " + keepGoing);
        }
    }
}
In Listing 1, I allocate a big array called aBiggishArray, and I use it for a few lines. At this stage, I no longer need the array, so I then forget about it. Until the if statement ends, the array object remains in scope locked in memory, impervious to the demands of the garbage collector. This might be a slightly contrived example, but it does illustrate that code logic may inadvertently lead to memory leakage. Of course, once the object aBiggishArray goes out of scope, the memory is released. Perhaps, the important question is: Do we really need to worry so much about resources such as memory?
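One common remedy for this kind of inadvertent leak is to confine a large allocation to the narrowest scope that needs it, so the reference becomes unreachable as soon as the work is done. The sketch below is illustrative (the class and method names are invented, not from the article):

```java
public class ScopedAllocation {
    // Hypothetical helper: the big array lives only inside this method,
    // so it becomes unreachable (and collectible) as soon as we return.
    static long sumOfSquares(int n) {
        int[] big = new int[n];
        for (int i = 0; i < n; i++) {
            big[i] = i;
        }
        long sum = 0;
        for (int v : big) {
            sum += (long) v * v;
        }
        return sum;
        // 'big' goes out of scope here; no lingering reference remains.
    }
}
```

Inside a long-lived block, the same effect can be had by wrapping the allocation in braces, or by assigning the reference to null once it is no longer needed.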
That said, I always have the feeling that OOP is bolted onto C as an afterthought.
Your opinions are just as valid as the next person's. I'm not criticising you personally,
In justification: the arduino is a 16-bit processor
Modern C++ features such as ranged loops, lambda functions, auto type deduction and so on really make C++ a high-level language without really sacrificing much, if anything, in terms of performance.
Don't know if it's any use to anyone, but I'd like to hope so.
Personally I would not call simple inheritance polymorphism, even with virtual functions - It's simply, inheritance. Multiple inheritance and overriding methods, would be examples of polymorphism, to my mind.
As an OOP tutorial held up to novices, the article is highly opinionated, dubious even, in places.
Especially:
- Neglecting to mention the cost of OOP.
- Overuse of private properties.
- brightnessClicker 'should' [not] be a property of headlamp - breaks the MVC pattern.
- Calling millis() repeatedly.
- Blocking a refresh method for 800,000 clock cycles.
"The problem is that there are then two places [where a click is started]"The redundancy can be reduced logically, by sticking to a read/modify/write pattern.
I see no tangible benefit in the Runnable class. As such, the class adds unnecessary complexity, consuming RAM and clock cycles needlessly.
"It's called 'time-slicing'"No, not really. Time-slicing is an established term, which implies pre-empting, by a scheduler.
"co-operative multitasking" seems to be the thing to call it.
Using virtual functions is by definition polymorphic behavior.
The write up is not bad,
I think the examples could be far simpler.
The people you know OOP can't resist talking about it in technical terms that make it completely inaccessible to beginners.
What is missing here, however, is the layer provided by the marketing system in the motor industry which effectively isolates the user from the engineering jargon.
Agreed. It's my opinion, it's the way I write things, it's a specific way I use OO with arduino (specifically: giving everything a setup and a loop). Maybe I should put a caveat to that effect towards the beginning.
I don't see what's wrong with making things private.
And it doesn't make sense to complain that OOP is expensive,
As to not making brightness clicker part of headlamp by direct composition - I just disagree. It's a fine way to do (some) things. Direct composition doesn't in itself break MVC. If this sort of composition does break MVC and composition by reference doesn't, isn't it a little odd that the syntax and method calls you need to make are otherwise identical?
Not sure that calling millis() repeatedly is a problem.
Yes, my code doesn't strictly adhere to a pattern.
Well, there are intangible benefits to it [the runnable class].
Hmm, "co-operative multitasking" seems to be the thing to call it.
Quote: "What is missing here, however, is the layer provided by the marketing system in the motor industry which effectively isolates the user from the engineering jargon."
Disagree. What is missing here is technical critique and peer review.
Disagree. What you have said is correct. But it is quite different from the needed separation between users and developers - for the benefit and convenience of users.
In my experience it can be very difficult to get developers (in any field) to appreciate the very different perspective of users.
Quote from: Camel on Dec 28, 2015, 01:10 pm:
"Modern C++ features such as ranged loops, lambda functions, auto type deduction and so on really make C++ a high-level language..."
I'd like to see an Arduino example using one or more of those. An awful lot of the time, for C++ programmers, "no performance sacrifice" means "if your actual data was 1MB, encapsulating it all in a useful and elegant C++ container only adds another 100k, and obviously 10% is not very significant" rather than "the object overhead fits in 2k with plenty of room for actual data."
#include <BitBool.h>

void setup() {
  uint64_t data = 0xAABBCCDD00112233;
  Serial.begin(9600);
  Serial.print( "Printed as binary: " );
  auto &bits = toBitBool<REVERSE_BOTH>(data);
  for( auto bit : bits ){
    Serial.write( '0' + (bit ? 1 : 0) );
  }
}

void loop() {}
Careful with that stereotype, Eugene
Client Code Generation
[WCF RIA Services Version 1 Service Pack 2 is compatible with either .NET framework 4 or .NET Framework 4.5, and with either Silverlight 4 or Silverlight 5.]
When you link a Silverlight project and a middle-tier project using WCF RIA Services, RIA Services generates client proxy classes for the client application based on entities and operations you have exposed in the middle tier. Because RIA Services generates these classes, you do not need to duplicate any application logic from the middle tier to the presentation tier. Any changes you make to the middle tier code are synchronized with the presentation tier code when you rebuild the client project. When you add a RIA Services link to a solution, an explicit build dependency is added to the solution that forces the server project to build before generating code for client project.
The generated code resides in a folder named Generated_Code in the client project. To see this folder, you must select Show All Files in the Solution Explorer window for the client project. You should not directly modify the classes in the Generated_Code folder because they will be overwritten when the client project is rebuilt. However, you can open the generated file to see the code that is available to the client project.
The algorithm that generates client code follows these basic rules:
Analyze all assemblies either built or referenced by the middle tier project for domain service classes, entity classes, or shared code.
For each domain service that is annotated with the EnableClientAccessAttribute attribute, generate a class that derives from the DomainContext class.
For each query method, named update method (an update method with the UsingCustomMethod property set to true), or invoke operation in the domain service class, generate a method in the domain context class.
For each entity class that is exposed by a domain service, generate an entity proxy class. An entity class is exposed when it is returned by a query method.
Copy code marked for sharing to the client project.
The following image shows the client code that is generated for a middle tier project.
One class that derives from DomainContext is generated for each domain service class according to the following rules:
The domain context class is generated with same namespace as the domain service.
The domain context class contains three constructors:
A default constructor that embeds the URI necessary to communicate with the domain service over http using a WebDomainClient<TContract> class.
A constructor that permits the client to specify an alternate URI.
A constructor that permits the client to provide a custom DomainClient implementation (typically used for unit testing or redirection to a custom transport layer).
For each query method in the domain service class, generate an EntityQuery<TEntity> method that can be used in the client project to load entities.
For each invoke operation, generate a corresponding InvokeOperation method that can be used to invoke that operation asynchronously.
For each method marked with the Update(UsingCustomMethod=true) attribute, generate methods to invoke it and to determine whether it has been invoked.
Public methods in the domain service that perform inserts, updates, or deletes cause the generated EntityContainer in the domain context to be constructed with an EntitySetOperations flag that indicates which operations are permitted on the client.
The following rules are applied when generating the entity proxy class:
The proxy class is generated with the same name and namespace as the entity class in the middle tier.
The root entity type derives from the Entity class. Derived entity types derive from the corresponding base types exposed by the middle-tier.
Every public property that contains a supported type and is not marked with the ExcludeAttribute attribute in the entity class is generated in the proxy class, unless that property already exists in the client project. For more information, see the “Avoiding Duplicated Members” section later in this topic. Object is not a supported type.
Each property setters will contain code that performs validation and notifies clients that the property is changing and has changed.
Metadata attributes are combined with the entity class in the generated code. No metadata class will exist on the client.
If possible, custom attributes are propagated to the proxy class. For a description of the conditions that must exist for the custom attribute to exist in the client project, see the following “Custom Attributes” section.
Only one CustomValidationAttribute is propagated to the member if the same type and validation method are specified in more than one instance of the CustomValidationAttribute for that member.
Custom attributes are propagated to the proxy class if adding the custom attribute does not cause a compilation error in the client project. For the custom attribute to be propagated, the following conditions must exist:
The custom attribute type must be available on the client project.
Any types specified in the custom attribute declaration must be available on the client project.
The custom attribute type must expose public setters for all of its properties, or expose a constructor that allows for setting properties that do not have public setters.
If a required custom attribute is not propagated to the client, you may need to add an assembly reference in the client project. Add a reference to any assembly that is needed for the custom attribute to compile in the client project. You can also share a custom attribute between the tiers by defining it in a shared file.
When you share code files between the middle tier and the presentation tier, the code is copied without any changes to the client project. You specify a file for sharing by naming it with the pattern *.shared.cs or *.shared.vb. The directory structure from the middle-tier project containing the shared files is replicated under the Generated_Code folder.
When you add a custom type in a shared code file and then return that type from an invoke operation, the generated method in the domain context will not return the custom type. Instead, the method in the domain context will return a type that is part of the framework. For example, when you create a custom type named MyCustomDictionary that implements IDictionary<TKey, TValue> and specify that type as the return value for a domain operation, the method generated in the domain context will not return MyCustomDictionary. Instead, it will return a Dictionary<TKey, TValue> object.
For more information, see Shared Code.
When generating an entity proxy class, it is possible that the same type and member have already been defined in the client project by using partial types. You may have defined the member in shared code or in code that only exists in the client project. RIA Services checks the existing members before generating the proxy class. Any member that is already defined will not be generated in the proxy class. | http://msdn.microsoft.com/en-us/library/ee707359(v=vs.91).aspx | CC-MAIN-2014-42 | refinedweb | 1,140 | 51.78 |
C library function - fgetpos()
Description
The C library function int fgetpos(FILE *stream, fpos_t *pos) gets the current file position of the stream and writes it to pos.
Declaration
Following is the declaration for the fgetpos() function.
int fgetpos(FILE *stream, fpos_t *pos)
Parameters
stream − This is the pointer to a FILE object that identifies the stream.
pos − This is the pointer to a fpos_t object.
Return Value
This function returns zero on success, and a non-zero value in case of an error.
Example
The following example shows the usage of fgetpos() function.
#include <stdio.h>

int main () {
   FILE *fp;
   fpos_t position;

   fp = fopen("file.txt","w+");
   fgetpos(fp, &position);
   fputs("Hello, World!", fp);

   fsetpos(fp, &position);
   fputs("This is going to override previous content", fp);
   fclose(fp);

   return(0);
}
Let us compile and run the above program; it creates a file file.txt with the following content. First of all we get the initial position of the file using the fgetpos() function and write Hello, World! into the file, but then we use the fsetpos() function to reset the write pointer to the beginning of the file and overwrite the file with the following content:
This is going to override previous content
Now let us see the content of the above file using the following program −
#include <stdio.h>

int main () {
   FILE *fp;
   int c;

   fp = fopen("file.txt","r");
   while(1) {
      c = fgetc(fp);
      if( feof(fp) ) {
         break;
      }
      printf("%c", c);
   }
   fclose(fp);

   return(0);
}
Let us compile and run the above program to produce the following result:
This is going to override previous content | http://www.tutorialspoint.com/c_standard_library/c_function_fgetpos.htm | CC-MAIN-2015-32 | refinedweb | 273 | 62.07 |
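The two programs above can also be combined into one, with the return values of fgetpos() and fsetpos() checked as the Return Value section suggests. The sketch below uses tmpfile() so that no file.txt is left behind, and the helper name overwrite_demo is ours, not part of the C library:

```c
#include <stdio.h>

/* Saves a position, writes "first", rewinds with fsetpos(), overwrites
   with "second", then reads back the first character of the result. */
char overwrite_demo(void) {
    FILE *fp = tmpfile();              /* anonymous scratch file in "wb+" mode */
    if (fp == NULL)
        return '\0';

    fpos_t start;
    if (fgetpos(fp, &start) != 0)      /* zero means success */
        return '\0';

    fputs("first", fp);
    if (fsetpos(fp, &start) != 0)      /* jump back to the saved position */
        return '\0';
    fputs("second", fp);               /* fully overwrites the shorter "first" */

    fsetpos(fp, &start);               /* reposition before switching to reads */
    char buf[16] = {0};
    if (fgets(buf, sizeof buf, fp) == NULL)
        buf[0] = '\0';
    fclose(fp);
    return buf[0];                     /* 's', from "second" */
}
```

Note that on a stream opened for update, a positioning call such as fsetpos() is exactly what the standard requires between a write and a subsequent read.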
Using EditorParts for Dynamic/Data Driven WebPart Editors!
Today I’m blogging about web parts and focusing on a particular aspect of them. In a customer project I was faced with a user requirement that led me to study how I could enhance the end-user web part editing experience by developing more dynamic web part editors that reflect the underlying data.
Web parts already save you a lot of manual labor by turning web part properties decorated by specific attributes into full-fledged web user interfaces. In more advanced scenarios, however, you will soon find out this is still a rather limited feature and leads to poor user experience. Luckily, the folks at Microsoft recognized this and introduced the EditorPart abstraction as a base class for developing advanced web part editors.
The Scenario
- User maintains a list of urls that point to other sharepoint lists
- Web part needs to fetch data from those urls
- User needs to be able to select Urls in web part editor
The Implementation
First, you need to define a Web Part property where the user web part settings will be stored. In this case the Urls the user selects from the checkbox list in the web part editor interface. To my surprise you can not only persist simple strings or integers or boolean values but also collections:
[Personalizable(PersonalizationScope.Shared)]
public ArrayList NewsSources { get; set; }
Next up you will have your WebPart class implement the IWebEditable interface:
#region IWebEditable Members

//create an instance of each custom EditorPart control
//associated with a server control and return them as collection
EditorPartCollection IWebEditable.CreateEditorParts()
{
//Add custom CheckBoxList editor for editing NewsSources Property
List<EditorPart> editors = new List<EditorPart>();
editors.Add(
new NewsFeedsEditorPart { ID="NewsFeedsEditorPart1"});
return new EditorPartCollection(editors);
}
//a reference to the associated server control
object IWebEditable.WebBrowsableObject
{
get { return this; }
}
#endregion
Now we are ready to implement the actual EditorPart.
Implementing the EditorPart
Start by inheriting from the EditorPart class and overriding a few essential methods:
These two methods are the key to making your editor part do its work. In the SyncChanges method you set up your editor part to reflect the current state of the corresponding web part, and when the user presses the save button after editing the web part, ApplyChanges is used to bring the values in an EditorPart control back to the WebPart control’s persisted properties.
Let’s go ahead and implement these methods. At this point I’m also going to override a third method which sets up the UI. I do this in CreateChildControls method:
Here are the implementations for ApplyChanges and SyncChanges methods:
public override void SyncChanges()
{
//call to make sure the check box list is set up
EnsureChildControls();
// get a reference to the corresponfing web part
var wb = WebPartToEdit as NewsListWebPart;
if (wb == null) return;
newsSourcesCheckBoxList.Items.Clear();
using (var site = new SPSite("yourUrl"))
{
using (var web = site.OpenWeb())
{
var nList = web.Lists["NewsSourcesList"];
var items = nList.GetItems(new SPQuery());
foreach (SPListItem item in items)
{
string checkBoxLabel = item.Title;
string checkBoxValue =
item["NewsSourceUrl"].ToString();
ListItem listItem =
new ListItem(checkBoxLabel, checkBoxValue);
listItem.Selected =
wb.NewsSources.Contains(checkBoxValue);
newsSourcesCheckBoxList.Items.Add(listItem);
}
}
}
}
public override bool ApplyChanges()
{
//call to make sure the check box list is set up
EnsureChildControls();
var wb = WebPartToEdit as NewsListWebPart;
if (wb == null) return false;
var sources = new ArrayList();
foreach (ListItem item in newsSourcesCheckBoxList.Items)
{
if (item.Selected)
{
sources.Add(item.Value);
}
}
wb.NewsSources = sources;
return true;
}
And we are finished developing our web part with a custom web part editor. The code is very straightforward and can really help you boost up the web part editing user experience.
Nice article. Adding screen shots would be good to help new SP guys understand.
Thanks,
venkat
The article is extremely useful, thank you for sharing. But I have a few doubts:
1. The property should be marked as [WebBrowsable(false)], isn’t it?
2. If the property is of type ArrayList, your example does not work! The custom property does not appear in the web part’s panel when in edit mode. If I change the property type from ArrayList to string, everything works as expected.
I did a similar example as yours from msdn. For a property of type string, things work as a charm. When I move to ArrayList, does not work.
Do you have any functional project sample for the ArrayList? Are you sure it’s working with ArrayList also?
Thank you.
Never mind my previous comment… You are totally right! You are supposed to mark it [WebBrowsable(false)], and not [WebBrowsable(true)]. And you are supposed to declare NewsFeedsEditorPart as a public class. This was (logically) the problem.
Again, thank you so much for the article!
Hello Diana. Glad to know you found my article useful.
Cheers
Erkka
It would be great if you could post the source code for us SharePoint newbies. I tried the code in your blog and it does not work for me. It compiles and I can deploy it, but I don’t see the controls in the editor menu.
Eventually, I found the following tutorial working for me:
On 27/10/2021 11:00, Janne Heß wrote:
Hi everyone,

I packaged coreutils 9.0 for NixOS and we found breakages that seemed to be very random during builds of packages that use the updated coreutils in their build process. It's really hard to tell the main cause, but it seems like the issues are caused by binaries that are corrupted after cp copied them from /tmp to /nix. The issue arises both when the directories are on the same filesystem and when /tmp is on tmpfs.

Upon further inspection/bisection we figured out these issues are caused by a6eaee501f6ec0c152abe88640203a64c390993e. This seems to happen on ZFS, and indeed on the main coreutils mailing list there is a ZFS issue linked [1]. The testsuite was patched in 61c81ffaacb0194dec31297bc1aa51be72315858 so it doesn't detect this issue anymore, but the issue still very much happens in the real world.

We have found this to happen while building the completions for a Go tool (jx), which seems to be the same issue as [2]. The tool is built, copied using cp, and called, which causes a segfault to happen. Building another package (peertube) on x86_64-linux on ext4 also fails with strange errors in the test suite, something about "Error: The service is no longer running". This does not happen when the mentioned coreutils commit is undone by replacing #ifdef with #if 0 [3].

We have also seen this issue on Darwin when building Alacritty, but only happening on some machines, and we were not able to pin it down any further there, so this might be related or it might not. Since the issue is so random, we started wondering if it might be related to -frandom-seed, which changes in NixOS when rebuilding a package [4]. A thing to note here is that Nix does a lot of sandboxing stuff during builds, which includes mount namespaces, so a kernel bug is not out of the question.

All of these issues happened during Nix builds; coreutils 9.0 never made it out of the NixOS staging environment due to the builds breaking.
We will probably disable the new code paths as outlined above so the issue is contained for NixOS users and does not hit any production environments.

[1]:
[2]:
[3]:
[4]:
Looks like there is a WIP fix for OpenZFS mentioned at [1], where mmap'd regions were not being flushed. So this should unblock enabling coreutils 9 at some stage at least.

I've asked at [1], now that they know what's going on, how programs might best distinguish buggy instances of OpenZFS.

cheers,
Pádraig
As a companion article to my Permutation Generation, I now present Combination generation.
This snippet will do a lazy generation of the combinations of a set of objects. Each combination has a grouping size (number of items in each combination). If you need a different grouping size, you'll have to create a new instance.
As was done in the permutation class, you can request specific lexical combinations either by calling the method GetCombination or by using the index operator.
How to use:
First you'll need an array of object. Call the Combination constructor passing the array and the grouping size. Since this is a generic class, you'll have to provide the class type of the array (like all generics).
The test example will take an array of six strings, with a grouping size of 3, and:
- Display how many different combinations there are.
- Display all the combinations.
- Display the fourth lexical combination using a method call.
- Display the sixth lexical combination using indexing.
using System;
using System.Text;
using Whittle.Math;

namespace Whittle
{
    class Program
    {
        static void Main(string[] args)
        {
            int counter = 0;
            String[] myItems = { "A", "B", "C", "D", "E", "F" };
            Combination<string> myCombos = new Combination<string>(myItems, 3);

            Console.WriteLine("For {0} items there are {1} combinations",
                myItems.Length, myCombos.NumberOfCombinations);
            Console.WriteLine();

            foreach (String[] perm in myCombos)
            {
                Console.Write("{0} ", SeqFormat(perm));
                counter = (counter + 1) % 6;
                if (counter == 0)
                    Console.WriteLine();
            }
            Console.WriteLine();

            Console.WriteLine("The fourth lexical combination is {0}",
                SeqFormat(myCombos.GetCombination(4)));
            Console.WriteLine();

            Console.WriteLine("The sixth lexical combination is {0}",
                SeqFormat(myCombos[6]));
            Console.ReadLine();
        }

        static String SeqFormat(String[] strings)
        {
            StringBuilder sb = new StringBuilder();
            sb.Append("[");
            foreach (String s in strings)
            {
                sb.Append(s);
            }
            sb.Append("]");
            return sb.ToString();
        }
    }
}
How it works:
The wikipedia article explains it better than I ever could :)
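For readers who don't want to chase the link: the indexing trick (sometimes called combinadics) counts, for each output slot, how many combinations begin with each candidate element, and skips whole blocks until the remaining rank fits. A compact sketch of that idea, written in Python rather than C# for brevity, and assuming 1-based ranks like the article's "fourth" and "sixth":

```python
from math import comb

def kth_combination(items, r, k):
    """Return the k-th (1-based) lexicographic r-combination of items."""
    result = []
    n = len(items)
    k -= 1                                  # work with a 0-based rank
    start = 0
    for slot in range(r):
        for i in range(start, n):
            # number of combinations whose element at this slot is items[i]
            block = comb(n - 1 - i, r - slot - 1)
            if k < block:
                result.append(items[i])
                start = i + 1
                break
            k -= block                      # skip the whole block
    return result

items = ["A", "B", "C", "D", "E", "F"]
print(kth_combination(items, 3, 4))         # ['A', 'B', 'F']
print(kth_combination(items, 3, 6))         # ['A', 'C', 'E']
```

With the six strings above, rank 4 yields ABF and rank 6 yields ACE in strict lexicographic order; whether that matches GetCombination(4) exactly depends on whether the class counts from 0 or 1.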
Limitations:
Because this requires the use of factorials (even though they are generated in a unique way) it is possible to overflow the int variables used to generate the combinations. If you really need larger combinations, try changing all the int values to long, or BigInteger.
Acknowledgments:
As before, my sources are wikipedia and the 2006 MSDN Article "Test Run" by Dr. James McCaffrey. | https://www.daniweb.com/programming/software-development/code/349476/combination-generation | CC-MAIN-2018-05 | refinedweb | 366 | 50.12 |
from IPython.core.display import Image Image(url='', width=600)
# Crucial imports
import numpy as np
import matplotlib.pyplot as plt
# Let's say we have a (near) periodic signal
x = np.sin(np.arange(128))
plt.plot(x)
[<matplotlib.lines.Line2D at 0x10483afd0>]
To analyze time-varying content, we want to process individual overlapping frames.
We can use the stride_tricks from last week to get overlapped windows on a linear vector
(from )
# Build a "framed" version of x as successive, overlapped sequences # of frame_length points from numpy.lib import stride_tricks frame_length = 16 hop_length = 4 num_frames = 1 + (len(x) - frame_length) / hop_length row_stride = x.itemsize * hop_length col_stride = x.itemsize x_framed = stride_tricks.as_strided(x, shape=(num_frames, frame_length), strides=(row_stride, col_stride)) plt.imshow(x_framed, interpolation='nearest', cmap='gray')
<matplotlib.image.AxesImage at 0x104907090>
# If we take the FFT of each row, we can see the short-time fourier transform
plt.imshow(np.abs(np.fft.rfft(x_framed)), interpolation='nearest')
<matplotlib.image.AxesImage at 0x10493b8d0>
# Although there's a steady sinusoidal component, we see interference between the
# window frame and the signal phase. We need a tapered window applied to each frame.
window = np.hanning(frame_length)
plt.plot(window)
[<matplotlib.lines.Line2D at 0x10496c4d0>]
# But what's the best way to multiply each frame of x_framed by window?
# Linear algebra way is to multiply by a diagonal matrix
diag_window = np.diag(window)
plt.imshow(diag_window, interpolation='nearest', cmap='gray')
<matplotlib.image.AxesImage at 0x1049fae10>
# Now apply it to each frame using matrix multiplication
x_framed_windowed = np.dot(x_framed, diag_window)
plt.imshow(x_framed_windowed, interpolation='nearest', cmap='gray')
<matplotlib.image.AxesImage at 0x104aa0190>
# Matlab way is to construct a matrix of repeating rows of the same size
window_repeated = np.tile(window, (num_frames, 1))
plt.imshow(window_repeated, interpolation='nearest', cmap='gray')
# then pointwise multiplication applies it to each row
x_framed_windowed = x_framed * window_repeated
# Numpy broadcasting implicitly (and efficiently) repeats singleton or missing
# dimensions to make matrices the same size, so tiling is unneeded
plt.imshow(x_framed * window, interpolation='nearest', cmap='gray')
<matplotlib.image.AxesImage at 0x104b2c6d0>
# Compare the timings:
%timeit np.dot(x_framed, np.diag(window))
%timeit x_framed * np.tile(window, (num_frames, 1))
%timeit x_framed * window
# The big win is not having to allocate the second num_frames x frame_length array
100000 loops, best of 3: 10.7 µs per loop
100000 loops, best of 3: 15 µs per loop
100000 loops, best of 3: 4.13 µs per loop
# What about if we had our data in columns?
x_framed_T = x_framed.T
plt.imshow(x_framed_T * window, interpolation='nearest', cmap='gray')
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-12-33858fc02e56> in <module>()
      1 # What about if we had our data in columns?
      2 x_framed_T = x_framed.T
----> 3 plt.imshow(x_framed_T * window, interpolation='nearest', cmap='gray')

ValueError: operands could not be broadcast together with shapes (16,29) (16)
# Broadcast works by starting at the last dimensions (the fastest-changing ones
# in 'C' ordering) and promoting either one if it's one.
# So we just have to make our window be frame_length x 1
# We can do this with slicing and np.newaxis:
plt.imshow(x_framed_T * window[:, np.newaxis], interpolation='nearest', cmap='gray')
# Now broadcasting works again
<matplotlib.image.AxesImage at 0x104b0b7d0>
# Broadcasting works across multiple dimensions.
# It goes through each dimension from the last, looking for a match
# or promoting singletons
a = np.random.rand(3, 4, 5)
b = np.random.rand(3, 5)
c = a * b
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-14-a039b1ac09b5> in <module>()
      4 a = np.random.rand( 3, 4, 5 )
      5 b = np.random.rand( 3, 5 )
----> 6 c = a*b

ValueError: operands could not be broadcast together with shapes (3,4,5) (3,5)
# That didn't work because there was no singleton dimension to promote
# so we can introduce one, with reshape for instance:
b2 = np.reshape(b, (3, 1, 5))
c1 = a * b2
# or using slicing
c2 = a * b[:, np.newaxis, :]
print np.allclose(c1, c2)
True
For the full description of how broadcasting works, see the SciPy documentation:
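The rule used throughout the cells above, align shapes from the trailing dimension, where a pair of extents is compatible when they are equal or either is 1, can also be written out as a small standalone function. This is plain Python 3 (unlike the notebook's Python 2 session), and the function name is ours:

```python
from itertools import zip_longest

def broadcast_shape(shape_a, shape_b):
    """Result shape of broadcasting two shapes, aligned from the right."""
    out = []
    for a, b in zip_longest(reversed(shape_a), reversed(shape_b), fillvalue=1):
        if a == b or a == 1 or b == 1:
            out.append(max(a, b))     # the non-1 (or common) extent wins
        else:
            raise ValueError(
                "operands could not be broadcast together: %r %r"
                % (shape_a, shape_b))
    return tuple(reversed(out))

print(broadcast_shape((3, 4, 5), (3, 1, 5)))   # (3, 4, 5)
print(broadcast_shape((16, 29), (16, 1)))      # (16, 29)
```

The two failing cells above correspond to the ValueError branch: (16, 29) against (16,) fails because 29 and 16 clash in the trailing position, and (3, 4, 5) against (3, 5) fails on the middle dimension.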
# Remember why we wanted the windowing, to remove artefacts from the STFT?
plt.subplot(121)
plt.imshow(np.abs(np.fft.rfft(x_framed)), interpolation='nearest')
plt.subplot(122)
plt.imshow(np.abs(np.fft.rfft(x_framed * window)), interpolation='nearest')
# Windowing drastically reduces framing-related artefacts
# (at the cost of a little frequency resolution)
<matplotlib.image.AxesImage at 0x104c354d0> | http://nbviewer.jupyter.org/github/craffel/crucialpython/blob/master/week4/broadcasting.ipynb | CC-MAIN-2018-13 | refinedweb | 726 | 50.53 |
tut 0.5.1
Tut is a tool that helps you write technical documentation using Sphinx 1.6 and later.
Tut provides a workflow that supports tutorial-style documents particularly well. If your writing includes code samples that build on one another, Tut is for you. Tut helps you manage the code in the tutorial as you write it, and include the correct segments in your document.
Tut makes it easy to manage a git source repository for your tutorial’s code by using branches to record different steps. As you write the code for your tutorial, Tut allows you to include code from a particular step in your Sphinx document. Tut also has basic support for showing the difference between two branches, allowing you to effectively show what’s changed in a way that’s readable for humans.
Tut consists of two pieces: a program to manage branches, and a Sphinx extension to switch branches during the Sphinx build.
Using Tut
I wrote Tut because I wanted an easier way to manage the sample code I was writing for Effective Django. I was using git to track my changes to the text, but those changes weren’t the ones I was reflecting in the code: I could use git to tell me what changed in the text between two points in time, but I couldn’t easily tell what changed between chapters. The code, in effect, was a parallel set of changes, and I was interested in understanding them over the course of the text, not (necessarily) over the course of my writing timeline.
Tut is a command-line tool that makes managing the code changes independently of the text changes more straight-forward. It allows you to define a set of “points” in the development of your source and switch back and forth between them. If you make a change to an early point in your code, you can roll that change forward so your future code is consistent. Under the hood Tut uses git, so you can include your code as a sub-module and use the other git tools you’ve come to appreciate.
To start using Tut, run tut init <path>:
$ tut init ./demosrc
If the path (./demosrc) is not an existing git repository, Tut will initialize one and add an initial commit.
Subsequent Tut commands should be run from within the Tut-managed repository.
$ cd demosrc
To start a point from your current position, run tut start:
$ tut start step_one
After you’ve created different points in your repository, you can run tut points to list them:
$ tut points
step_one
step_two
If you realize you’ve made a mistake and want to change the code at an earlier checkpoint, simply run tut edit:
$ tut edit step_one
Tut will check out the step_one branch, and you can make changes and commit them. Once you’re done editing, commit your changes using git. You’ll also want to roll those changes forward, through the subsequent steps.
$ tut next --merge
Running tut next will find the next step and check out that branch. Adding --merge will also merge the previous step. If we’re done making changes to step_one, running tut next --merge will move us to step_two and merge step_one.
Including Code in Sphinx
Sphinx provides the literalinclude directive, which allows you to include source files, or parts of files, in your documentation. Tut allows you to switch to a specific git tag, branch, or commit before processing the inclusion.
To enable Tut, add tut.sphinx to the list of enabled extensions in your Sphinx project’s conf.py file:
extensions = [
    # ...
    'tut.sphinx',
]
The checkpoint directive takes a single argument, which is the git reference to switch to. For example, the following directive will checkout step_one (either a branch or tag) in the git repository in /src:
.. tut:checkpoint:: step_one
   :path: /src
The directive doesn’t result in any output, but literalinclude (or other file-system inclusion directives) that come after the checkpoint will use the newly checked-out version.
Tut records the starting state of repositories the first time it does a checkout, and restores the initial state after the build completes.
If your document contains multiple checkpoints, you can specify the path once using the tut directive:
.. tut::
   :path: /src
Note that /src is evaluated using the same rules as govern literalinclude. That is, the file name is usually relative to the current file’s path. However, if it is absolute (starting with /), it is relative to the top source directory.
Within a checkpoint Tut provides two new directives for fetching content: tut:literalinclude and tut:diff.
tut:literalinclude works a lot like Sphinx’s built-in literalinclude directive. However, instead of loading the file from the filesystem directly, tut:literalinclude retrieves it from the git repository.
For example:
.. tut:checkpoint:: step_two
   :path: /src

...

.. tut:literalinclude:: setup.py
Will fetch setup.py from the step_two branch in the git repository located at /src.
Tut can also show the changes between two checkpoints (branches) using the tut:diff directive. Like tut:literalinclude it uses the git repository referenced in the last checkpoint by default. You can specify the ref and prev_ref to compare; if omitted, ref defaults to the current checkpoint and prev_ref defaults to the previous point, as listed in the output of tut points.
.. tut:diff:: setup.py
   :ref: step_two
   :prev_ref: step_one
   :path: /src/demosrc
N.B.
When Sphinx encounters a checkpoint directive, it performs a git checkout in target repository. This means that the repository should not contain uncommitted changes, to avoid errors on checkout.
Note that this will probably change soon, to allow for more flexible use of content from the git repository.
News
DEVELOPMENT
(unreleased)
…
0.5.1
Release Date: 30 April 2017
- Fixed missing import which caused tut:literalinclude to silently fail
0.5.0
Release Date: 30 April 2017
- Addition of tut:literalinclude and tut:diff directives
- Sphinx directives are namespaced under tut:
- Drop support for Sphinx releases prior to 1.6
- Drop support for Python 2
- Use dedicated config file on special branch for maintaining point list.
- Added tut fetch to support retrieving all checkpoints.
- Better error reporting when calling git fails.
0.2
Release date: 11 April 2013
- BACKWARDS INCOMPATIBLE
- Removed post-rewrite hook, tut-remap
- Moved from tag-based checkpoints to branch-based
- Added next sub-command to move from one step to the next
- edit now checks out a branch
0.1
Release date: 17 March 2013
- Support for switching to tags, branches, etc within Sphinx documents
- Initial implementation of wrapper script
- Author: Nathan Yergler
- License: BSD
- Requires Distributions
- sphinxcontrib-websupport
- sh
- pyyaml
- docopt
- Sphinx
- Package Index Owner: nathan
- DOAP record: tut-0.5.1.xml | https://pypi.python.org/pypi/tut/ | CC-MAIN-2018-13 | refinedweb | 1,122 | 62.07 |
Distributed Transactions on App Engine
Posted by Nick Johnson | Filed under coding, app-engine, cookbook, tech

class Account(db.Model):
    owner = db.UserProperty(required=True)
    balance = db.IntegerProperty(required=True, default=0)
Naturally, you need to be able to transfer funds between accounts; those transfers need to be transactional, or you risk losing people's money, or worse (from a bank's point of view) duplicating it! You can't group users into entity groups, because it would still be impossible to transfer money between users that were assigned to different entity groups. Further, you need to be able to prevent people from overdrawing their accounts.
Fortunately, we can make it possible to do transactional transfers between accounts fairly simply. The key thing to realise, that makes everything much simpler, is that funds transfers do not have to be atomic. That is, it's okay to briefly exist in a state where the funds have been deducted from the paying account, but not yet credited to the payee account, as long as we can ensure that the transfer will always complete, and as long as we can maintain our invariants (such as the total amount of money in the bank) along the way.
Let's start by defining a simple transaction record model:
class Transfer(db.Model):
    amount = db.IntegerProperty(required=True)
    target = db.ReferenceProperty(Account, required=True)
    other = db.SelfReferenceProperty()
    timestamp = db.DateTimeProperty(required=True, auto_now_add=True)
As you can see, this is fairly straightforward. A Transfer entity will always be the child entity of an Account; this is the account that the transaction is concerned with, and being a child entity means we can update it and the account transactionally, since they're in the same entity group. Each transfer will create two Transfer entities, one on the paying account, and one on the receiving account.
The amount field is fairly obvious; here we'll use it to signify the change in value to the account it's attached to, so the paying account will have a negative amount, while the receiving account will have a positive amount. The target field denotes the account the transfer was to or from, while the 'other' field denotes the other Transfer entity.
At this point, it would help to describe the basic process we expect to follow in making a transfer between accounts:
- In a transaction, deduct the required amount from the paying account, and create a Transfer child entity to record this, specifying the receiving account in the 'target' field, and leaving the 'other' field blank for now.
- In a second transaction, add the required amount to the receiving account, and create a Transfer child entity to record this, specifying the paying account in the 'target' field, and the Transfer entity created in step 1 in the 'other' field.
- Finally, update the Transfer entity created in step 1, setting the 'other' field to the Transfer we created in step 2.
Each of the three steps above is transactional, thanks to the guarantees made by the App Engine datastore. What's less obvious is that the process can only proceed forwards: Once step 1 has succeeded (eg, because the user had sufficient funds in their account at the time), steps 2 and 3 will inevitably succeed - either immediately, or at some later point if something causes a transient failure. A process picking up the pieces later can easily determine which steps have been completed, and pick up where the previous process left off, without omitting or repeating anything.
Let's implement step 1, in the form of a 'transfer_funds' method:
def transfer_funds(src, dest, amount):
    def _tx():
        account = Account.get(src)
        if account.balance < amount:
            return None
        account.balance -= amount
        transfer = Transfer(
            parent=account,
            amount=-amount,
            target=dest)
        db.put([account, transfer])
        return transfer
    return db.run_in_transaction(_tx)
Straightforward, right? At the point that this function returns successfully, the transaction can only go one way - forward. If the process currently handling it dies unexpectedly, another one can pick it up later, and 'roll it forward'. Since the process of completing a transaction and rolling it forward if it fails are one and the same, we'll define a roll_forward method that completes the transaction:
def roll_forward(transfer):
    def _tx():
        dest_transfer = Transfer.get_by_key_name(
            str(transfer.key()), parent=transfer.target.key())
        if not dest_transfer:
            dest_transfer = Transfer(
                parent=transfer.target.key(),
                key_name=str(transfer.key()),
                amount=-transfer.amount,
                target=transfer.key().parent(),
                other=transfer)
            account = Account.get(transfer.target.key())
            account.balance -= transfer.amount
            db.put([account, dest_transfer])
        return dest_transfer
    dest_transfer = db.run_in_transaction(_tx)
    transfer.other = dest_transfer
    transfer.put()
This function is a little more complicated than transfer_funds, but it's still straightforward if we break it down: We pass in the transfer entity returned by transfer_funds. First, the function tries to fetch an existing Transfer for the destination account - this might already exist if a previous attempt to roll the transaction forward failed - using the receiving account as the parent, and specifying a key name based on the key of the paying account's Transfer entity. We need to specify a key name in order to ensure there can only be one matching Transfer entity for the destination account.
If the receiving account has no matching Transfer, we create one, specifying the amount and target based on the first Transfer, and setting the 'other' field to the first Transfer. Then, we fetch the Account, add the transferred amount to its funds, and put both the new Transfer and the updated Account back to the datastore.
Finally, outside the transaction, we get the returned dest_transfer entity, and update the original Transfer entity to reference it. We don't need to use another transaction when we store this entity back to the datastore, because the only possible modification of a Transfer after creating it is to set the 'other' field, which is what we're doing.
That, in a nutshell, is how to transfer money between accounts in App Engine in a robust and consistent fashion. Simply call transfer_funds(src, dest, amount), then call roll_forward() on the returned Transfer object. If you wish, you don't even have to roll_forward the transaction right away - for example, you can enqueue the key of the returned Transaction in the Task Queue, and leave it up to the task to complete the transaction, thus decreasing user-perceived latency for transfers.
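To make the moving parts easier to see outside App Engine, here is the same two-phase shape with plain in-memory dicts standing in for the datastore. This is purely illustrative: it has none of App Engine's transactional guarantees, and all the names (accounts, transfers, tid) are invented for the sketch:

```python
accounts = {"alice": 100, "bob": 0}
transfers = {}   # transfer id -> record; stands in for the Transfer entities

def transfer_funds(src, dest, amount, tid):
    if accounts[src] < amount:
        return None
    accounts[src] -= amount                    # step 1: debit and record intent
    transfers[tid] = {"dest": dest, "amount": amount, "applied": False}
    return tid

def roll_forward(tid):
    t = transfers[tid]
    if not t["applied"]:                       # idempotent: retries are no-ops
        accounts[t["dest"]] += t["amount"]     # step 2: credit the payee
        t["applied"] = True                    # step 3: mark the transfer done

tid = transfer_funds("alice", "bob", 30, "t1")
roll_forward(tid)
roll_forward(tid)          # a crashed-and-retried worker changes nothing
print(accounts)            # {'alice': 70, 'bob': 30}
```

The key property is the same as in the datastore version: once step 1 succeeds, repeating roll_forward any number of times converges on the correct balances without double-crediting.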
You may be wondering, though, how partially applied transactions get rolled forward. The solution is simple: We find Transfer entities with their 'other' field unset, and call the roll_forward method on them:
import datetime

def execute_unapplied_transactions(count=20):
    cutoff = datetime.datetime.now() - datetime.timedelta(seconds=30)
    q = Transfer.all().filter("other =", None).filter("timestamp <", cutoff)
    for transfer in q.fetch(count):
        roll_forward(transfer)
This function can be executed from a cron job or the task queue at intervals, to ensure that any failed transactions get rolled forward. If you're taking the 'deferred completion' approach described above, you can even leave it up to this method to roll forward all transactions!
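As a sketch of how the periodic execution might be wired up on App Engine, a cron.yaml entry along these lines would work (the URL handler and schedule here are illustrative assumptions, not from the original post):

```yaml
cron:
- description: roll forward unapplied transfers
  url: /tasks/roll_forward
  schedule: every 1 minutes
```

The handler mapped to that URL would simply call execute_unapplied_transactions().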
In the next post, we'll be returning to the Bulk Loader, and demonstrating how to load data directly from an SQL database, or nearly any other data source.
I have an issue that has been frustrating me for a while now.
This is an update of a crosspost which I made over a month ago.
I have been attempting to connect to URLs from Python. I have tried urllib2, urllib3, and requests. It is the same issue that I run up against in all cases. Once I get the answer, I imagine all three of them would work fine.
The issue is connecting via proxy. I have entered our proxy information but am not getting any joy. I am getting 407 codes and error messages like: HTTP Error 407: Proxy Authentication Required (Forefront TMG requires authorization to fulfill the request. Access to the Web Proxy filter is denied. )
I think that this also stops me using pip to install (at least from remotes). I get 'Cannot fetch index base URL'. I end up using git to clone a local copy of the repo and install from that.
However, I can connect using a number of other applications that go through the proxy (git and PyCharm, for example). When I run git config --get http.proxy it returns the same values and format that I am entering in Python, namely:
An example of code in requests is
import requests
proxy = {"http": ""}
url = ''
r = requests.get(url, proxies=proxy)
print r.status_code
For testing purposes I want my code to raise a socket "connection reset by peer" error, so that I can test how I handle it, but I am not sure how to raise the error.
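One way to exercise that error path, as a sketch: in Python 3, "connection reset by peer" surfaces as ConnectionResetError (a subclass of OSError, which socket.error aliases), so you can raise it directly in a test. The function names here are illustrative, not from any library:

```python
import errno

def simulate_connection_reset():
    # Raise the same exception a real peer reset would produce
    # (errno ECONNRESET, message "Connection reset by peer").
    raise ConnectionResetError(errno.ECONNRESET, "Connection reset by peer")

def fetch_with_retry():
    # Illustrative handler: catch the reset and report it instead of crashing.
    try:
        simulate_connection_reset()
    except ConnectionResetError as e:
        return "handled: %s" % e.strerror

print(fetch_with_retry())  # handled: Connection reset by peer
```

In a more realistic test you would patch the network call (e.g. with unittest.mock) to raise this exception instead of calling a helper directly.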
A list can contain different types of elements, but I am not able to understand how the max() function works with different types of elements.
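As a sketch of the behavior: in Python 3, max() on a list mixing incomparable types raises TypeError, whereas Python 2 fell back to an arbitrary but consistent ordering across types.

```python
mixed = [3, "a", 2.5]

# Comparable types work as expected: numbers are compared by value.
print(max([3, 2.5, 7]))  # 7

# In Python 3, ordering unrelated types is refused outright.
try:
    max(mixed)
except TypeError as e:
    print("TypeError:", e)
```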
I can create a list that has repeated elements of another list as follows:
xx = ["a","b"]
nrep = 3
print xx
yy = []
for aa in xx:
    for i in range(nrep):
        yy.append(aa)
print yy
output:
['a', 'b']
['a', 'a', 'a', 'b', 'b', 'b']
Is there a one-liner to create a list with repeated elements?
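One common approach, as a sketch: a nested list comprehension keeps the repeats of each element grouped together, which plain sequence multiplication does not.

```python
xx = ["a", "b"]
nrep = 3

# Nested comprehension: repeats of each element stay grouped.
yy = [aa for aa in xx for _ in range(nrep)]
print(yy)  # ['a', 'a', 'a', 'b', 'b', 'b']

# Note: plain multiplication interleaves instead of grouping.
print(xx * nrep)  # ['a', 'b', 'a', 'b', 'a', 'b']
```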
Recently I had to do some parallel processing of nested items in Python. I would read some objects from a 3rd party API, then for each object I would have to get it's child elements and do some processing on them as well. A very simplified sketch of this code is below:
example_record = {'id': 0, 'children': [{'id': 'child01', 'prop': 100}]}
executor = ThreadPoolExecutor(20)

def fetch_main_item(i):
    obj = records_to_process[i]
    return process_main_item(obj)

def process_main_item(obj):
    results = executor.map(process_child_item, obj['children'])
    return sum(results)

def process_child_item(child):
    sleep(random.random()*2)
    return child['prop']

results = executor.map(fetch_main_item, range(4))
for r in results:
    print(r)
The code ran just fine, but we wanted to have some visibility in how the processing is going, so we needed to add some logging. Just sprinkling some log statements here and there is easy, but we wanted all the logs to contain the index of the main record, even when processing the child records, which otherwise doesn't have a pointer to the parent record.
The easy and straightforward way would be to add the index to all our functions and always pass it along. But that would mean changing the signature of all our functions, which were much more, because there could be several different kinds of child objects, each being processed in a different way.
A much more elegant way would be to use
contextvars, which were added in Python 3.7. These context variables act like a global variable, but per thread. If you set a certain value in one thread, every time you read it again in the same thread, you'll get back that value, but if you read it from another thread, it will be different.
A minimal usage example:
import contextvars
from concurrent.futures.thread import ThreadPoolExecutor
from time import sleep

ctx = contextvars.ContextVar('ctx', default=10)
pool = ThreadPoolExecutor(2)

def show_context():
    sleep(1)
    print("Background thread:", ctx.get())

pool.submit(show_context)
ctx.set(15)
print("Main thread", ctx.get())
The output is:
Main thread 15
Background thread: 10
Even though the background thread prints the value after it has been set to 15 in the main thread, the value of the
ContextVar is still the default value in that thread.
This means that if we add the index to a context variable in the first function, it will be available in all other functions that run in the same thread.
import contextvars

context = contextvars.ContextVar('log_data', default=None)

def fetch_main_item(i):
    print(f"Fetching main item {i}")
    obj = records_to_process[i]
    context.set(i)
    result = process_main_item(obj)
    return result

def process_main_item(obj):
    ctx = context.get()
    results = executor.map(process_child_item, obj['children'])
    s = sum(results)
    print(f"Processing main item with {obj['id']} children at position {ctx}")
    return s

def process_child_item(child):
    sleep(random.random()*2)
    ctx = context.get()
    print(f"Processing child item {child['id']} of main item at position {ctx}")
    return child['prop']
What we changed was that in the
fetch_main_item we set the context variable to the index of the record we process, and in the other two functions we get the context.
And it works as we expect in the
process_main_item function, but not in the
process_child_item function. In this simplified example, the
id of each main record is the same as their index, and the first digit of the
id of a child record is the parent's id.
Fetching main item 0
Fetching main item 1
Fetching main item 2
Fetching main item 3
Processing child item child11 None
Processing child item child01 None
Processing child item child02 None
Processing child item child31 None
Processing child item child32 None
Processing main item with id 3 with 3
Processing child item child21 None
Processing child item child22 3
Processing child item child03 3
Processing main item with id 0 with 0
Processing child item child12 3
Processing main item with id 1 with 1
Processing child item child23 None
Processing main item with id 2 with 2
What is going on in child processing function? Why is the context sometimes
None and sometimes 3?
Well, it's because we didn't set the context on the new thread. When we spawn a bunch of new tasks in the thread pool to process the child records, sometimes they get scheduled on threads that have never been used before. In that case, the context variable hasn't been set, so it's
None. In other cases, after one of the main records is finished processing, some of the child tasks are scheduled on the thread on which the main record with
id 3 was scheduled, so the context variable has remained on that value.
The fix for this is simple. We have to propagate the context to the child tasks:
def process_main_item(obj):
    ctx = context.get()
    results = executor.map(wrap_with_context(process_child_item, ctx), obj['children'])
    s = sum(results)
    print(f"Processing main item with id {obj['id']} with {ctx}")
    return s

def wrap_with_context(func, ctx):
    def wrapper(*args):
        token = context.set(ctx)
        result = func(*args)
        context.reset(token)
        return result
    return wrapper
When calling
map, we have to wrap our function in another one which sets the context to the one we pass in manually, calls our function, resets the context and then returns the result of the function. This ensures that the functions called in a background thread have the same context:
Fetching main item 0
Fetching main item 1
Fetching main item 2
Fetching main item 3
Processing child item child11 1
Processing child item child12 1
Processing main item with id 1 with 1
Processing child item child02 0
Processing child item child01 0
Processing child item child03 0
Processing main item with id 0 with 0
Processing child item child32 3
Processing child item child31 3
Processing main item with id 3 with 3
Processing child item child22 2
Processing child item child23 2
Processing child item child21 2
Processing main item with id 2 with 2
And indeed, all the indexes are now matched up correctly.
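As an aside (not used in the post above), the standard library also offers contextvars.copy_context(), which snapshots the calling thread's context so that a callable can run inside that snapshot on another thread. A minimal sketch, with illustrative names:

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

request_id = contextvars.ContextVar('request_id', default=None)
pool = ThreadPoolExecutor(2)

def handler():
    # Reads the value carried by the *copied* context,
    # not the worker thread's own (empty) context.
    return request_id.get()

request_id.set(42)
ctx = contextvars.copy_context()  # snapshot of the main thread's context
future = pool.submit(ctx.run, handler)
print(future.result())  # 42
```

This is essentially what the manual wrap_with_context wrapper does by hand, but for all context variables at once.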
Context variables are a very nice mechanism to pass along some information, but in a sense they are global variables, so all the caveats that apply to global variables apply here too. It's easy to abuse them and to make it hard to track how the values in the context variable change. But, in some cases, they solve a real problem. For example, distributed tracing libraries, such as Jaeger, use them to be able to track how requests flow inside the program and to be able to build the call graph correctly.
Kudos to my colleague Gheorghe with whom I worked on this.
I’m publishing this as part of 100 Days To Offload - Day 10. | https://rolisz.ro/2020/05/15/context-variables-in-python/ | CC-MAIN-2020-45 | refinedweb | 1,122 | 65.56 |
Technocup 2017 Finals and Codeforces Round #403 Editorial
Auto comment: topic has been translated by Endagorion (original revision, translated revision, compare)
Why does it show 'Tutorial is not available'?
UPD: It's available now :)
because it's not available yet
Can anyone explain how Div 2 B can be solved using ternary search?
I have done this.
We analyze the total time as a function of the selected position, which I will call p. The time for each point x with speed v is abs(x - p) / v. As p varies (x and v are fixed for each friend), each of these is an absolute-value function.
What is the total time? The worst of all those individual times, formally max(t1, t2, ..., tn). If we plot the total time as a function of the selected position, the graph has no local maximum, and both the left and right sides tend to positive infinity. Since the function is continuous, there must be a unique local minimum somewhere in the middle (otherwise both sides couldn't tend to infinity). There can't be more than one minimum, because otherwise a local maximum would have to exist, and we already ruled that out. The left side of the minimum must be strictly decreasing (as it comes down from positive infinity), and the right side strictly increasing (as it goes back up to positive infinity), so the graph is roughly V-shaped.
Note that the derivative is negative on the left and positive on the right, which means we can ternary-search for the point where it changes sign: that point is the only local minimum, hence the global one, and therefore the solution to the problem.
Evaluating the function is O(n), so the total complexity is the product of O(n) and the number of ternary-search iterations (which is logarithmic in the desired precision).
25253744
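A hedged sketch of that ternary search (illustrative Python, not the linked submission):

```python
def min_meeting_time(x, v, iters=100):
    """Ternary search for the meeting point minimizing the worst travel time."""
    def total_time(p):
        # The slowest friend determines the meeting time at position p.
        return max(abs(xi - p) / vi for xi, vi in zip(x, v))

    lo, hi = min(x), max(x)
    for _ in range(iters):
        # Shrink the bracket by a third on whichever side is worse.
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if total_time(m1) < total_time(m2):
            hi = m2
        else:
            lo = m1
    return total_time((lo + hi) / 2)

print(min_meeting_time([0, 10], [1, 1]))  # two friends meeting in the middle
```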
Great explanation, I have seen this problem on virtual contest and tried to solve with similar approach. Intuitively I felt that there should be only one point where the required time is minimum. But couldn't solve during the contest, after reading this comment everything clicked and got AC.
By the way, I have checked your solution, and why use mabs? C++ has its own fabs, which does exactly the same thing.
By ignorance :) Sometimes, during a contest it is faster to write these simple functions rather than open google haha. Also sometimes I decide to avoid to use some c++ functions to reduce the risk of generate unexpected algorithm complexity, however this is not the case :)
For Div. 2 D (Div. 1 B), somebody says: try all ai; if there is a conflict, then try all bi; if both cases have conflicts, then the answer is NO. See, for example, submission 25322923, and try the input:
4
ABC DDD
ABC EEE
QWE FFF
QWR FFF
ABD, ABE, QWE, QWR
Here we don't try all b_i when there are conflicts for a_i, but only those b_i whose corresponding a_i conflict with each other.
I know, but that person's code doesn't seem to do that.
Can someone please help me understand the editorial of problem 782B — The Meeting Place Cannot Be Changed: how to iterate over all values of t, and how to apply binary search.
782B can be solved without binary search, and the running time is much better than binary-search solutions: a heuristic that enumerates all pairs, with O(n*log(n)) sorting + an O(n) scan.
In Div 2 E, why is it guaranteed that the tour exists? And even if it doesn't, why is writing the nodes in that way guaranteed to generate the answer?
edit: i understood the solution, but why is it called euler tour?
And is it just me, or should the statement have mentioned that traversing an edge twice is fine?
would someone tell me if we can visit the same edge twice? and if so, why is it called "euler tour" in the editorial?
Yeah, we can visit the same edge twice. Suppose in the dfs ordering of nodes, we backtrack all edges (in the dfs tree) once.
-> Total edges traversed = 2*(n-1) -> If we divide into k parts, we get [2*(n-1)/k] <= [2n/k] -> We can always assign dfs (preorder) accordingly.
I have no idea why it's called Euler tour, since that means we should be visiting each edge once, and end up on the beginning node.
You should code a simple recursive algorithm for the Euler tour, writing each vertex once when first visiting it and again on each return to it from a recursive call. This will produce the required sequence.
In div1D, what is meant by "optimizing boolean multiplication with bitsets" and how do we achieve this optimization?
Can you figure out what is wrong with my code in problem 781B (wrong in test 16)? 25333425
Same problem, can't find the mistake.
What does "The route is a closed polygon line in the place, with all segments parallel to one of the axes" mean? Help on Intranet of Buses!
I'm getting 'Tutorial is not available'.
And tutorials for some other rounds are also not available currently see this
KAN MikeMirzayanov please fix this.
Same here,what's the matter?
watch this legendary chess match while you wait
Woah
In Div1 B (football league problem) what is the mistake in my algorithm? I really don't see the bug.
Can someone please find why I am getting a runtime error in Python 2.7 in problem C (Div 2)?
Here is my code; a big thanks in advance
In task D (Div 2), this test crashes lots of solutions:
5
aaa b
aaa c
aaq z
aab z
aab o
The expected result is probably YES with aab, aac, aaq, aaz, aao, but that's not what they print.
What's the idea behind giving 5 seconds for Div 2 B if the intended solution runs in 46 ms (26456429)? Is it to allow for N^2 solutions, or to trick people into attempting N^2 solutions?
I believe it was done to allow solutions using slower programming languages.
interesting. There is programming language 100x slower than c++? If so, what language is that?
I'll give you a hint: the name of this language starts with letter p.
Can anyone please explain why the complexity in Div 2 B is O(n * log(eta inverse)) and not O(n * log(max(eta inverse, h-l))), where h and l are the upper and lower bounds of the binary search respectively? Thank you in advance :)
Same question here. I think it might be an error.
Endagorion That may seem too old, but for problem B Div 1, for the second part, after keeping only distinct Ai, isn't maximum matching needed?
A case like this causes a lot of accepted solutions to fail:
4
ABC D
ABC E
ABF X
ABX E
I was wondering about the same thing. Both solutions 25258392 of V--o_o--V and 25257297 of OO0OOO00O0OOO0O00OOO0OO make use of maximum matching too.
Most of the solutions used a greedy approach, which is definitely not correct: it fails on the mentioned case and similar cases.
That would match the fact that N <= 1000, otherwise if greedy was enough N could have been <= 100000
Can someone explain why the algorithm I used in Div 2 B, The Meeting Place Cannot Be Changed, is wrong? I assigned each friend their velocity plus the velocity of the last friend (using the concept of relative velocity: the last friend, the northernmost one, is given zero velocity, while every other friend moves towards him with a correspondingly greater velocity). The minimum time I get is less than the jury's answer. Here is the link to my submission
You've rolled out an application and it produces mysterious, sporadic errors? That's pretty common, even if fairly well-tested applications are exposed to real-world data. How can you track down when and where exactly your problem occurs? What kind of user data is it caused by? A debugger won't help you there.
And you don't want to keep track of only bad cases. It's helpful to log all types of meaningful incidents while your system is running in production, in order to extract statistical data from your logs later. Or, what if a problem only happens after a certain sequence of 'good' cases? Especially in dynamic environments like the Web, anything can happen at any time and you want a footprint of every event later, when you're counting the corpses.
However, with traditional logging systems, the amount of data written to the logs can be overwhelming. In fact, turning on low-level-logging on a system under heavy load can cause it to slow down to a crawl or even crash.
Log::Log4perl is different. It is a pure Perl port of the widely popular Apache/Jakarta
log4j library [3] for Java, a project made public in 1999, which has been
actively supported and enhanced by a team around head honcho Ceki
Gülcü over the years.
The comforting facts about
log4j are that it's really well thought out, it's the alternative logging
standard for Java and it's been in use for years with numerous projects. If
you don't like Java, then don't worry, you're not alone -- the
Log::Log4perl authors (yours truly among them) are all Perl hardliners
who made sure
Log::Log4perl is real Perl.
- Levels allow you to filter messages by priority, suppressing low-priority chatter while letting important incidents through.
- Categories allow you to turn logging on and off in selected parts of your system, at whatever granularity you need.
- Appenders allow you to choose which output devices the log data is being written to, once it clears the previously listed hurdles.
In combination, these three control mechanisms turn out to be very powerful. They allow you to control the logging behavior of even the most complex applications at a granular level. However, it takes time to get used to the concept, so let's start the easy way:
Getting Your Feet Wet With Log4perl
If you've used logging before, then you're probably familiar with logging priorities or levels. Each log incident is assigned a level. If this incident level is higher than the system's logging level setting (typically initialized at system startup), then the message is logged; otherwise it is suppressed.
Log::Log4perl defines five logging levels, listed here from low to high:
DEBUG INFO WARN ERROR FATAL
Let's assume that you decide at system startup that only messages of level WARN and higher are supposed to make it through. If your code then contains a log statement with priority DEBUG, then it won't ever be executed. However, if you choose at some point to bump up the amount of detail, then you can just set your system's logging priority to DEBUG and you will see these DEBUG messages starting to show up in your logs, too.
Listing
drink.pl shows an example.
Log::Log4perl is called with the
qw(:easy) target to provide a beginner's interface for us. We initialize the logging
system with
easy_init($ERROR), telling it to suppress all messages except those marked
ERROR and higher (
ERROR and
FATAL that is). In easy mode,
Log::Log4perl exports the scalars
$DEBUG,
$INFO etc. to allow the user to easily specify the desired priority.
Listing 1: drink.pl
01 use Log::Log4perl qw(:easy);
02
03 Log::Log4perl->easy_init($ERROR);
04
05 drink();
06 drink("Soda");
07
08 sub drink {
09     my($what) = @_;
10
11     my $logger = get_logger();
12
13     if(defined $what) {
14         $logger->info("Drinking ", $what);
15     } else {
16         $logger->error("No drink defined");
17     }
18 }
drink.pl defines a function,
drink(), which takes a beverage as an argument and complains if it didn't get one.
In the
Log::Log4perl world, logger objects do the work. They can be obtained by the
get_logger()
function, returning a reference to them.
There's no need to pass around logger references between your system's
functions. This effectively avoids cluttering up your beautifully crafted
functions/methods with parameters unrelated to your implementation.
get_logger() can be called by every function/method directly with little overhead in order to obtain a
logger.
get_logger makes sure that no new object is created unnecessarily. In most cases, it
will just cheaply return a reference to an already existing object
(singleton mechanism).
The logger obtained by
get_logger() (also exported by
Log::Log4perl in
:easy mode) can then be used to trigger logging incidents using the following
methods, each taking one or more messages, which they just concatenate when
it comes to printing them:
$logger->debug($message, ...);
$logger->info($message, ...);
$logger->warn($message, ...);
$logger->error($message, ...);
$logger->fatal($message, ...);
The method names correspond to message priorities:
debug() logs with level
DEBUG,
info with
INFO and so forth. You might think that five levels are not enough to effectively
block the clutter and let through what you actually need. But before
screaming for more, read on.
Log::Log4perl has different, more powerful mechanisms to control the amount of output
you're generating.
drink.pl uses
$logger->error() to log an error if a parameter is missing and
$logger->info() to tell what it's doing in case everything's OK. In
:easy mode, log messages are just written to STDERR, so the output we'll see from
drink.pl will be:
2002/08/04 11:43:09 ERROR> drink.pl:16 main::drink - No drink defined
Along with the current date and time, this informs us that in line 16 of
drink.pl, inside the function
main::drink(), a message of priority ERROR was submitted to the log system. Why isn't
there another message for the second call to
drink(), which provides the beverage as required? Right, we've set the system's
logging priority to
ERROR, so
INFO-messages are being suppressed. Let's correct that and change line 3 in
drink.pl to:
Log::Log4perl->easy_init($INFO);
This time, both messages make it through:
2002/08/04 11:44:59 ERROR> drink.pl:16 main::drink - No drink defined
2002/08/04 11:44:59 INFO> drink.pl:14 main::drink - Drinking Soda
Also, please note that the
info() function was called with two arguments but just concatenated them to form a
single message string.
Moving On to the Big Leagues
The
:easy target brings beginners up to speed with
Log::Log4perl quickly. But what if you don't want to log your messages solely to STDERR,
but to a logfile, to a database or simply STDOUT instead? Or, if you'd like
to enable or disable logging in certain parts of your system independently?
Let's talk about categories and appenders for a second.
Logger Categories
In
Log::Log4perl, every logger has a category assigned to it. Logger Categories are a way
of identifying loggers in different parts of the system in order to change
their behavior from a central point, typically in the system startup
section or a configuration file.
Every logger has its place in the logger hierarchy. Typically, this
hierarchy resembles the class hierarchy of the system. So if your system
defines a class hierarchy
Groceries,
Groceries::Food and
Groceries::Drinks, then chances are that your loggers follow the same scheme.
To obtain a logger that belongs to a certain part of the hierarchy, just
call
get_logger with a string specifying the category:
######### System initialization section ###
use Log::Log4perl qw(get_logger :levels);

my $food_logger = get_logger("Groceries::Food");
$food_logger->level($INFO);
This snippet is from the initialization section of the system. It defines
the logger for the category
Groceries::Food and sets its priority to
INFO with the
level() method.
Without the
:easy target, we have to pass the arguments
get_logger and
:levels to
use Log::Log4perl in order to get the
get_logger function and the level scalars (
$DEBUG,
$INFO, etc.) imported to our program.
Later, most likely inside functions or methods in a package called
Groceries::Food, you'll want to obtain the logger instance and send messages to it. Here's
two methods,
new() and
consume(),
that both grab the (yes, one) instance of the
Groceries::Food logger in order to let the user know what's going on:
######### Application section #############
package Groceries::Food;

use Log::Log4perl qw(get_logger);

sub new {
    my($class, $what) = @_;

    my $logger = get_logger("Groceries::Food");

    if(defined $what) {
        $logger->debug("New food: $what");
        return bless { what => $what }, $class;
    }

    $logger->error("No food defined");
    return undef;
}

sub consume {
    my($self) = @_;

    my $logger = get_logger("Groceries::Food");
    $logger->info("Eating $self->{what}");
}
Since we've defined the
Groceries::Food logger earlier to carry priority
$INFO, all messages of priority
INFO and higher are going to be logged, but
DEBUG messages won't make it through -- at least not in the
Groceries::Food part of the system.
So do you have to initialize loggers for all possible classes of your
system? Fortunately,
Log::Log4perl uses inheritance to make it easy to specify the behavior of entire armies
of loggers. In the above case, we could have just said:
######### System initialization section ###
use Log::Log4perl qw(get_logger :levels);

my $food_logger = get_logger("Groceries");
$food_logger->level($INFO);
and not only the logger defined with category
Groceries would carry the priority
INFO, but also all of its descendants -- loggers defined with categories
Groceries::Food,
Groceries::Drinks::Beer and all of their subloggers will inherit the level setting from the
Groceries
parent logger (see figure 1).
Of course, any child logger can choose to override the parent's
level() setting -- in this case the child's setting takes priority. We'll talk
about typical use cases shortly.
At the top of the logger hierarchy sits the so-called root logger, which doesn't have a name. This is what we've used earlier with the
:easy target: It initializes the root logger that we will retrieve later via
get_logger() (without arguments). By the way, nobody forces you to name your logger
categories after your system's class hierarchy. But if you're developing a
system in object-oriented style, then using the class hierarchy is usually the
best choice. Think about the people taking over your code one day: The
class hierarchy is probably what they know up front, so it's easy for
them to tune the logging to their needs.
Let's summarize: Every logger belongs to a category, which is either the
root category or one of its direct or indirect descendants. A category can
have several children but only one parent, except the root category, which
doesn't have a parent. In the system's initialization section, loggers can
define their priority using the
level() method and one of the scalars
$DEBUG,
$INFO, etc. which can be imported from
Log::Log4perl using the
:levels target.
While loggers must be assigned to a category, they may choose not
to set a level. If their actual level isn't set, then they inherit the level of the first parent or ancestor
with a defined level. This will be their effective priority. At the top of the category hierarchy resides the root logger, which always carries a default priority of
DEBUG. If no one else defines a priority, then all unprioritized loggers inherit
their priority from the root logger.
Categories allow you to modify the effective priorities of all your loggers
in the system from a central location. With a few commands in the system
initialization section (or, as we will see soon, in a
Log::Log4perl configuration file), you can remote-control low-level debugging in a small
system component without changing any code. Category inheritance enables
you to modify larger parts of the system with just a few keystrokes.
Appenders
But just a logger with a priority assigned to it won't log your message anywhere. This is what appenders are for. Every logger (including the root logger) can have one or more appenders attached to them, objects, that take care of sending messages without further ado to output devices like the screen, files or the syslog daemon. Once a logger has decided to fire off a message because the incident's effective priority is higher or equal than the logger level, all appenders attached to this logger will receive the message -- in order to forward it to each appender's area of expertise.
Moreover, and this is very important,
Log::Log4perl will walk up the hierarchy and forward the message to every appender
attached to one of the logger's parents or ancestors.
Log::Log4perl makes use of all appenders defined in the
Log::Dispatch
namespace, a separate set of modules, created by Dave Rolsky and others, all freely available on CPAN. There's appenders to write to the
screen (
Log::Dispatch::Screen), to a file (
Log::Dispatch::File), to a database (
Log::Dispatch::DBI), to send messages via e-mail (
Log::Dispatch::Email), and many more.
New appenders are defined using the
Log::Log4perl::Appender class. The exact number and types of parameters required depends on the
type of appender used, here's the syntax for one of the most common ones,
the logfile appender, which appends its messages to a log file:
# Appenders
my $appender = Log::Log4perl::Appender->new(
    "Log::Dispatch::File",
    filename => "test.log",
    mode     => "append",
);

$food_logger->add_appender($appender);
This will create a new appender of the class
Log::Dispatch::File, which will append messages to the file
test.log. If we had left out the
mode => "append" pair, then it would just overwrite the file each time at system startup.
The wrapper class
Log::Log4perl::Appender
provides the necessary glue around
Log::Dispatch modules to make them usable by
Log::Log4perl as appenders. This tutorial shows only the most common ones:
Log::Dispatch::Screen to write messages to STDOUT/STDERR and
Log::Dispatch::File, to print to a log file. However, you can use any
Log::Dispatch-Module with
Log::Log4perl. To find out what's available and what their respective parameter
settings are, please refer to the detailed
Log::Dispatch documentation. Using
add_appender(), you can attach as many appenders to any logger as you like.
After passing the newly created appender to the logger's
add_appender()
method like in
$food_logger->add_appender($appender);
it is attached to the logger and will handle its messages if the logger decides to fire. Also, it will handle messages percolating up the hierarchy if a logger further down decides to fire.
This will cause our
Log::Dispatch::File appender to add the following line
INFO - Eating Sushi
to the logfile
test.log. But wait -- where did the nice formatting with date, time, source file
name, line number and function go we saw earlier on in
:easy mode? By simply specifying an appender without defining its layout,
Log::Log4perl
just assumed we wanted the no-frills log message layout
SimpleLayout, which just logs the incident priority and the message, separated by a
dash.
Layouts
If we want to get fancier (the previously shown
:easy target did this behind our back), then we need to use the more flexible
PatternLayout
instead. It takes a format string as an argument, in which it will --
similar to
printf() -- replace a number of placeholders by
their actual values when it comes down to log the message. Here's how to
attach a layout to our appender:
# Layouts
my $layout = Log::Log4perl::Layout::PatternLayout->new(
    "%d %p> %F{1}:%L %M - %m%n");
$appender->layout($layout);
Since
%d stands for date and time,
%p for priority,
%F for
the source file name,
%M for the method executed,
%m for the log message and
%n for a newline, this
layout will cause the appender to write the message like this:
2002/08/06 08:26:23 INFO> eat.pl:56 Groceries::Food::consume - Eating Sushi
The
%F{1} is special in that it takes the right-most component of the file, which
usually consists of the full path -- just like the
basename() function does.
That's it -- we've got
Log::Log4perl ready for the big league. Listing
eat.pl shows the entire "system": Startup code, the main program and the
application wrapped into the
Groceries::Food class.
Listing 2: eat.pl
######### System initialization section ###
use Log::Log4perl qw(get_logger :levels);

my $food_logger = get_logger("Groceries::Food");
$food_logger->level($INFO);

# Appenders
my $appender = Log::Log4perl::Appender->new(
    "Log::Dispatch::File",
    filename => "test.log",
    mode     => "append",
);

$food_logger->add_appender($appender);

# Layouts
my $layout =
    Log::Log4perl::Layout::PatternLayout->new(
        "%d %p> %F{1}:%L %M - %m%n");
$appender->layout($layout);

######### Run it ##########################
my $food = Groceries::Food->new("Sushi");
$food->consume();

######### Application section #############
package Groceries::Food;

use Log::Log4perl qw(get_logger);

sub new {
    my($class, $what) = @_;

    my $logger = get_logger("Groceries::Food");

    if(defined $what) {
        $logger->debug("New food: $what");
        return bless { what => $what }, $class;
    }

    $logger->error("No food defined");
    return undef;
}

sub consume {
    my($self) = @_;

    my $logger = get_logger("Groceries::Food");
    $logger->info("Eating $self->{what}");
}
Beginner's Pitfalls
Remember when we said that if a logger decides to fire, then it forwards the message to all of its appenders and also has it bubble up the hierarchy to hit all other appenders it meets on the way up?
Don't underestimate the ramifications of this statement. It usually puzzles
Log::Log4perl beginners. Imagine the following logging requirements for a new system:
- Messages of level FATAL are supposed to be written to STDERR, no matter which subsystem has issued them.
- Messages issued by the Groceries category, prioritized DEBUG and higher, need to be appended to a log file for debugging purposes.
Easy enough: Let's set the root logger to
FATAL and attach a
Log::Dispatch::Screen appender to it. Then, let's set the
Groceries logger to
DEBUG and attach a
Log::Dispatch::File appender to it.
Now, if any logger anywhere in the system issues a
FATAL message and decides to 'fire,' the message will bubble up to the top of the
logger hierarchy, be caught by every appender on the way and ultimately
end up at the root logger's appender, which will write it to STDERR as
required. Nice.
But what happens to DEBUG messages originating within
Groceries? Not only will the
Groceries logger 'fire' and forward the message to its appender, but it will also
percolate up the hierarchy and end up at the appender attached to the root
logger. And, it's going to fill up STDERR with DEBUG messages from
Groceries, whoa!
This kind of unwanted appender chain reaction causes duplicated logs. Here are two mechanisms to keep it in check:
Each logger carries an additivity flag. If this is set to a false value, like in
$logger->additivity(0);
then the message won't bubble up further in the hierarchy after this logger's appenders have processed it.
Each appender can define a so-called appender threshold, a minimum level required for an oncoming message to be honored by the appender:
$appender->threshold($ERROR);
If the level doesn't meet the appender's threshold, then it is simply ignored by this appender.
In the case above, setting the additivity flag of the
Groceries logger to a false value won't have the desired effect, because it will
stop FATAL messages of the
Groceries category from being forwarded to the root appender. However, setting the threshold of the root
logger's appender to
FATAL will do the trick: DEBUG messages bubbling up from
Groceries will simply be ignored.
Compact Logger Setups With Configuration Files
Configuring
Log::Log4perl can be accomplished outside of your program in a configuration file. In
fact, this is the most compact and the most common way of specifying the
behavior of your loggers. Because
Log::Log4perl originated out of the Java-based
log4j system, it understands
log4j configuration files:
log4perl.logger.Groceries=DEBUG, A1
log4perl.appender.A1=Log::Dispatch::File
log4perl.appender.A1.filename=test.log
log4perl.appender.A1.mode=append
log4perl.appender.A1.layout=Log::Log4perl::Layout::PatternLayout
log4perl.appender.A1.layout.ConversionPattern=%d %p> %F{1}:%L %M - %m%n
This defines a logger of the category
Groceries, whose priority is set to DEBUG. It has the appender
A1 attached to it, which is later resolved to be a new
Log::Dispatch::File appender with various settings and a PatternLayout with a user-defined
format (
ConversionPattern).
If you store this in
eat.conf and initialize your system with
Log::Log4perl->init("eat.conf");
then you're done. The system's compact logging setup is now separated from the application and can be easily modified by people who don't need to be familiar with the code, let alone Perl.
Or, if you store the configuration description in
$string, then you can initialize it with
Log::Log4perl->init(\$string);
You can even have your application check the configuration file at regular intervals (this obviously works only with files, not with strings):
Log::Log4perl->init_and_watch("eat.conf", 60);
checks
eat.conf every 60 seconds upon log requests and reloads everything and
re-initializes itself if it detects a change in the configuration file.
With this, it's possible to tune your logger settings while the system is running without restarting it!
The compatibility of
Log::Log4perl with
log4j goes so far that
Log::Log4perl even understands
log4j
Java classes as appenders and maps them, if possible, to the corresponding
ones in the
Log::Dispatch namespace.
Log::Log4perl will happily process the following Java-fied version of the configuration
shown at the beginning of this section:
log4j.logger.Groceries=DEBUG, A1
log4j.appender.A1=org.apache.log4j.FileAppender
log4j.appender.A1.File=test.log
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%F %L %p %t %c - %m%n
The Java-specific
FileAppender class will be mapped by
Log::Log4perl
to
Log::Dispatch::File behind the scenes and the parameters adjusted (The Java-specific
File will become
filename and an additional parameter
mode will be set to
"append" for the
Log::Dispatch world).
Typical Use Cases
The configuration file format is more compact than the Perl code, so let's use it to illustrate some real-world cases (although you could do the same things in Perl, of course!):
We've seen before that a configuration line like:
log4perl.logger.Groceries=DEBUG, A1
will turn on logging in
Groceries::Drink and
Groceries::Food (and all of their descendants if they exist) with priority DEBUG via
inheritance. What if
Groceries::Drink gets a bit too noisy and you want to raise its priority to at least INFO
while keeping the DEBUG setting for
Groceries::Food? That's easy, no need to change your code, just modify the configuration
file:
log4perl.logger.Groceries.Drink=INFO, A1
log4perl.logger.Groceries.Food=DEBUG, A1
or, you could use inheritance to accomplish the same thing. You define INFO
as the priority for
Groceries and override
Groceries.Food with a less restrictive setting:
log4perl.logger.Groceries=INFO, A1
log4perl.logger.Groceries.Food=DEBUG, A1
Groceries::Food will be still on
DEBUG after that, while
Groceries and
Groceries::Drink will be on
INFO.
Or, you could choose to turn on detailed DEBUG logging all over the system
and just bump up the minimum level for the noisy
Groceries.Drink:
log4perl.logger=DEBUG, A1
log4perl.logger.Groceries.Drink=INFO, A1
This sets the root logger to
DEBUG, which all other
loggers in the system will inherit. Except
Groceries.Drink and its descendants, of course, which will carry the
INFO priority.
Or, similarly to what we've talked about in the Beginner's Pitfalls
section, what if you wanted to print FATAL messages system-wide to
STDOUT, while turning on detailed logging under
Groceries::Food
and writing the messages to a log file? Use this:
log4perl.logger=FATAL, Screen
log4perl.logger.Groceries.Food=DEBUG, Log
log4perl.appender.Screen=Log::Dispatch::Screen
log4perl.appender.Screen.stderr=0
log4perl.appender.Screen.Threshold=FATAL
log4perl.appender.Screen.layout=Log::Log4perl::Layout::SimpleLayout
log4perl.appender.Log=Log::Dispatch::File
log4perl.appender.Log.filename=test.log
log4perl.appender.Log.mode=append
log4perl.appender.Log.layout=Log::Log4perl::Layout::SimpleLayout
As mentioned in Appenders, setting the appender threshold of the screen appender to FATAL keeps
DEBUG messages out of the root appender and so effectively prevents message
duplication.
According to the
Log::Dispatch::Screen documentation, setting its
stderr attribute to a false value causes it to log to STDOUT instead of STDERR.
log4perl.appender.XXX.layout is the configuration file way to specify the no-frills Layout seen earlier.
You could also have multiple appenders attached to one category, like in
log4perl.logger.Groceries=DEBUG, Log, Database, Emailer
if you had
Log::Dispatch-type appenders defined for
Log,
Database
and
Emailer.
Performance Penalties and How to Minimize Them
Logging comes with a (small) price tag: We figure out at runtime
if a message is going to be logged or not.
Log::Log4perl's primary design directive has been to run this check at maximum speed in
order to avoid slowing down the application. Internally, it has been highly
optimized so that even if you're using large category hierarchies, the
impact of a call to e.g.
$logger->debug() in non-
DEBUG mode is negligible.
While
Log::Log4perl tries hard not to impose a runtime penalty on your application, it has
no control over the code leading to
Log::Log4perl calls and needs your cooperation with that. For example, take a look at
this:
use Data::Dumper;

$log->debug("Dump: ", Dumper($resp));
Passing arguments to the logging functions can impose a severe runtime
penalty, because there are often expensive operations going on before the
arguments are actually passed on to
Log::Log4perl's logging functions. The snippet above will have
Data::Dumper completely unravel the structure of the object behind
$resp, pass the whole slew on to
debug(), which might then very well decide to throw it away. If the effective
debug level for the current category isn't high enough to actually forward
the message to the appropriate
appender(s), then we should have
never called
Dumper() in the first place.
With this in mind, the logging functions accept not only strings as arguments, but also subroutine references. If the logger is actually firing, it calls the subroutine behind the reference and takes its output as the message:
$log->debug("Dump: ", sub { Dumper($resp) } );
The snippet above won't call
Dumper() right away, but pass on the subroutine reference to the logger's
debug() method instead. Perl's closure mechanism will make sure that the value of
$resp is preserved, even when the subroutine is handed
over to
Log::Log4perl's lower-level functions. Once
Log::Log4perl decides that the message is indeed going to be logged, it will execute
the subroutine, take its return value as a string and log it.
Also, your application can help out and check if it's necessary to pass any parameters at all:
if($log->is_debug()) {
    $log->debug("Interpolation: @long_array");
}
At the cost of a little code duplication, we avoid interpolating a huge array into the log string in case the effective log level prevents the message from being logged anyway.
Installation
Log::Log4perl is freely available from CPAN. It also requires the presence of two other
modules,
Log::Dispatch (2.00 or better, which is a bundle itself) and
Time::HiRes (1.20 or better). If you're using the CPAN shell to install
Log::Log4perl, then it will resolve these and other recursive dependencies for you
automatically and download the required modules one by one from CPAN.
At the time this article went to print, 0.22 was the stable release of
Log::Log4perl, available from [1] and CPAN. Also on [1], the CVS source tree is publicly
available for those who want the (sometimes shaky) bleeding development
edge. The CPAN releases, on the other hand, are guaranteed to be stable.
If you have questions, requests for new features, or if you want to
contribute a patch to
Log::Log4perl, then please send them to our mailing list at
log4perl-devel@lists.sourceforge.net
on SourceForge.
Project Status and Similar Modules
Log::Log4perl has been inspired by
Tatsuhiko Miyagawa's clever
Log::Dispatch::Config module, which provides a wrapper around the
Log::Dispatch
bundle and understands a subset of the
log4j configuration file syntax. However,
Log::Dispatch::Config
does not provide a full Perl API to
log4j -- and this is a key issue which
Log::Log4perl has been designed to address.
Log::Log4perl is a
log4j port, not just a subset.
The
Log::Log4perl project is still under development, but its API has reached a fairly mature
state, where we will change things only for (very) good reasons. There are
still a few items on the to-do list, but these are mainly esoteric features
of
log4j that still need to find their way into
Log::Log4perl, since the overall goal is to keep it compatible. Also,
Log::Log4perl isn't thread safe yet -- but we're working on it.
Thanks
Special thanks go to fellow Log4perl founder Kevin Goess (cpan@goess.org), who wrote half of the code, helped generously to correct the manuscript for this article and invented these crazy performance improvements, making log4j jealous!
Mission
Scatter plenty of debug statements all over your code -- and put them to
sleep via the
Log::Log4perl configuration. Let the INFO, ERROR and FATAL statements print to a log
file. If you run into trouble, then lower the level in selected parts of the
system, and redirect the additional messages to a different file. The
dormant DEBUG statements won't cost you anything -- but if you run into
trouble, then they might save the day, because your system will have an embedded
debugger on demand. Have fun!
Infos
[1] The log4perl project page on SourceForge:
[2] The Log::Log4perl documentation:
[3] The log4j project page on the Apache site:
[4] Documentation to Log::Dispatch modules:
char * tmpnam ( char * str );
<cstdio>
Generate temporary filename
A string containing a filename different from any existing file is generated. This string can be used to create a temporary file without overwriting any other existing file.

If the str argument is a null pointer, the resulting string is stored in an internal static array that can be accessed by the return value. The content of this string is preserved until a subsequent call to this same function erases it.

If the str argument is not a null pointer, it must point to an array of at least L_tmpnam bytes that will be filled with the proposed temporary filename. L_tmpnam is a macro constant defined in <cstdio>.

The filename returned by this function can be used to create a regular file with fopen, to be used as a temporary file. A file created this way, unlike one created with tmpfile, is not automatically deleted when closed; you should call remove to delete the file once it is closed.
/* tmpnam example */
#include <stdio.h>
int main ()
{
char buffer [L_tmpnam];
char * pointer;
tmpnam (buffer);
printf ("Tempname #1: %s\n",buffer);
pointer = tmpnam (NULL);
printf ("Tempname #2: %s\n",pointer);
return 0;
} | http://www.cplusplus.com/reference/clibrary/cstdio/tmpnam/ | crawl-002 | refinedweb | 196 | 60.95 |
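Since tmpnam only proposes a name, a typical pattern is to create the file yourself with fopen and clean it up with remove. Here is a minimal sketch of that pattern (the roundtrip helper name is ours, not part of the standard library):

```c
#include <stdio.h>
#include <string.h>

/* Write `text` to a fresh temp file named by tmpnam, read it back into
   `out`, then delete the file. Returns 0 on success, -1 on any failure. */
int roundtrip(const char *text, char *out, size_t outsize)
{
    char name[L_tmpnam];
    if (tmpnam(name) == NULL)        /* generate a filename no existing file uses */
        return -1;

    FILE *fp = fopen(name, "w+");    /* the name is only a proposal until we create it */
    if (fp == NULL)
        return -1;

    fputs(text, fp);
    rewind(fp);
    if (fgets(out, (int)outsize, fp) == NULL)
        out[0] = '\0';
    fclose(fp);

    /* Unlike tmpfile(), a file created from a tmpnam() name is NOT
       deleted automatically on close -- remove it explicitly. */
    remove(name);
    return 0;
}
```

On success, out holds the text that was written, and no file is left behind.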
Originally posted at carloscuesta's blog
It's been a while since I've started working with React and React-Native in production. One of the greatest things about React is the flexibility the library gives to you. Meaning that you are free to decide how do you want to implement almost every detail of your project for example the architecture and structure.
However, in the long term this freedom could lead to a complex and messy codebase, especially if you don't follow a pattern. In this post I'll explain a simple way to organize and structure React Components.
A Component is a JavaScript function or class that returns a piece of UI.
We're going to create an
EmojiList component and then we are going to refactor it breaking it up into smaller isolated pieces applying the folder pattern. Here's how our component looks like:
EmojiList
As I mentioned before, we can start really simple and small, without following any pattern. This is our
EmojiList component contained in a single function.
If you open the CodeSandbox sidebar you'll see that our file tree looks like this:
.
├── components
│   ├── EmojiList.js
│   └── styles.js
└── index.js
There's nothing wrong with this approach. But on larger codebases that kind of component becomes hard to maintain, because there are a lot of things in it: state, UI, data... Take a look at our component code below 👇
EmojiList.js
import React from "react"
import styles from "./styles"

class EmojiList extends React.Component {
  state = { searchInput: "", emojis: [] }

  render() {
    const emojis = this.state.emojis.filter(emoji =>
      emoji.code.includes(this.state.searchInput.toLowerCase())
    )

    return (
      <ul style={styles.list}>
        <input
          style={styles.searchInput}
          placeholder="Search by name"
          type="text"
          value={this.state.searchInput}
          onChange={event => this.setState({ searchInput: event.target.value })}
        />
        {emojis.map((emoji, index) => (
          <li key={index} style={styles.item}>
            <div style={styles.icon}>{emoji.emoji}</div>
            <div style={styles.content}>
              <code style={styles.code}>{emoji.code}</code>
              <p style={styles.description}>{emoji.description}</p>
            </div>
          </li>
        ))}
      </ul>
    )
  }
}

export default EmojiList
A step to improve this code, would be to create separate components into the same file and then using them at the main component. However, you'll be sharing styles among other things and that could be confusing.
Refactor
Let's start refactoring the single component into multiple ones by breaking up the UI into a component hierarchy.
If we take a look at the image, it's easy to identify that we can break up our UI in three different components: 🛠
EmojiList: Combines the smaller components and shares the state down.
SearchInput: Receives user input and displays the search bar.
EmojiListItem: Displays the List Item for each emoji, with the icon, name and description.
We're going to create a folder for each component, with two files, an
index.js that is going to hold all the code for the component and the
styles.js. That's one of the good things about this pattern. Every component defines his own UI and styles, isolating this piece of code from another components that doesn't need to know anything about them.
Notice that inside the
EmojiList folder, (that is a component), we add two nested components that only will be used within the
EmojiList component. Again, that's because these two components aren't going to be used out of that context. This helps reducing the visual clutter a lot.
.
├── EmojiList
│   ├── EmojiListItem
│   │   ├── index.js
│   │   └── styles.js
│   ├── SearchInput
│   │   ├── index.js
│   │   └── styles.js
│   ├── index.js
│   └── styles.js
└── index.js
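Each styles.js holds only the style objects its component uses. For instance, a SearchInput/styles.js might look like this — a sketch, with illustrative values that are not from the original project:

```javascript
// SearchInput/styles.js -- inline-style objects scoped to this component only
const styles = {
  searchInput: {
    border: "1px solid #e0e0e0",
    borderRadius: 4,
    fontSize: 16,
    padding: 8,
    width: "100%"
  }
}

export default styles
```

Because every component imports its own styles file, removing or refactoring a component can never silently break another component's styling.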
Now let's isolate and separate the code into the three components from the smallest to the biggest one:
EmojiListItem/
This component renders every emoji item that will appear on the list.
import React from "react"
import styles from "./styles"

const EmojiListItem = (props) => (
  <li style={styles.item}>
    <div style={styles.icon}>{props.emoji}</div>
    <div style={styles.content}>
      <code style={styles.code}>{props.code}</code>
      <p style={styles.description}>{props.description}</p>
    </div>
  </li>
)

export default EmojiListItem
SearchInput/
This component receives the user input and updates the state of the parent component.
import React from "react"
import styles from "./styles"

const SearchInput = (props) => (
  <input
    style={styles.searchInput}
    placeholder="Search by name"
    type="text"
    value={props.value}
    onChange={props.onChange}
  />
)

export default SearchInput
EmojiList/
This is the top level component, holds the state and data of our example and imports the other components to recreate the whole UI of our tiny application. Isolating components makes the render method more readable and easier to understand ✨.
import React from "react"
import SearchInput from "./SearchInput"
import EmojiListItem from "./EmojiListItem"
import styles from "./styles"

class EmojiList extends React.Component {
  state = { searchInput: "", emojis: [] }

  render() {
    const emojis = this.state.emojis.filter(emoji =>
      emoji.code.includes(this.state.searchInput.toLowerCase())
    )

    return (
      <ul style={styles.list}>
        <SearchInput
          onChange={(event) => this.setState({ searchInput: event.target.value })}
          value={this.state.searchInput}
        />
        {emojis.map((emoji, index) => (
          <EmojiListItem
            key={index}
            code={emoji.code}
            description={emoji.description}
            emoji={emoji.emoji}
          />
        ))}
      </ul>
    )
  }
}

export default EmojiList
That's basically the architecture that I use at the company I'm working on. I'm pretty satisfied with the experience of using this pattern. Our components turned out a lot easier to maintain and use. Anyway there are no silver bullets on Software Engineering, so figure what works best for you or your team!
Discussion (2)
Thanks for the article.
You have the input as a child of the ul; you should wrap it inside an li or, semantically, move it above the ul.
I like the idea behind folder pattern, but I am not a fan of having tons of index.js files. I suppose you set the editor to show the folder name of the opened file as well or else you would be having lots of index.js files and it would be impossible to switch between them.
I like the container component pattern. You have a container component that does all the data fetching and formatting, and then you have the view component that only acts as the view/presentational component. It's actually very similar to the folder pattern, except it uses different names and in most cases does not contain folders, but it depends on the use case and how many child views the main component has.
Yeah semantically, the Input should be placed out of the ul element.
In my opinion, tools should not change the way we structure and code our projects. Tools must be improved and changed if needed. On my specific use case I use Atom and I switch between files using the ⌘+P and the name of the folder.
Thanks for sharing your opinion! | https://practicaldev-herokuapp-com.global.ssl.fastly.net/carloscuesta/scalable-react-components-architecture-3k2e | CC-MAIN-2021-39 | refinedweb | 1,094 | 59.4 |
ESP8266 Firmata-J5 NodeBot
Introduction: ESP8266 Firmata-J5 NodeBot
The ESP8266 is a WIFI-enabled SoC available in many shapes and forms.
It usually comes with the NodeMCU firmware, which runs Lua scripts. Most of us just flash it with the Arduino core provided by the ESP8266 community.
This is thanks to the great work by soundanalogous, jacobrosenthal and jnsbyr (work on Firmata firmware) and ofcourse rwaldron for Johnny-five library.
This is just a guide about how to make it work.
Also:
1. The ESP8266 works as a Wi-Fi server. An option to configure it as a Wi-Fi client will be added in the future.
2. Analog Read does not seem to be working for the time being, but work is in progress.
3. This is all new and the dependencies may get updated often.
Here we are going to use the StandardFirmataWiFi to use ESP8266 as server, connect to a network and run a node.js J-5 script on a host machine connected to same network to blink LED pin 13.
This was tested on macosX
Step 1: Ingredients
- Libraries
- Latest Arduino IDE (1.6.8 as of writing)
- ESP8266 Core
- Latest Arduino Firmata Library with support for ESP8266
- Latest johnny-five library
- Etherport-client
- Computer
- host machine (computer, Raspberry Pi, etc.) capable of running Node.js
- Patience
- Based on your board, you might also need a USB-serial converter
Also, I assume you have experience with programming ESP8266-based boards with the Arduino IDE;
otherwise, please first read here
Step 2: Some Information
Johnny-Five:
JavaScript Robotics and IoT programming framework, developed at Bocoup.
StandardFirmataWiFi
StandardFirmataWiFi is a WiFi server application. You will need a Firmata client library with
a network transport in order to establish a connection with StandardFirmataWiFi. To use StandardFirmataWiFi you will need to have one of the following boards or shields:
- Arduino WiFi Shield (or clone)
- Arduino WiFi Shield 101
- Arduino MKR1000 board (built-in WiFi 101)
- ESP8266 WiFi board compatible with ESP8266 Arduino core
EtherPortClient
Client-side virtual serial port for Etherport for the implementation of firmata-compatible boards and tethering hubs to control boards by a remote entity.
Step 3: Arduino IDE Configuration
- Get the latest Arduino IDE
Install the appropriate IDE for your platform
2. Install the ESP8266 core
Installing with Boards Manager: open the IDE Preferences and enter the ESP8266 boards manager package URL into the Additional Boards Manager URLs field. You can add multiple URLs, separating them with commas. Open Boards Manager from the Tools > Board menu and install the esp8266 platform (and don't forget to select your ESP8266 board from the Tools > Board menu after installation). The best place to ask questions related to this core is the ESP8266 community forum:...
3. Download the Latest Arduino Firmata Library
Updating Firmata in the Arduino IDE - Arduino 1.6.4 and higher
If you want to update to the latest stable version:
- Open the Arduino IDE and navigate to: Sketch > Include Library > Manage Libraries
- Filter by "Firmata" and click on the "Firmata by Firmata Developers" item in the list of results.
- Click the Select version dropdown and select the most recent version (note you can also install previous versions).
- Click Install.
Cloning Firmata
If you are contributing to Firmata or otherwise need a version newer than the latest tagged release, you can clone Firmata directly to your Arduino/libraries/ directory (where 3rd party libraries are installed). This only works for Arduino 1.6.4 and higher; for older versions you need to clone into the Arduino application directory (see section below titled "Using the Source code rather than release archive"). Be sure to change the name to Firmata as follows:

$ git clone git@github.com:firmata/arduino.git ~/Documents/Arduino/libraries/Firmata

Update the path above if you're using Windows or Linux or changed the default Arduino directory on OS X
Now we need to modify the libraries accordingly.
Step 4: Making Necessary Changes
- Open the StandardFirmataWiFi example from the examples (check the picture)
- This will open StandardFirmataWiFi.ino along with wifiConfig.h
Uncomment / comment the appropriate set of includes for your hardware (OPTION A, B or C). Option A is enabled by default.
4. It is already configured for the ESP8266. Find Step 2:
// STEP 2 [REQUIRED for all boards and shields]
// replace this with your wireless network SSID
char ssid[] = "your_network_name";
change "your_network_name" to yours appropriately, e.g "myHomewifi"
5. In this example we are using DHCP so we are not configuring Step 3. If you need to use static IP please follow the instructions provided in the wifiConfig.h file itself
6. Configure your home network password: the default is Option A
/*
 * OPTION A: WPA / WPA2
 *
 * WPA is the most common network security type. A passphrase is required to connect to this type.
 *
 * To enable, leave #define WIFI_WPA_SECURITY uncommented below, set your wpa_passphrase value
 * appropriately, and do not uncomment the #define values under options B and C
 */
#define WIFI_WPA_SECURITY
#ifdef WIFI_WPA_SECURITY
char wpa_passphrase[] = "your_wpa_passphrase";
#endif  // WIFI_WPA_SECURITY
7. Flash your ESP8266 using the settings corresponding to your board
e.g. I use the NodeMCU 0.9 development board. I use an FTDI-TTL adapter to program the MCU.
You need to select the appropriate Port settings.
It might say the file is Read-only. Just save another copy
Step 5: Setting Up Johnny-Five!
Install Johnny-Five:
You need to have Node.js already installed on your system. Read Here
Source Code:
git clone git://github.com/rwaldron/johnny-five.git && cd johnny-five
npm install
npm package:
Install the module with:
npm install johnny-five
We also need the Etherport-client library:
Client-side virtual serial port for Etherport for the implementation of firmata-compatible boards and tethering hubs to control boards by a remote entity.
npm install etherport-client
Writing JS Client
Create a js file in the johnny-five directory and name it appropriately.
Paste this code:
Update line 18 below to the ESP8266 board address
/*
 * Update line 18 below to the ESP8266 board address
 *
 * Enable Serial debugging by uncommenting //#define SERIAL_DEBUG in StandardFirmataWiFi
 * (save a copy of StandardFirmataWiFi first)
 *
 * On startup (you may have to reset the ESP board because it starts up really fast)
 * view the Serial output to see the assigned IP address (if using DHCP)
 * Or if you want to give the board a static IP (not use DHCP) then uncomment the
 * following lines in wifiConfig.h and update to your chosen IP address:
 * #define STATIC_IP_ADDRESS  10,0,0,17
 * #define SUBNET_MASK        255,255,255,0 // REQUIRED for ESP8266_WIFI, ignored for others
 * #define GATEWAY_IP_ADDRESS 0,0,0,0      // REQUIRED for ESP8266_WIFI, ignored for others
 */
var Firmata = require("firmata").Board;
var EtherPortClient = require("etherport-client").EtherPortClient;

var board = new Firmata(new EtherPortClient({
  host: "192.168.1.103",
  port: 3030
}));

board.on("ready", function() {
  console.log("READY!");
  console.log(
    board.firmware.name + "-" +
    board.firmware.version.major + "." + board.firmware.version.minor
  );

  var state = 1;
  var lastVal = 0;

  // blink the LED on pin 13, as described above
  // (on a HUZZAH board the built-in blue LED is on pin 2 --
  // change the pin number here and below to match your wiring)
  this.pinMode(13, this.MODES.OUTPUT);

  setInterval(function() {
    this.digitalWrite(13, (state ^= 1));
  }.bind(this), 500);

  // this does not seem to be working - need to look into it
  // one other thing is ESP uses a 1V reference for analog so
  // once this works, it will need scaling
  this.analogRead(0, function(value) {
    if (value != lastVal) {
      console.log(value);
    }
  });
});
Step 6: Making It Work
So now, after flashing your ESP8266 board with the StandardFirmataWiFi firmware, we can run the basic Johnny-Five script.
use:
node "your_file_name".js
Change the file name according to the name of the file you just created.
That's it. You should see the LED attached to pin 13 blink.
Now you can make awesome projects from here. The possibilities are endless. Johnny-Five already has a lot of examples.
Again, a big shoutout to everyone who helped make this awesome feature possible. Sorry if I didn't give credit to anyone unintentionally.
I followed along and when I went to run my final file I got this:
SyntaxError: Invalid or unexpected token
at Object.exports.runInThisContext (vm.js:76:16)
Any Ideas?
Is Johnny Five used here? I don't see it in any of the code, is J5 really necessary?
Thanks for your instruction!
Step 5
"npm install"
was a bit irritating for me because of the received messages (on OSX):
[...]
[execSync v1.0.2] Native code compile failed!!
[...]
and with "npm install johnny-five" I received:
npm WARN install: Refusing to install johnny-five as a dependency of itself
but: IT WORKED!
nice. Maybe you can share how you dealt with the errors?
Can't wait to see what kinds of projects come out of this.
I didn't care about the errors. It worked anyway. | http://www.instructables.com/id/ESP8266-Firmata-J5-NodeBot/ | CC-MAIN-2017-43 | refinedweb | 1,478 | 56.76 |
Tony Hoare, a British computer scientist, invented the QuickSort algorithm in 1959.
The name "Quicksort" stems from the fact that it can sort a list of data elements substantially faster (often two or three times faster) than comparable sorting methods.
By the end of this tutorial, you will have a better understanding of the fundamental technicalities of the Quick Sort with all the necessary details along with practical implementations.
What Is the Quick Sort Algorithm?
Quicksort is a highly efficient sorting technique that divides a large data array into smaller ones. A vast array is divided into two arrays, one containing values smaller than the provided value, say pivot, on which the partition is based. The other contains values greater than the pivot value.
Now, look at the working of the Quick-Sort algorithm to understand the Quick Sort better.
How Does Quick Sort Work?
To sort an array, you will follow the steps below:
- You will make any index value in the array as a pivot.
- Then you will partition the array according to the pivot.
- Then you will recursively quicksort the left partition
- After that, you will recursively quicksort the correct partition.
Let's have a closer look at the partition bit of this algorithm:
- You will pick any pivot, let's say the highest index value.
- You will take two variables to point left and right of the list, excluding pivot.
- The left will point to the lower index, and the right will point to the higher index.
- Now you will move all elements which are greater than pivot to the right.
- Then you will move all elements smaller than the pivot to the left partition.
And this is how the QuickSort algorithm works. Now implement this algorithm through a simple C++ code.
How to Implement the Quick Sort Algorithm?
You will be provided with an array of elements {10, 7, 8, 9, 1, 5}. You have to write a code to sort this array using the QuickSort algorithm. The final array should come out to be as {1, 5, 7, 8, 9, 10}.
Code:
// C++ implementation of QuickSort
#include <bits/stdc++.h>
using namespace std;
// A utility function to swap two elements
void swap(int* a, int* b)
{
int t = *a;
*a = *b;
*b = t;
}
/* This function takes the final pivot element, puts the pivot element in an ordered array, and places all smaller elements on the left side of the pivot, as well as all larger elements on the right of the pivot. */
int partition (int arr[], int l, int h)
{
int pivot = arr[h]; // pivot
int i = (l - 1); // Index of smaller element and indicates the right position of pivot found so far
for (int k = l; k <= h - 1; k++)
{
// When the actual element is less than the pivot
if (arr[k] < pivot)
{
i++; // increment index of smaller element
swap(&arr[i], &arr[k]);
}
}
swap(&arr[i + 1], &arr[h]);
return (i + 1);
}
//A function to implement quicksort
void quickSort(int arr[], int l, int h)
{
if (l < h)
{
//pi is a partitioning index, and
//arr[p] is now in the correct location.
int pi = partition(arr, l, h);
// Separately sort elements before
// partition and after partition
quickSort(arr, l, pi - 1);
quickSort(arr, pi + 1, h);
}
}
/* Function to print an array */
void print_array(int arr[], int size)
{
int i;
for (i = 0; i < size; i++)
cout << arr[i] << " ";
cout << endl;
}
int main()
{
int arr[] = {11, 13, 16, 1, 3, 5, 9};
int n = sizeof(arr) / sizeof(arr[0]);
quickSort(arr, 0, n - 1);
cout << "Sorted array: \n";
printArray(arr, n);
return 0;
}
You have now explored the working of Quick Sort with a code. Now you will see some of the advantages of the Quick Sort.
What Are the Advantages of Quick Sort?
Let us discuss a few significant benefits of using Quick Sort and a few scenarios where Quick Sort is proven to be delivering the best performance.
- It is an in-place algorithm since it just requires a modest auxiliary stack.
- Sorting n objects takes only n (log n) time.
- Its inner loop is relatively short.
- After a thorough mathematical investigation of this algorithm, you can make a reasonably specific statement about performance issues.
What Are the Disadvantages of Quick Sort?
Despite it being the quickest algorithm, Quick Sort does a few downfalls. Let us address a few significant limitations that you should be considering before you implement Quick Sort in real-time.
- It is a recursive process. The implementation is quite tricky, mainly if recursion is not provided.
- In the worst-case scenario, it takes quadratic (i.e., n2) time.
- It is fragile in the sense that a slight error in implementation can go unreported and cause it to function poorly.
With this, you have come to an end of this tutorial. You will now look at what could be your next steps to master other sorting algorithms.
Advance your career as a MEAN stack developer with the Full Stack Web Developer - MEAN Stack Master's Program. Enroll now!
Next Steps
Your next stop in mastering data structures should be the selection Sort Algorithm. Using in-place comparisons, the selection sort algorithm divides the list into two parts, with the sorted half on the left and the unsorted half on the right.. In this web development course, you'll study Angular, Spring Boot, Hibernate, JSPs, and MVC, which will help you to get started as a full-stack developer.
If you have any questions or require clarification on this "Merge Sort Algorithm" tutorial, please leave them in the comments section below. Our expert team will review them and respond as soon as possible. | https://www.simplilearn.com/tutorials/data-structure-tutorial/quick-sort-algorithm?source=sl_frs_nav_playlist_video_clicked | CC-MAIN-2021-49 | refinedweb | 979 | 61.16 |
In this article we discuss recursion in Python programming. Recursion is a fundamental concept in Computer Science, and regardless of what your development goals are, it is good to have an understanding of at least the basics.
Topics covered:
- The basic concept of recursion
- What is a base case?
- Some examples of recursive algorithms
- Visualizing recursion
In terms of day-to-day development, the amount you use recursion will vary by context. Some developers may make little or no explicit use of it while for others it will be a mainstay. Regardless, recursion is part of the very fabric of computing, and even if you don’t use it explicitly in your everyday work, you can bet it’s happening a whole lot behind the scenes.
Here are some examples of where recursion is used in computing:
- traversing DOM elements
- processing recursively defined data such as that stored in trees
- command shells
- compilers and linkers
- evaluation of arithmetic expressions
- database systems
Recursion is so important and useful that almost all modern programming languages support it.
So what is recursion?
It’s probably best to look at an example first and then break it down to explain what is happening.
An Example of a Recursive Algorithm in Python
Type this code into a new Python file.
def countdown(n): if n <= 0: print("LIFTOFF!") else: print(n) countdown(n - 1) countdown(10)
Before you run it, have a think about what the output of this program might be. You can click below to see the solution.
10 9 8 7 6 5 4 3 2 1 LIFTOFF!
What is going on here? Even though it’s a simple program, it contains the fundamental ingredients of recursion:
Base case
A base case is essential with recursion. Without it there is no way for the algorithm to “know” when to stop. Not having one is like having a
while True loop – i.e. you get an infinite loop, except with recursion you will eventually hit your system’s maximum recursion limit. Here the base case is when
n <= 0.
Movement toward the base case
The algorithm must approach the base case on each successive call, otherwise it can not terminate. Again comparing this to a
while loop, not moving towards the base case is like not moving towards the condition for the while loop to exit. Each successive call here has
n - 1 as its argument so we are approaching the base case. This is good.
A recursive call
The simple yet powerful idea here is that the function definition contains a call to itself within its body. Did you notice that the function definition for
countdown() contains a call to the function
countdown()?
Stages of recursion
One key thing to understand about recursion is that there are two stages to a recursive algorithm. Before anything is returned from the initial function call, all the subsequent recursive function calls are made, until the base case is reached. At that point, the call stack (which contains a frame for each function call), begins to unwind, until a value for the initial function call is returned.
This is probably best illustrated visually. Look at this representation of a call to the
factorial(n) function, which calculates the product of decreasing values of
n and whose mathematical symbol is
!. For example
5! = 5 * 4 * 3 * 2 * 1
def factorial(n): if n == 1: return 1 else: return n * factorial(n-1) print(factorial(5))
Here’s what happens before the final value of
120 is returned and printed:
|-- factorial(5) | |-- factorial(4) | | |-- factorial(3) | | | |-- factorial(2) | | | | |-- factorial(1) | | | | | |-- return 1 | | | | |-- return 2 | | | |-- return 6 | | |-- return 24 | |-- return 120 120 >>>
factorial(5) calls
factorial(4) which calls
factorial(3) etc, until the base case is reached (
n == 1), then each of the function calls returns its value, in the reverse order to that in which they were called, until the value for the initial call
factorial(5) is returned.
We can use the same kind of diagram for our first example of a recursive algorithm,
countdown(n) although is is less clear what is happening since nothing (actually
None) is returned by each successive function call, as we are using
|-- countdown(5) 5 | |-- countdown(4) 4 | | |-- countdown(3) 3 | | | |-- countdown(2) 2 | | | | |-- countdown(1) 1 | | | | | |-- countdown(0) LIFTOFF! | | | | | | |-- return None | | | | | |-- return None | | | | |-- return None | | | |-- return None | | |-- return None | |-- return None None
How to Master Recursion in Python
Learners often find recursion confusing when they first encounter it. This is completely normal. Recursion has the paradoxical quality of being both very simple and intuitive on the one hand, and seemingly confusing and complex on the other. The way to gain confidence and competence with the topic is by looking at lots of examples of recursive algorithms and, more importantly, writing them for yourself. You may also have to spend a bit of hard thinking time wrapping your head around what is happening. Having a whiteboard handy may help as you trace a particular function call and try to anticipate what happens next. Don’t be discouraged if it takes a while for your understanding of recursion to grow. It is well worth the effort!
Happy computing! | https://compucademy.net/recursion-in-python-programming/ | CC-MAIN-2022-27 | refinedweb | 868 | 59.84 |
Step-by-Step Guide to Using Generic ADO.NET
Those of you who know what lies behind frameworks such as 'Entity Framework' and 'N-Hibernate' will undoubtedly be going 'Why'? For those of you who don't, well... stay tuned, this might get just a little bit interesting.
With all the ORMs floating about today, the modern day .NET developer is spoiled for choice.
You have EF, NHibernate, Massive, Simple.Data and about a zillion others, and don't get me wrong, it's great, never has it been so easy to deal with a database at object level.
For all you line of business app guys out there, this is pretty much all you need to know.
Object goes in, object comes out; pretty simple right?
However, on more than one occasion I've seen developers getting really upset, because their ORM is slow, or too limiting, and I'll be one of the first to actually sit here and sympathize with you, why?
Well if any of you know me at all, you'll know I do a *LOT* of GIS based work.
Not just your hey let's grab a location and plot a Google map of it on a site type work, but real in depth stuff like terrain analysis, behavioural tracking and a ton of other stuff.
For most of this work I quite often have to employ the use of PostgreSQL as a database server.
Postgres, is a fantastic bit of software (If you've never tried it I absolutely recommend you do), but because it's not .NET or designed for .NET it can be a bit of a challenge to get it working.
Currently there is a build of 'Npgsql' that allows you to use Postgres with Entity Framework, but in my honest opinion, it still needs a lot of work done on it, before it's ready.
Because my applications often need to do things that are either difficult or not at all possible for a lot of ORMs, I very often have to resort to using plain old ADO.NET code, and that's what I'm about to show you here.
Underneath all of these various ORMs available, is still the same generic ADO.NET runtime, as we will be using. What most of these ORMs give us, are things like change tracking, repository patterns, unit of work patterns. And they do things like use reflection on our objects, and map the property names to table columns automatically.
All of this changes when you get down to the ADO level, because you now have to implement all of this yourself.
You might however be quite surprised at just how simple all of this actually is.
Step1 - Connecting
The first thing you need is a connection string. Most of you will be familiar with defining these in your web.config or app.config files, and if that's what you want to do you still can. There is however, another way, using a connection string builder. You can create a connection string builder as follows:
DbConnectionStringBuilder csb = new DbConnectionStringBuilder();
Once you've done this, you can then populate your string using the various properties available:
csb.Add("Host", "MyServer"); csb.Add("Database", "TheDatabase");
As you can see, this is not really an intuitive way of doing it, which is why you often don't use the generic 'DbXXXXX' methods that are available; instead you install an adapter layer such as 'NpgSql' that I mentioned before, or things like the official 'Sql Server' assemblies.
Once you add those, you then get to use something similar to the following (This is for Postgres):
NpgSqlConnectionStringBuilder csb = new NpgSqlConnectionStringBuilder() csb.Database = "TheDatabase"; csb.Host = "MyServer";
Exactly what nodes are available, depends entirely on the adapter used, but many are the same or follow the same naming pattern. In particular all of the connection string builders will have a 'ConnectionString' property, and that moves us onto our next part.
Once you have a connection string, you then need a connection to the server, and you do this by using the connection object as follows:
using(DbConnection conn = new DbConnection(csb.ConnectionString)) { conn.Open(); ... conn.Close(); }
Again, I've used the generic versions, just to show you the sequence needed, as with the connection string builder, you'd substitute these for your adapter specific version eg: 'SqlDbConnection'.
Step 2 - Running a Command
Once you've opened your connection, you'll then want to run SQL commands against it. To do this, you need a 'DbCommand' object, and some SQL for it to run, something similar to the following:
string sql = "select * from mytable"; using(DbCommand cmd = new DbCommand(sql, conn)) { ... }
Word of warning, the generic version doesn't actually take any parameters, so again, just like previously you will need to substitute this for your adapter version. If you copy and paste the example above, it will fail to compile unless you modify it.
If you want to run a call to a stored procedure or function, you need to set the command type property on your command from 'Text' to 'Stored Procedure' then specify the name of the stored proc, in place of the SQL string shown above.
Once you have your command object, you then have a choice of three different execution strategies:
cmd.ExecuteScalar(); cmd.ExecuteNonQuery(); cmd.ExecuteReader();
Execute scalar is used primarily for executing functions, where the result is generated from the first column of the first row of any result set, or in some cases by the return value from a given function or stored procedure.
Execute non Query is typically used to run updates, inserts and DDL based SQL (such as that for creating a table). Basically anything that's not expected to return a result set should be run using the non query version.
If your query is expected to return data, for example a select query, then you need to use Execute Reader.
Step 3 - Reading the Data
Execute reader returns a data reader object as shown below (again using the generic version as we have done throughout this article):
string sql = "select * from mytable"; using(DbCommand cmd = new DbCommand(sql, conn)) { using(DBDataReader dr = cmd.ExecuteReader()) { ... } }
The properties and methods on your Data Reader can then be used to sequentially get the rows from your database, and pick the values out into your own objects, for example:
string sql = "select * from mytable"; using(DbCommand cmd = new DbCommand(sql, conn)) { using(DBDataReader dr = cmd.ExecuteReader()) { while(dr.Read()) { string Name = (string)dr[0]; int Age = (int)dr[1]; } } }
If you've named the columns in your query, then you can specify the name of the column directly in the call to the data reader EG: '(string)dr["name"]' rather than using the integer index and having to know the result set order.
There are also a number of ways you can make sure the data is in the correct format, from the methods in the 'System.Convert' name space right through to actually using 'TryParse', 'Parse' and other methods, I just used casting for simplicity in the example.
In the event that your query may or may not have rows, you can decide to go into a reading loop by using the boolean result from 'dr.HasRows' and a simple if statement:
if(dr.HasRows) { ... read here ... }
And that's pretty much how simple it is, but what if you wish to make a class that's generic and can be connected to any database adapter?
Well, all of the generic objects shown above, can all use their interface version, so 'DbConnection' becomes 'IDbConnection', 'DbCommand' becomes 'IDbCommand' and so on.
Since all database adapter layers inherit from each of these interfaces, using those types in place of an NpgSql or other Db object, will allow you to write a set of simple, customised generic functions, that you can easily just then inject a database adapter driver into using some kind of IoC based pattern.
There's a lot more that can be accomplished under the hood but the basic pattern shown above will get you 90% of the way. To close of this post, here's a full example of reading using the official SQL Server layer 'System.Data.SqlClient':
using System; using System.Data; using System.Data.SqlClient; class Program { static void Main() { string connectionString = "Data Source=MyServer;Initial Catalog=MyDatabase; Integrated Security=true"; using (SqlConnection conn = new SqlConnection(connectionString)) { conn.Open(); string sql = "select name, email from users"; using(SqlCommand command = new SqlCommand(sql, connection)) { SqlDataReader dr = command.ExecuteReader(); while (dr.Read()) { Console.WriteLine("Name: {0}\tEmail: {1}",dr[0], dr[1]); } dr.Close(); } } } }
One thing I've left out of this is using parameters to insert data. This can easily take a full post itself to do however, so I'll cover that another time, but please for the sake of security, if you're using ADO.NET to insert data, do not use plain string concatenation to do so; this will likely lead to lots of SQL injection attacks if you do.
Instead, the DbCommand object has a parameters collection, use that to add your parameters, then mark those parameters up in your SQL string using '@param'. I can't stress this any more importantly, unless you know 100% where the data for your parameters are coming from, using string concatenation is a really, really bad idea.
If there's anything you'd like to see in this column, you can generally find me on Twitter as @shawty_ds or in the Linked.NET (Lidnug) users group on Linked-In that I help run. Feel free to come along and say hi, and if you think I can improve on anything, let me know. These small articles are for your .NET toolbox, so let me know what tools you need, and I'll see what I can do.
There are no comments yet. Be the first to comment! | http://www.codeguru.com/columns/dotnet/step-by-step-guide-to-using-generic-ado.net.htm | CC-MAIN-2015-18 | refinedweb | 1,663 | 60.14 |
The following form allows you to view linux man pages.
#include <stdarg.h>
void va_start(va_list ap, last);
type va_arg(va_list ap, type);
void va_end(va_list ap);
void va_copy(va_list dest, va_list src); func-
tion invo-
cation of va_end() in the same function. After the call va_end(ap) the
variable ap is undefined. Multiple traversals of the list, each brack- invoca-
tion of va_end() in the same function. Some systems that do not supply
va_copy() have __va_copy instead, since that was the name used in the
draft proposal.
Multithreading (see pthreads(7))
The va_start(), va_arg(), va_end(), and va_copy() macros are thread-
safe.
The va_start(), va_arg(), and va_end() macros conform to C89. C99
defines the va_copy() macro.
These macros are not compatible with the historic macros they replace.
A backward-compatible version can be found in the include file
<varargs.h>.
The historic setup is:
#include <varargs.h>
void
foo(va_alist)
va_dcl
{
va_list ap;
va_start(ap);
while (...) {
...
x = va_arg(ap, type);
their arguments on to a function that takes a va_list argument, such as
vfprintf(3).
The function foo takes a string of format characters and prints out the
argument associated with each format character based on the type.
);
}
2013-12-10 STDARG(3)
webmaster@linuxguruz.com | http://www.linuxguruz.com/man-pages/va_copy/ | CC-MAIN-2017-43 | refinedweb | 206 | 67.25 |
Hello all,
I’m having trouble coding using TimerOne. Is there an easier way to step out of my loop without altering the timing to execute a One-Shot style timer?
Here is the concept:
Run a routine in the main loop to read a sensor. when an external button is pressed it turns on an LED for 4 seconds, then turns off. It seems simple but Im having issues. Any help would be appreciated.
Not My Code below, Im altering it to work for my purposes
#include <TimerOne.h> void setup() { // Initialize the digital pin as an output. // Pin 13 has an LED connected on most Arduino boards pinMode(13, OUTPUT); Timer1.initialize(100000); // set a timer of length 100000 microseconds (or 0.1 sec - or 10Hz => the led will blink 5 times, 5 cycles of on-and-off, per second) Timer1.attachInterrupt( timerIsr ); // attach the service routine here } void loop() { // Main code loop // TODO: Put your regular (non-ISR) logic here } /// -------------------------- /// Custom ISR Timer Routine /// -------------------------- void timerIsr() { // Toggle LED digitalWrite( 13, digitalRead( 13 ) ^ 1 ); }
the code attached has functions such as Timer1.stop, Timer1.start… but not all work, I tried different versions of the TimerOne library with the same problem.
There may be an easier way to accomplish and the accuracy needed is low so that is not an issue. I have attached the library for reference.
Thanks,
TimerOne-r11.zip (26 KB) | https://forum.arduino.cc/t/one-shot-timer-with-uno/276393 | CC-MAIN-2022-40 | refinedweb | 236 | 74.39 |
Python Interview Questions
A high-level, interactive and object-oriented scripting language, Python is a highly readable language that makes it ideal for beginner-level programmers. Here we can help you to prepare for the best Python interview questions. It uses English keywords and has fewer syntactical constructions as compared to other languages. Similar to PERL and PHP, Python is processed by the interpreter at runtime. Python supports the Object-Oriented style of programming, which encapsulates code within objects.
Best Python Interview Questions And Answers
A high-level, interactive and object-oriented scripting language, Python is a highly readable language that makes it ideal for beginner-level programmers. It uses English keywords and has fewer syntactical constructions as compared to other languages.
from random import shuffle
x = ['My', 'Singh', 'Hello', 'India']
shuffle(x)
print(x)
The output of the following code is as below.
['Singh', 'India', 'Hello', 'My']
A Flask is a micro web framework for Python based on the "Werkzeug, Jinja 2 and good intentions". Werkzeug and jingja are its dependencies. Because a Flask is part of the micro-framework, it has little or no dependencies on the external libraries. A Flask also makes the framework light while taking little dependency and gives fewer security bugs.
- Lists are mutable. Tuples are immutable.
- Lists are slower. Tuples are faster.
- List Syntax:
list_1 = [10, ‘Chelsea’, 20]. Tuples Syntax:
tup_1 = (10, ‘Chelsea’ , 20)
Python is an interpreted language. It runs directly from the source code and converts the source code into an intermediate language. This intermediate language is translated into machine language and has to be executed.
The process of picking can be defined as: Pickle, which is a module, accepts an object, converts it into a string, and dumps into a file using dump function.
The process of retrieving Python objects from the stored string is called unpickling.
copy.copy ()or
copy.deepcopy()for copy an object.
Everything in Python is like an object. All variables hold different references to the objects. The values of references are as per their functions. As a result, the programmer cannot change the value of the references. However, he can change the objects if they are mutable.
A session allows the programmer to remember information from one request to another. In a flask, a session uses a signed cookie so that the user can look at the contents and modify. The programmer will be able to modify the session only if it has the secret key Flask.secret_key.
Lambda is an anonymous expression function that is often used as an inline function. Its form does not have a statement as it is only used to make new functional objects and then return them at the runtime.
The module is a way to structure a program. Each Python program is a module, which imports other modules such as objects and attributes. The entire folder of the Python program is a package of modules. A package can have both modules or subfolders.
If you assign a new value to a variable anywhere within the function's body, it is assumed to be local. The variables that are referenced inside a function are known as global.
Memory is managed by the private heap space. All objects and data structures are located in a private heap, and the programmer has no access to it. Only the interpreter has access. Python memory manager allocates heap space for objects. The programmer is given access to some tools for coding by the core API. The inbuilt garbage collector recycles the unused memory and frees up the memory to make it available for the heap space.
Indexing Python sequences in both positive and negative numbers are possible. For the positive index, 0 is the first index, 1 is the second index and so forth. For the negative index, (-1) is the last index and (-2) is the second last index and so forth.
Pass means where there is a no-operation Python statement. It is just a placeholder in a compound statement where nothing needs can be written. The continue makes the loop to resume from the next iteration.
Python does not support the unary operators; rather, it supports augmented assignment operators.
The arithmetic operators it supports are as follows-
- Addition- '+'
- Subtraction- '-'
- Multiplication- '*'
- Division- '/:
- Modulo division- '%'
- Power of- '**'
- Floor div- '//'
In order to make the Python coding more relatable, PEP 8 is the coding convention. It is also a set of recommendations that are used to make the codes executable and modular.
Method resolution order or MRO refers to when one class inherits from multiple classes. The class that gets inherited is the parent class and the class that inherits is the child class. It also refers to the order where the base class is searched while executing the method.
This function returns to a printable presentation for the given object. It takes a single object & its syntax is repr(obj). The function repr computes all the formal string representation for the given object in Python.
Both lists and arrays in Python can store the data in the same way.
The difference is-
It refers to the method which adds a certain value to the class. It can’t be initiated by the user rather only occurs when an internal action takes charge. In python, the built-in classes define a number of magic methods.
The entity that changes the data types from one form to another is known as typecasting. In programming languages, it is used to make sure the variables are processed in the correct sequence by the function.
E.g., while converting an integer to string.
- Extensive Support Libraries
- Extensive Integration Features
- Improves Programmer's Productivity
- Platform Independent
The built-in method which decides the types of the variable at the program runtime is known as type() in Python. When a single argument is passed through it, then it returns given object type. When 3 arguments pass through this, then it returns a new object type.
Python 3.8.2
- Similar to PERL and PHP, Python is processed by the interpreter at runtime. Python supports
In total, there are 33 keywords in Python. It is important to know them all in order to know about their use so we can utilize them. In additon, while we are naming a variable, the name cannot be matched with the keywords. This is another reason to know all the keywords.
For performing Static Analysis, PyChecker is a tool that detects the bugs in source code and warns the programmer about the style and complexity. Pylint is another tool that authenticates whether the module meets the coding standard.
Decorators are specific changes that we make in syntax to alter functions.
Dict and List are syntax constructions that ease the creation of a Dictionary or List based on iterables.
In Python, every name has a place where it lives and can be tied to. This is called a namespace. A namespace is like a box where a name is mapped with the object. Whenever the variable is searched, this box will also be searched in order to find the corresponding object.
A Flask is a microframework build for small applications with more straightforward requirements. Flask comes ready to use.
Pyramids are built for larger applications. They provide flexibility and allow the developer to use the right tools for their projects. The developer is free to choose the database, templating style, URL structure, and more. Pyramids is configurable.
Similar to Pyramids, Django can be used for larger applications. It includes an ORM.
A thread is a lightweight process. Multithreading allows the programmer to execute multiple threads in one go. The Global Interpreter Lock ensures that a single thread performs at a given time. A thread holds the GIL and does some work before passing it on to the next thread. This looks like parallel execution, but actually, it is just threading taking turns at the CPU.
Python supported 5 data types.
- Numbers
- String
- Tuple
- Dictionary
- List
import random
random.random
- Select the URL you want to scrap
- Inspect the page
- Select data you want to extract
- Write the codes and run them
Once the data is extracted store the data in any required format
The way of using the operating system dependent functionalities is an OS module. Through this function, the interface is provided with the underlying operating system for which Python is running on.
DeQue module is a segment of the collection library that has a feature of addition and removal of the elements from their respective ends.
It is a Python library used to optimize, define, and execute the mathematical expressions including multidimensional arrays.
In order to get the ASCII values in Python, you have to type a program. The function here will get the int value of char. This program must be in ord function() to return the value.
Example
>>> ord('a')
97
>>> chr(97)
'a'
>>> chr(ord('a') + 3)
'd'
>>>
There are two categories of ‘types’ present in Python, which is mutable and immutable.
Mutable built-in types
- List
- Dictionary
- Set
Immutable built-in type
- String
- Number
- Tuple
In Python '//' operator is a floor Division operator. It is used to distinguish operands with their result as quotient representing the digits before the decimal point.
E.g. 10//5=2
10.5//5.0=2.0
Python iterators are used to traverse the elements or any collection for the specific implementation. In Python, they also regulate the iterator protocol and contain specific values.
Key points
- Similar to PERL and PHP, Python is processed by the interpreter at runtime. Python supports the
If you are looking for a job as a Python developer, we have a vast collection of Python interview questions.
Advantages
- Extensive Support Libraries
- Extensive Integration Features
- Improves Programmer's Productivity
- Platform Independent
Disadvantages
- Weak in Mobile Computing
- Slow Speed | https://www.bestinterviewquestion.com/python-interview-questions | CC-MAIN-2020-16 | refinedweb | 1,638 | 57.27 |
I'm trying to convert a javascript array into a generic list. I'm having trouble with adding items in the definition.
In javascript I have var myarray=New Array("walk","run" etc...);
for generic list I would do
import System.Collections.Generic;var myarray=new List.("walk","run" etc...);
gives an error that the constructor cannot take String,String.
Is there away to add items in the definition without having to use myarray.Add("walk");
Thanks,
Answer by Eric5h5
·
Feb 27, 2012 at 05:29 AM
var aList = new List.<String>(["walk", "run", "etc."]);
Although technically that's creating a fixed-size array and using the array to initialize the List. Same difference really.
how would I do aList of lists then
var aList=new List.
It seems to be deleting my tagstringtag
You might be better off with a 2D array:
var array2D = new String[100, 100];
Otherwise,
var aList = new List.<List.<String> >();
aList.Add(new List.<String>(["a", "b", "c"]));
print (aList[0][0]);
Note the space in the > > part; that's actually necessary for some reason.
> >
I've seen some people taking built in arrays and then converting them into javascript arrays and back again if they want to push something or removeAt or use the nice javascript functions. Is that going to be slower than using the .NET List?
You can always convert Lists to built-in arrays and back; there's no reason to use Javascript Array. Mostly you wouldn't need to; Lists are much faster than Javascript.
Iterate through Variables in a Script and adding them to a Generic List
1
Answer
Remove and Add to List By Name aad
1
Answer
A node in a childnode?
1
Answer
Add to generic list on start problem
1
Answer
How to select certain Elements in a list using LINQ-methods?
3
Answers | https://answers.unity.com/questions/221476/javascript-array-to-generic-list.html | CC-MAIN-2019-51 | refinedweb | 310 | 74.39 |
Python Pandas In 5 Mins — Part 2
Bhavani Ravi
Originally published at
hackernoon.com
on
・4 min read
Python Pandas In 5 Mins— Part 2
Use cases open up more functionalities
In the last blog, I hope I have sold you the idea that Pandas is an amazing library for quick and easy data analysis and it’s much easier to use than you thought.If you have not read my first blog about Pandas, please go through it before you move forward.
Oops !! We missed Some Data
In the last blog, we saw basic Dataframe operations using sample sales data. Let’s assume you are a manager leading a sales team, and you were all happy about the sales trajectory and the pivot representation of the data you learned to create from ourlast blog.
import numpy as np df.pivot\_table(index=["Country"], columns=["Region"], values=["Quantity"], aggfunc=[np.sum])
That’s when you realize you have missed sales data of a particular quarterbecause it was lost in one of the spreadsheets. Now, what do you do? You already have a report ready to go. How can you incorporate the new data into the current pivot representation without major changes?
If you see, the pivot table is constructed with a single Dataframe df, somehow if we can find a way to feed our new data into the df then we can just re-run the pivot code and voila!! we will get the report again.
So here are the steps we are going to follow,
1. Load the new spreadsheet data into a new Dataframe
df2 = pd.read\_csv("data/Pandas - Q4 Sales.csv") df2.head()
2. Combine two Dataframe into a single df object,
Using concat
Pandas Concat method concatenates the contents of multiple Dataframes and creates a new Dataframe.
The axis param of the method enables you to concatenate data along rows or columns
result\_df = pd.concat([df, df2], axis=0, sort=False) # axis = 0, append along rows, # axis = 1, append along cols result\_df.tail() # tail is similar to head returns last 10 entries
Using append
Unlike concat , the append method adds up data to an existing dataframe instead of creating a new Dataframe. Also, you can notice that we don’t supply any axis parameter here since append method only allows adding new entries as rows.
result\_df = df.append([df2],sort=False) result\_df.tail()
If you take a closer look, in both cases, the data frames that need to be combined are supplied as a python list [df1, df2]. This implies that we can combine as many Dataframes as we want
3. Re-run the pivot code
pivot = result\_df.pivot\_table(index=["Country"], columns=["Region"], values="Quantity")
Charts are better than tables
You have a couple of hours for your final meeting. Your presentation is concrete, your sales are good but still, something is missing. Charts. For a management person who was so used to spreadsheets charts, leaving them behind is not a good idea. But, we have a short time to go back to spreadsheets, don’t we? Worry not, Pandas comes with a built-in charting framework which lets you draw graphs of our pivot representation
Perfection
As a person who was known for your perfection something doesn’t sit well in you. One of the tabular representations that you have created has unnecessary information that doesn’t interest your management, and a couple of columns have names that are used internally in your company and will not ring any bell to the management.
Worry not, we can do it all in one shot and pretty quick. In pandas terms, we call this method chaining.
Method chaining enables you to perform a various transformation on the same data without storing the intermediate result.
- Explicit is better than implicit hence let’s rename “Total” to “Total Sales”
- We don’t need the date of purchase just the year and quarter
- We don’t need the requester of purchase, Salesperson, and Date of purchase. So let’s drop it.
result\_df.rename({"Total": "Total Sales"}, axis=1)\ .assign(Quarter=result\_df['Date of Purchase'].dt.quarter, \ Year=result\_df['Date of Purchase'].dt.year) \ .drop(["Requester", "Sales Person", "Date of Purchase"], axis=1).head()
One Last Thing
With that, our final report looks good and guess what? Your management is not only happy about your sales this year but also excited about your new found love for Pandas, but there is just one last thing remaining, you need to send the final data as a CSV back to your management. But worry not we have pandas to do it for you.
result\_df.to\_csv(path\_or\_buf="Export\_Data.csv")
An “Export_Data.csv” file would be created in your current path which you can happily send to your management as an email attachment.
As you rest back on your seat, you want to automate the pandas experiment that you just did for the future sales reports. Thankfully, you have an intern who is joining you in a couple of days. It will be a great project for him to pick it up. Something in me tells that things aren’t going to be as easy as it was for you. which we will see in the next blog “What’s wrong with Pandas?”
Did the blog nudge you to deep dive into pandas?
Hold the “claps” icon and give a shout on [_twitter]()._
Follow to stay tuned on future blogs.
Thanks for stopping by ❤️
Investing in the right technologies to avoid technical debt
How patience can help you avoid jumping on the wrong tech.
| https://practicaldev-herokuapp-com.global.ssl.fastly.net/bhavaniravi/python-pandas-in-5-minspart-2-1f2p | CC-MAIN-2019-22 | refinedweb | 945 | 71.85 |
C# Refactoring and FailurePublished on
In the first lecture in an programming class at the University of Michigan, it is taught that it is highly improbable for one to write code that is simple, fast, and elegant. Instead we should strive for one of the categories or two if we’re ambitious.
- Simple:
- few lines of code using basic constructs
- Code can be digested in a short amount of time
- Fast:
- Code has been optimized to run as fast possible either in asymptotic complexity or runtime
- Elegant:
- Code is idiomatic of the language
- Code is beautiful and artful
Being the perfectionist that I am, I strive for my code to reflect all three of those goals. This can be seen in my project Pdoxcl2Sharp. I would describe it as elegant and fast, though far from simple. Since two out of three is bad, I decided to refactor a couple of functions.
Before:
private byte readByte() { if (currentPosition == bufferSize) { if (!eof) bufferSize = stream.Read(buffer, 0, BUFFER_SIZE); currentPosition = 0; if (bufferSize == 0) { eof = true; return 0; } } return buffer[currentPosition++]; }
After:
private IEnumerable<byte> getBytes() { byte[] buffer = new byte[BUFFER_SIZE]; int bufferSize = 0; do { bufferSize = stream.Read(buffer, 0, BUFFER_SIZE); for (int i = 0; i < bufferSize; i++) yield return buffer[i]; } while (bufferSize != 0); eof = true; yield break; }
Benefits:
- Fewer lines of code
- Simplicity
- Improved elegance
- Variables defined at the class scope are moved to the functional level
The new code excites me. However, being anal, I profiled the two methods against each other. The new method was twice as slow. Visual Studio said that most of the time was being spent incrementing the enumerator. *Sigh*, an hour of my time wasted.
Why was the new method slower? First off, the actual number of comparisons hasn’t decreased, the conditional simply moves from encompassing additional logic to being used as a loop conditional. If the number of comparisons are equivalent, then what else could be causing the slowdown. I dug through the C# Specification and Jon Skeet’s, C# in Depth but I couldn’t find anything interesting relating to performance. On closer inspection, one can make the logical hypothesis that the refactored is actually more complex, as the read byte is yielded into another collection. The C# compiler abstracts away the complexity by doing all the heavy lifting of setting up an iterator class.
To set up the next refactor. When reading the file the paraser will label bytes if they are important, or leave them as “Untyped” if they aren’t.
Before:
enum LexerToken { Equals, Quote, Untyped } private static LexerToken getToken(byte c) { switch (c) { case EQUALS: return LexerToken.Equals; case QUOTE: return LexerToken.Quote; default: return LexerToken.Untyped; } }
After:
enum LexerToken { Equals = EQUALS, Quote = QUOTES, Untyped } private static LexerToken getToken(byte c) { switch (c) { case EQUALS: case QUOTE: return (LexerToken)c; default: return LexerToken.Untyped; } }
Benefits:
- Less code
- Some might argue that it appears more simple
Like the previous example and by the title of the this post, one might guess the results. Believe it or not, the refactored version was slower by about 10-15%. I have no idea why the slowdown, as I refuse to believe that casting could cause a significant bottleneck.
In the end, after about two hours of refactoring and profiling, and an hour spent writing this post, the code appears the same. Some might be discouraged by this fact, but I feel that this reassures the fact that I write high performance code.
This kind of micro-optimization should not be pursued in the general case. I was profiling using a 10MB file and the original method ran in .135 seconds, switch refactoring ran in .17 seconds, and read refactoring ran in .24 seconds. As can be imagined, the file was in memory. | https://nbsoftsolutions.com/blog/c-refactoring-and-failure | CC-MAIN-2019-51 | refinedweb | 630 | 63.29 |
I've noticed that MSVS 2005 seems to get confused when you use function names that have been defined as macros at some point prior to your declaration.
I have a hash table class that has a function called GetObject. It is very clearly GetObject in the code but 2005 is complaining about GetObjectA not being a part of HashTable.
To get around it I did this at the top of the cpp using HashTable:
#undef GetObject
Is this a bug? Shouldn't the compiler see that my GetObject() is a this call and requires an object of type HashTable and that my GetObject() is not called GetObjectA()?
This link applies to C# but I think it answers my question:
It's not a bug in the compiler but is caused by windows.h polluting the global namespace. It has a macro that turns GetObject into GetObjectA or GetObjectW. | http://cboard.cprogramming.com/cplusplus-programming/123032-msvc-2005-bug-printable-thread.html | CC-MAIN-2016-36 | refinedweb | 149 | 70.02 |
I have looked at a lot of of the AJAX autocomplete options available, but most of them fall short in one way or another.
Your options were either:
You could not do both...until now. Try out a live demo of the solution on my website.
This is made possible by the Select2
jQuery autocomplete plugin. Although it has very good documentation I still had some trouble getting it up and running in a .NET MVC environment. To make this a little easier to get started with (and to encourage more people to use this fantastic plugin), I decided to put together a sample MVC project that uses this solution.
This article assumes a familiarity with jQuery and with .NET MVC. If you are accessing someone else's
JSON service, you just need the jQuery experience, but if you need to create the front and back end it would help to be familiar with .NET MVC as well.
This project was built using Visual Studio 2012 Express. You can download Visual Studio Express for free if you do not have it.
Select2 settings and code examples can be found on their site.
The demo project requires NuGet to be installed, since it re-downloads the required dependencies for the project to run.
The code is also available for download on GitHub.
To get started, download the latest version of
jQuery as well as the
CSS and JavaScript for Select2. Both of these can also be downloaded via NuGet inside Visual Studio.
<link href="~/Content/css/select2.css" type="text/css" rel="stylesheet" />
<script src="~/Scripts/jquery-2.0.3.js"></script>
<script src="~/Scripts/select2.js"></script>
To get this working on the front end, you will need:
The core of the
JavaScript is here:
//The url we will send our get request to
var attendeeUrl = '@Url.Action("GetAttendees", "Home")';
var pageSize = 20;
$('#attendee').select2(
{
placeholder: 'Enter name',
//Does the user have to enter any data before sending the ajax request
minimumInputLength: 0,
allowClear: true,
ajax: {
//How long the user has to pause their typing before sending the next request
quietMillis: 150,
//The url of the json service
url: attendeeUrl,
dataType: 'jsonp',
//Our search term and what page we are on
data: function (term, page) {
return {
pageSize: pageSize,
pageNum: page,
searchTerm: term
};
},
results: function (data, page) {
//Used to determine whether or not there are more results available,
//and if requests for more data should be sent in the infinite scrolling
var more = (page * pageSize) < data.Total;
return { results: data.Results, more: more };
}
}
});
And our html textbox where we will store the data looks like this:
<input id="attendee" name="AttendeeId" type="text" value="" />
That's really all that is needed on the front end to get this going. Things get a little more involved if you have to setup the backend as well, but it's nothing we can't handle.
We need to send the data from our backend to Select2 in a certain format. Basically we need to send an id and text value for each result returned. We also need to return the total count of all the results that are returned.
This is so when we start scrolling through the data Select2 knows if it needs to keep requesting more data or if we are at the end of our results.
So the classes we'll create to hold our results look like this:
public class Select2PagedResult
{
public int Total { get; set; }
public List<Select2Result> Results { get; set; }
}
public class Select2Result
{
public string id { get; set; }
public string text { get; set; }
}
Our method that returns the JSON is going to go to our data store, get any attendees that match the search term typed into the dropdownlist, and is then going to convert that data into the
Select2PagedResult class, which it will then send back to the browser as our
JSON result.
Select2PagedResult
[HttpGet]
public ActionResult GetAttendees(string searchTerm, int pageSize, int pageNum)
{
//Get the paged results and the total count of the results for this query.
AttendeeRepository ar = new AttendeeRepository();
List<Attendee> attendees = ar.GetAttendees(searchTerm, pageSize, pageNum);
int attendeeCount = ar.GetAttendeesCount(searchTerm, pageSize, pageNum);
//Translate the attendees into a format the select2 dropdown expects
Select2PagedResult pagedAttendees = AttendeesToSelect2Format(attendees, attendeeCount);
//Return the data as a jsonp result
return new JsonpResult
{
Data = pagedAttendees,
JsonRequestBehavior = JsonRequestBehavior.AllowGet
};
}
All the code to get test data from our data store, search through the first and last names in the last, and convert the results to a Select2PagedResult class is included in the sample project.
Is it worth going through all this effort? If you have small lists of data, probably not. For lists with 100 items or less in them, the default
JavaScript filtering in Select2 is fast enough and is much easier to setup (just a few lines of code).
In my demo example, the code is filtering a list with 1000 attendees quickly and easily, since all the heavy lifting is done by the server and the
JavaScript only has to display 20 results at a time. In cases like this, setting up Select2 with a remote data source will be worth the time you put. | http://www.codeproject.com/Articles/623566/Select2-The-Ultimate-jQuery-Autocomplete | CC-MAIN-2014-35 | refinedweb | 859 | 59.23 |
We of the graph.
We use vector in STL to implement graph using adjacency list representation.
- vector : A sequence container. Here we use it to store adjacency lists of all vertices. We use vertex number as index in this vector.
The idea is to to represent graph as an array of vectors such that every vector represents adjacency list of a vertex. Below is complete STL based C++ program for DFS Traversal.
// A simple representation of graph using STL, // for the purpose of competitive programming #include<bits/stdc++.h> using namespace std; // A utility function to add an edge in an // undirected graph. void addEdge(vector<int> adj[], int u, int v) { adj[u].push_back(v); adj[v].push_back(u); } // A utility function to do DFS of graph // recursively from a given vertex u. void DFSUtil(int u, vector<int> adj[], vector<bool> &visited) { visited[u] = true; cout << u << " "; for (int i=0; i<adj[u].size(); i++) if (visited[adj[u][i]] == false) DFSUtil(adj[u][i], adj, visited); } // This function does DFSUtil() for all // unvisited vertices. void DFS(vector<int> adj[], int V) { vector<bool> visited(V, false); for (int u=0; u<V; u++) if (visited[u] == false) DFSUtil(u, adj, visited); } // Driver code int main() { int V = 5; vector<int> adj[V]; addEdge(adj, 0, 1); addEdge(adj, 0, 4); addEdge(adj, 1, 2); addEdge(adj, 1, 3); addEdge(adj, 1, 4); addEdge(adj, 2, 3); addEdge(adj, 3, 4); DFS(adj, V); return 0; }
Output :
0 1 2 3 4
Below are related articles:
Graph implementation using STL for competitive programming | Set 2 (Weighted graph)
Dijkstra’s Shortest Path Algorithm using priority_queue of STL
Dijkstra’s shortest path algorithm using set in STL
Kruskal’s Minimum Spanning Tree using STL in C++
Prim’s algorithm using priority_queue in STL
This article is contributed by Shubham | http://www.geeksforgeeks.org/graph-implementation-using-stl-for-competitive-programming-set-1-dfs-of-unweighted-and-undirected/ | CC-MAIN-2017-17 | refinedweb | 310 | 52.39 |
CentOs 6 64-bits
Run this following macro
#include <TGraph.h>
#include <TAxis.h>
using namespace std;
void test() {
float x[4] = {1,2,3,4};
float y[4] = {2,4,6e36,8};
TGraph *g = new TGraph(4,x,y);
g->SetTitle("test");
g->GetXaxis()->SetTimeDisplay(true);
g->Draw();
}
A graph appears with an X-axis shown as a time. Click right and select "setlogy"
See that the x-axis is not shown as a time.
Note that this bug also affects ROOT 5.34/34
It looks like the way it is implemented produces this effect. When you set the log scale the axis are recreated, with the defaults attribute. The default is SetTimeDisplay(false); that's why the time display goes away. I you set the log before the time display then it works. Now I have to see why/where the axis are recreated and check if the time attribute can be kept... need some time.
Fixed.
Thanks for reporting. | https://sft.its.cern.ch/jira/si/jira.issueviews:issue-html/ROOT-7766/ROOT-7766.html | CC-MAIN-2020-10 | refinedweb | 163 | 85.39 |
I'm aware that there's no field in the record format that explicitly relates to co-ownership.
This scenario must come up fairly often--one party owns (is the registrant) the domain name, another party actually operates the Site. In my case: my employer is about to license an e-commerce Site to another outfit. My employer is the current registrant of the domain name.
Obviously, the licensee should have access to the domain name record to re-point the domain name to their own web host, to add/change nameservers, etc., change the admin contact address, change the entire MX record, etc. Still, the licensee is just a licensee not the actual owner/registrant. That's important because for instance if the licensee breaches the license agreement, for instance by selling our competitors' products, my employer has the right to terminate the agreement--but that's not worth much if they can't regain control of the domain name.
A couple of ideas i've had so far:
rely on the 'two-tired' account authority which our Registrar offers (which most others likely do as well), i.e., retain myself as the account's superuser, add the licensee as an admin on the account, so they have full 'working' access to the record;
lock the domain to prevent its transfer by the licensee ('CLIENT TRANSFER PROHIBITED' under the 'status' field), though perhaps with admin access they can change this.
On several occasions, we have purchased domain names from 'professional domain name owners' and used the escrow service provided by our Registrar, and it's worked very well. The intention behind such services is obviously to protect buyer and seller during transaction. A service directed to license-type transactions rather than outright changes of ownership would obviously have to function quite differently, still i thought perhaps such a service might offered by one or more Registrars.
And if you had in mind to reply "that's a legal problem", please don't. It's not a legal problem or a sys admin problem, it's a business problem. And like many business problems it has multiple possible solutions from various functions, legal, IT, and whatnot. The legal solution is embodied in the License Agreement. The Sys Admin solution is directed teh administrative minutiae of the domain name record in order to reconciling the possibly conflicting interests of two parties both of whom (apparently) need access to the same domain name record for different reasons.
Does your DNS registrar not allow you to setup an account for your technical contact? Most registrars do, and this is usually adequate.
You do not have to host your DNS with your registrar. You could delegate authority for the zone to some other DNS hosting service. Then give out access to the hosted DNS control panel without giving out access to the accounts at the registrar.
Interesting question. I think this is more a legal issue than a technical one. In my view the best solution is for your employer to retain ownership of the domain and lease its use to the other party, rather than trying to have co-ownership, as co-ownership is always fraught with problems. If I were placed in the same position I'd be asking lawyers, rather than sysadmins.
I don't see it as a legal issue either. Posession is nine tenths of the law. If I purchase the domain name then I "own" it, regardless of what I do with it. Someone else hosting the domain namespace, web site, or email doesn't infer any "ownership" rights as far as I'm concerned, and my "sub-letting" the domain name to another party doesn't infer any ownership either. To me this is a business\technical issue: What rights to the domain namespace does the business agreement allow for and how will the namespace be managed from a technical perspective? I would say that the customer may need to have the right to add\change\delete A, CNAME, MX, and SPF records but shouldn't have the right to modify the SOA or NS records. At any rate, as the party that registered the domain name, you'll always have ultimate control of it.
If I buy a car and hold the title to it then I can do anything I want with it. I can lend it out or lease it but I retain ownership of the car and retain ultimate control of what happens to it.
I would simply have the new company give you all the settings and have someone from your company make all the needed changes. They don't need access to any of your DNS settings beyond the initial setup. Once everything is setup is shouldn't need to be changed again.
By posting your answer, you agree to the privacy policy and terms of service.
asked
4 years ago
viewed
197 times
active | http://serverfault.com/questions/104845/how-should-i-insist-that-a-domain-name-record-be-amended-to-protect-a-co-owner | CC-MAIN-2014-10 | refinedweb | 824 | 59.94 |
abba-baba admixture tests¶
The ipyrad.analysis Python module includes functions to calculate abba-baba admixture statistics (including several variants of these measures), to perform significance tests, and to produce plots of results. All code in this notebook is written in Python, which you can copy/paste into an IPython terminal to execute, or, preferably, run in a Jupyter notebook like this one. See the other analysis cookbooks for instructions on using Jupyter notebooks. All of the software required for this tutorial is included with ipyrad (v.6.12+). Finally, we've written functions to generate plots for summarizing and interpreting results.
import ipyrad.analysis as ipa
import ipyparallel as ipp
import toytree
import toyplot
print(ipa.__version__)
print(toyplot.__version__)
print(toytree.__version__)
0.7.19
0.16.0-dev
0.1.4
The code can be easily parallelized across cores on your machine, or across many nodes of an HPC cluster, using the ipyparallel library (see our ipyparallel tutorial). An ipcluster instance must be running for you to connect to, which you can start by running 'ipcluster start' in a terminal.
ipyclient = ipp.Client()
len(ipyclient)
4
## ipyrad and raxml output files
locifile = "./analysis-ipyrad/pedic_outfiles/pedic.loci"
newick = "./analysis-raxml/RAxML_bipartitions.pedic"
## parse the newick tree, re-root it, and plot it.
tre = toytree.tree(newick=newick)
tre.root(wildcard="prz")
tre.draw(
    height=350,
    width=400,
    node_labels=tre.get_node_values("support"),
)

## store rooted tree back into a newick string.
newick = tre.tree.write()
To give a gist of what this code can do, here is a quick tutorial version, each step of which we explain in greater detail below. We first create a 'baba' analysis object that is linked to our data file; in this example we name the variable bb. Then we tell it which tests to perform, here by automatically generating a number of tests using the generate_tests_from_tree() function. And finally, we calculate the results and plot them.
## create a baba object linked to a data file and newick tree
bb = ipa.baba(data=locifile, newick=newick)
## generate all possible abba-baba tests meeting a set of constraints
bb.generate_tests_from_tree(
    constraint_dict={
        "p4": ["32082_przewalskii", "33588_przewalskii"],
        "p3": ["33413_thamno"],
    })
44 tests generated from tree
## show the first 3 tests
bb.tests[:3]
[{'p1': ['41478_cyathophylloides'],
  'p2': ['29154_superba', '30686_cyathophylla'],
  'p3': ['33413_thamno'],
  'p4': ['32082_przewalskii', '33588_przewalskii']},
 {'p1': ['41954_cyathophylloides'],
  'p2': ['29154_superba', '30686_cyathophylla'],
  'p3': ['33413_thamno'],
  'p4': ['32082_przewalskii', '33588_przewalskii']},
 {'p1': ['41478_cyathophylloides'],
  'p2': ['29154_superba'],
  'p3': ['33413_thamno'],
  'p4': ['32082_przewalskii', '33588_przewalskii']}]
## run all tests linked to bb
bb.run(ipyclient)
[####################] 100% calculating D-stats | 0:02:58 |
## show first 5 results
bb.results_table.head()
By default we do not attach the names of the samples that were included in each test to the results table, since it makes the table much harder to read, and we wanted it to look very clean. However, this information is readily available in the .tests attribute of the baba object, as shown below. Also, we have made plotting functions to show this information clearly as well.
## save all results table to a tab-delimited CSV file
bb.results_table.to_csv("bb.abba-baba.csv", sep="\t")

## show the results table sorted by index score (Z)
sorted_results = bb.results_table.sort_values(by="Z", ascending=False)
sorted_results.head()
## get taxon names in the sorted results order
sorted_taxa = bb.taxon_table.iloc[sorted_results.index]

## show taxon names in the first few sorted tests
sorted_taxa.head()
Interpreting the results of D-statistic tests is actually very complicated. You cannot treat every test as if it were independent, because introgression between one pair of species may cause one or both of those species to appear as if they have also introgressed with other taxa in your data set. This problem is described in great detail in this paper (Eaton et al. 2015). A good place to start, then, is to perform many tests and focus on those which have the strongest signal of admixture. Then, perform additional tests, such as partitioned D-statistics (described further below), to tease apart whether a single introgression event or multiple events are likely to have occurred.
In the example plot below we find evidence of admixture between the sample 33413_thamno (black) and several other samples, but the signal is strongest with respect to 30556_thamno (tests 12-19). It also appears that admixture is consistently detected with samples of (40578_rex & 35855_rex) when contrasted against 35236_rex (tests 20, 24, 28, 34, and 35). Take note, the tests are indexed starting at 0.
## plot results on the tree
bb.plot(height=850, width=700, pct_tree_y=0.2, pct_tree_x=0.5, alpha=4.0);
Because tests are generated from a tree file, only tests that fit the tree topology will be produced. For example, the call below generates zero possible tests because the two samples entered for P3 (the two thamnophila subspecies) are paraphyletic on the tree topology, and therefore cannot form a clade together.
## this is expected to generate zero tests
aa = bb.copy()
aa.generate_tests_from_tree(
    constraint_dict={
        "p4": ["32082_przewalskii", "33588_przewalskii"],
        "p3": ["33413_thamno", "30556_thamno"],
    })
0 tests generated from tree
If you want to run a test that does not fit on your tree you can always write the test out by hand instead of auto-generating it from the tree. Doing it this way is fine when you have few tests to run, but becomes burdensome when writing many tests.
## writing tests by hand for a new object
aa = bb.copy()
aa.tests = [
    {"p4": ["32082_przewalskii", "33588_przewalskii"],
     "p3": ["33413_thamno", "30556_thamno"],
     "p2": ["40578_rex", "35855_rex"],
     "p1": ["39618_rex", "38362_rex"]},
    {"p4": ["32082_przewalskii", "33588_przewalskii"],
     "p3": ["33413_thamno", "30556_thamno"],
     "p2": ["40578_rex", "35855_rex"],
     "p1": ["35236_rex"]},
]

## run the tests
aa.run(ipyclient)
aa.results_table
[####################] 100% calculating D-stats | 0:00:23 |
You can also perform partitioned D-statistic tests, as below. Here we are testing the direction of introgression. If the two thamnophila subspecies are in fact sister species, then they would be expected to share derived alleles that arose in their ancestor, and those shared alleles would be introduced together if either one of them introgressed into a P. rex taxon. As you can see, test 0 shows no evidence of introgression, whereas test 1 shows that the two thamno subspecies share introgressed alleles that are present in two samples of rex relative to sample "35236_rex".
More on this further below in this notebook.
## further investigate with a 5-part test
cc = bb.copy()
cc.tests = [
    {"p5": ["32082_przewalskii", "33588_przewalskii"],
     "p4": ["33413_thamno"],
     "p3": ["30556_thamno"],
     "p2": ["40578_rex", "35855_rex"],
     "p1": ["39618_rex", "38362_rex"]},
    {"p5": ["32082_przewalskii", "33588_przewalskii"],
     "p4": ["33413_thamno"],
     "p3": ["30556_thamno"],
     "p2": ["40578_rex", "35855_rex"],
     "p1": ["35236_rex"]},
]
cc.run(ipyclient)
[####################] 100% calculating D-stats | 0:00:23 |
## the partitioned D results for two tests cc.results_table
## and view the 5-part test taxon table cc.taxon_table
The baba object
The fundamental object for running abba-baba tests is the ipa.baba() object. This stores all of the information about the data, tests, and results of your analysis, and is used to generate plots. If you only have one data file that you want to run many tests on then you will only need to enter the path to your data once. The data file must be a '.loci' file from an ipyrad analysis. In general, you will probably want to use the largest data file possible for these tests (min_samples_locus=4), to maximize the amount of data available for any test. Once an initial baba object is created you can create different copies of that object that will inherit its parameter settings, and which you can use to perform different tests, like below.
## create an initial object linked to your data in 'locifile'
aa = ipa.baba(data=locifile)

## create two other copies
bb = aa.copy()
cc = aa.copy()

## print these objects
print aa
print bb
print cc
<ipyrad.analysis.baba.Baba object at 0x7fc55634a8d0>
<ipyrad.analysis.baba.Baba object at 0x7fc55634ab50>
<ipyrad.analysis.baba.Baba object at 0x7fc55634a110>
The next thing we need to do is to link a 'test' to each of these objects, or a list of tests. In the Short tutorial above we auto-generated a list of tests from an input tree, but to be more explicit about how things work we will write out each test by hand here. A test is described by a Python dictionary that tells it which samples (individuals) should represent the 'p1', 'p2', 'p3', and 'p4' taxa in the ABBA-BABA test. You can see in the example below that we set two samples to represent the outgroup taxon (p4). This means that the SNP frequency for those two samples combined will represent the p4 taxon. For the baba object named 'cc' below we enter two tests using a list to show how multiple tests can be linked to a single baba object.
aa.tests = {
    "p4": ["32082_przewalskii", "33588_przewalskii"],
    "p3": ["29154_superba"],
    "p2": ["33413_thamno"],
    "p1": ["40578_rex"],
}
bb.tests = {
    "p4": ["32082_przewalskii", "33588_przewalskii"],
    "p3": ["30686_cyathophylla"],
    "p2": ["33413_thamno"],
    "p1": ["40578_rex"],
}
cc.tests = [
    {
        "p4": ["32082_przewalskii", "33588_przewalskii"],
        "p3": ["41954_cyathophylloides"],
        "p2": ["33413_thamno"],
        "p1": ["40578_rex"],
    },
    {
        "p4": ["32082_przewalskii", "33588_przewalskii"],
        "p3": ["41478_cyathophylloides"],
        "p2": ["33413_thamno"],
        "p1": ["40578_rex"],
    },
]
Each baba object has a set of parameters associated with it that are used to filter the loci that will be used in a test and to set some other optional settings. If the 'mincov' parameter is set to 1 (the default) then a locus in the data set will only be used in a test if at least one sample from every tip of the tree has data for that locus. For example, in the tests above where we entered two samples to represent "p4", only one of those two samples needs to be present for the locus to be included in our analysis. If you want to require that both samples have data at a locus in order for it to be included in the analysis then you could set mincov=2. However, for the test above setting mincov=2 would filter out all of the data, since it is impossible to have a coverage of 2 for 'p3', 'p2', and 'p1', which each have only one sample. Therefore, you can also enter the mincov parameter as a dictionary setting a different minimum for each tip taxon, which we demonstrate below for the baba object 'bb'.
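The per-taxon filtering rule described above can be sketched in plain Python. The function and data names here are mine, not part of the ipyrad API; this is only a sketch of the logic:

```python
def locus_passes(locus_cov, test, mincov):
    """Check one locus against a (possibly per-taxon) minimum-coverage rule.

    locus_cov maps sample name -> True if the sample has data at this locus;
    test maps tip taxa ('p1'..'p4') -> lists of sample names;
    mincov is either a single int or a {tip: int} dictionary.
    """
    for tip, samples in test.items():
        required = mincov[tip] if isinstance(mincov, dict) else mincov
        covered = sum(1 for s in samples if locus_cov.get(s, False))
        if covered < required:
            return False
    return True

test = {"p4": ["prz1", "prz2"], "p3": ["sup1"], "p2": ["tham1"], "p1": ["rex1"]}
cov = {"prz1": True, "prz2": False, "sup1": True, "tham1": True, "rex1": True}

print(locus_passes(cov, test, 1))   # True: one of the two p4 samples suffices
print(locus_passes(cov, test, {"p4": 2, "p3": 1, "p2": 1, "p1": 1}))  # False: p4 requires both
```

Note how the dictionary form makes it possible to demand full coverage in the outgroup while leaving single-sample tips at the default.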
## print params for object aa
aa.params
database   None
mincov     1
nboots     1000
quiet      False
## set the mincov value as a dictionary for object bb
bb.params.mincov = {"p4": 2, "p3": 1, "p2": 1, "p1": 1}
bb.params
database   None
mincov     {'p2': 1, 'p3': 1, 'p1': 1, 'p4': 2}
nboots     1000
quiet      False
When you execute the 'run()' command all of the tests for the object will be distributed to run in parallel on your cluster (or on the cores available on your machine) as connected to your ipyclient object. The results of the tests will be stored in your baba object under the attributes 'results_table' and 'results_boots'.
## run tests for each of our objects
aa.run(ipyclient)
bb.run(ipyclient)
cc.run(ipyclient)
[####################] 100% calculating D-stats | 0:00:07 |
[####################] 100% calculating D-stats | 0:00:06 |
[####################] 100% calculating D-stats | 0:00:10 |
The results of the tests are stored as a data frame (pandas.DataFrame) in results_table, which can be easily accessed and manipulated. The tests are listed in order and can be referenced by their 'index' (the number in the left-most column). For example, below we see the results for object 'cc' tests 0 and 1. You can see which taxa were used in each test by accessing them from the .tests attribute as a dictionary, or from .taxon_table, which returns it as a dataframe. An even better way to see which individuals were involved in each test, however, is to use our plotting functions, which we describe further below.
## you can sort the results by Z-score
cc.results_table.sort_values(by="Z", ascending=False)

## save the table to a file
cc.results_table.to_csv("cc.abba-baba.csv")

## show the results in notebook
cc.results_table
Entering all of the tests by hand can be a pain, which is why we wrote functions to auto-generate tests given an input rooted tree and a number of constraints on the tests to generate from that tree. It is important to add constraints on the tests, otherwise the number that can be produced becomes very large very quickly. Calculating results runs pretty fast, but summarizing and interpreting thousands of results is pretty much impossible, so it is generally better to limit the tests to those which make some intuitive sense to run. You can see in this example that implementing a few constraints reduces the number of tests from 2006 to 14.
## create a new 'copy' of your baba object and attach a treefile
dd = bb.copy()
dd.newick = newick

## generate all possible tests
dd.generate_tests_from_tree()

## a dict of constraints
constraint_dict={
    "p4": ["32082_przewalskii", "33588_przewalskii"],
    "p3": ["40578_rex", "35855_rex"],
}

## generate tests with constraints
dd.generate_tests_from_tree(
    constraint_dict=constraint_dict,
    constraint_exact=False,
)

## 'exact' constraints are even more restrictive
dd.generate_tests_from_tree(
    constraint_dict=constraint_dict,
    constraint_exact=True,
)
2006 tests generated from tree
126 tests generated from tree
14 tests generated from tree
The .run() command will run the tests linked to your analysis object. An ipyclient object is required to distribute the jobs in parallel. The .plot() function can then optionally be used to visualize the results on a tree. Or, you can simply look at the results in the .results_table attribute.
## run the dd tests
dd.run(ipyclient)
dd.plot(height=500, pct_tree_y=0.2, alpha=4);
dd.results_table
[####################] 100% calculating D-stats | 0:01:00 |
The default (required) input data file is the .loci file produced by ipyrad. When performing D-statistic calculations this file will be parsed to retain the maximal amount of information useful for each test.
An additional (optional) file to provide is a newick tree file. While you do not need a tree in order to run ABBA-BABA tests, you do at least need a hypothesis for how your samples are related in order to set up meaningful tests. By loading in a tree for your data set we can use it to easily set up hypotheses to test, and to plot results on the tree.
## path to a locifile created by ipyrad
locifile = "./analysis-ipyrad/pedicularis_outfiles/pedicularis.loci"

## path to an unrooted tree inferred with tetrad
newick = "./analysis-tetrad/tutorial.tree"
For abba-baba tests you will pretty much always want your tree to be rooted, since the test relies on an assumption about which alleles are ancestral. You can use our simple tree plotting library toytree to root your tree. This library uses Toyplot as its plotting backend, and ete3 as its tree manipulation backend.
Below I load in a newick string and root the tree on the two P. przewalskii samples using the root() function. You can either enter the names of the outgroup samples explicitly or enter a wildcard to select them. We show the rooted tree from a tetrad analysis below. The newick string of the rooted tree can be saved or accessed by the .newick attribute, like below.
## load in the tree
tre = toytree.tree(newick)

## set the outgroup either as a list or using a wildcard selector
tre.root(outgroup=["32082_przewalskii", "33588_przewalskii"])
tre.root(wildcard="prz")

## draw the tree
tre.draw(width=400)

## save the rooted newick string back to a variable and print
newick = tre.newick
You can see in the results_table below that the D-statistic values range from about 0.0 to 0.15 in these tests. These values are not terribly informative on their own, so we instead generally focus on the Z-score, which represents how far the distribution of D-statistic values across bootstrap replicates deviates from its expected value of zero. The default number of bootstrap replicates to perform per test is 1000. Each replicate resamples nloci with replacement.
In these tests ABBA and BABA occurred with pretty equal frequency. The values are calculated using SNP frequencies, which is why they are floats instead of integers, and this is also why we were able to combine multiple samples to represent a single tip in the tree (e.g., see the tests we set up above).
## show the results table
print dd.results_table
    dstat  bootmean  bootstd      Z     ABBA     BABA  nloci
0   0.071     0.071    0.034  2.082  415.266  360.406   9133
1   0.120     0.121    0.035  3.400  421.000  330.484   8611
2   0.085     0.088    0.041  2.044  327.828  276.609   6849
3   0.129     0.129    0.044  2.967  326.953  252.047   6505
4   0.096     0.097    0.037  2.558  376.078  310.266   8413
5   0.135     0.135    0.038  3.519  380.672  290.359   7939
6  -0.092    -0.090    0.040  2.299  278.641  335.234   6863
7  -0.109    -0.109    0.037  2.916  310.672  386.297   8439
8  -0.085    -0.083    0.044  1.948  276.609  327.828   6849
9  -0.096    -0.096    0.038  2.506  310.266  376.078   8413
10 -0.129    -0.130    0.043  3.009  252.047  326.953   6505
11 -0.135    -0.134    0.038  3.556  290.359  380.672   7939
12 -0.023    -0.023    0.032  0.714  435.562  455.750   8208
13 -0.013    -0.014    0.030  0.434  509.906  523.438   9513
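The dstat and Z columns in the table above follow directly from the ABBA and BABA counts. Here is a minimal pure-Python sketch of the calculation; the function names are mine, and ipyrad's actual bootstrap resamples loci from the real data rather than this toy list:

```python
import random

def d_stat(abba, baba):
    """D = (ABBA - BABA) / (ABBA + BABA)."""
    return (abba - baba) / (abba + baba)

def z_score(loci, nboots=1000, seed=42):
    """Bootstrap-resample loci (each an (abba, baba) count pair) and
    return |mean| / std of the bootstrap distribution of D."""
    rng = random.Random(seed)
    boots = []
    for _ in range(nboots):
        sample = [rng.choice(loci) for _ in loci]
        abba = sum(a for a, b in sample)
        baba = sum(b for a, b in sample)
        boots.append(d_stat(abba, baba))
    mean = sum(boots) / len(boots)
    var = sum((d - mean) ** 2 for d in boots) / len(boots)
    std = var ** 0.5
    return abs(mean) / std if std > 0 else float("inf")

# toy per-locus ABBA/BABA site counts
loci = [(1, 0), (0, 1), (1, 1), (2, 0), (1, 0), (0, 2)]
print(d_stat(415.266, 360.406))   # ~0.071, matching test 0 in the table above
print(z_score(loci, nboots=200))
```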
Performing partitioned D-statistic tests is no harder than running the standard four-taxon D-statistic tests: you simply enter tests with 5 taxa in them, listed as p1-p5. We have not developed a function to generate 5-taxon tests from a phylogeny, since this test is more appropriately applied to a smaller number of tests to further tease apart the meaning of significant 4-taxon results. See the example above in the short tutorial. A simulation example will be added here soon...
In this article, I will show you how to create a program for checking prime numbers using a while loop in C. Before we proceed further, let's clear up the definition of a prime number.
Prime Number: A prime number is a number greater than 1 that is only divisible by 1 and the number itself. For example, 13 is a prime number as it is divisible only by 1 and itself, but 4 is not a prime number as it can also be divided by 2.
So, to find out whether a number is prime or not, we can check if the number is divisible by any number from 2 to N-1; if it is divisible by any number in this range, then it is not prime, otherwise it is prime. (The program below improves on this slightly by only testing divisors up to the square root of the number.)
Let's create a program in which we will check all the prime numbers below 50 using a while loop.
#include <stdio.h>
#include <math.h>

int main(void)
{
    int max = 50;
    int current = 4;
    int checker = 2;

    do {
        if(checker > sqrt((double)current))
        {
            checker = 2;
            // number is prime, print it
            printf("%d is prime\n", current);
            current++;
        }
        else if(current % checker == 0)
        {
            /* number is not a prime, let's continue */
            checker = 2;
            current++;
        }
        else
            checker++;
    } while(current < max);
}
Cases considered in the above program:
- If the current number is divisible by the test variable, then the number is NOT prime; increase the current number and reset the test variable.
- If the test variable is greater than the square root of the current number, then by definition it CANNOT divide the current number, so the current number has to be prime (we have tried all numbers up to the square root of the current number and none of them divide it). Increase the current number and reset the test variable.
- Lastly, if neither case above is true, we have to try the next higher divisor: increment the test variable.
The output of the above code will be as below
5 is prime
7 is prime
11 is prime
13 is prime
17 is prime
19 is prime
23 is prime
29 is prime
31 is prime
37 is prime
41 is prime
43 is prime
47 is prime
Program to check if a number is prime or not by taking user input
In this program, we will take a number from the user and check whether it is prime or not.
Executing the above code by entering number 5, the output will be as below
Enter any number: 5
5 is Prime Number
| https://qawithexperts.com/article/c-cpp/program-to-check-prime-number-in-c-using-while-loop/131 | CC-MAIN-2019-39 | refinedweb | 428 | 56.22 |
"""Hi, I am having heaps of trouble with calling my whoWins function from main. I can get all of my other functions to call OK from main, but am having trouble with this one for some reason. Any help would be greatly appreciated. Cheers, Joel"""
import random

Ascore = 0
Bscore = 0
Dscore = 0
table = {1: "Rock", 2: "Paper", 3: "Scissors"}
gameList = []
turnTuple = ()
Awins = {(2,1): "Paper beats rock - Player A wins",
         (1,3): "Rock beats scissors - Player A wins",
         (3,2): "Scissors beat paper - Player A wins."}
Bwins = {(1,2): "Paper beats rock - Player B wins",
         (3,1): "Rock beats scissors - Player B wins",
         (2,3): "Scissors beat paper - Player B wins."}
Cdraws = {(1,1): "Its a draw.", (2,2): "Its a draw.", (3,3): "Its a draw."}

def intro():
    # Prints program intro
    print "\t\t\t\tRock, Paper, Scissors"
    print "\t\t\t\t\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ "
    print "\t\t\t\t\tby"
    print "\t\t\t\t Joel Watts"
    print "\t\t\t\t \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ "
    print
    print " \t\t\t\tInstructions\t\t\t\t\t\n The aim of this game is to play rock paper scissors. The computer is to \
simulate the whole game by producing the results for each game along with a statement declaring who has won the match."

# Here is the code so far
# for players A and B:
while Ascore < 10 and Bscore < 10:
    x = 0
    y = 0
    print "hello"
    x = random.randrange(1,4)
    y = random.randrange(1,4)
    turnA = table[x]
    turnB = table[y]
    turnTuple = (x,y)
    print "player A chose:", turnA
    print "player B chose:", turnB
    print

def whoWins():
    if turnTuple in Awins:
        print Awins[turnTuple]
        Ascore = Ascore + 1
    elif turnTuple in Bwins:
        print Bwins[turnTuple]
        Bscore = Bscore + 1
    elif turnTuple in Cdraws:
        Dscore = Dscore + 1
        print Cdraws[turnTuple]

#TESTING 1 2 3

def priSummary():
    print
    print "A scored: ", Ascore
    print "B scored: ", Bscore
    print "Draws: ", Dscore

def main():
    intro()
    priSummary()
    whoWins()

if __name__ == '__main__':
    main()
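For what it's worth, two likely culprits in the code above are that the game loop runs at module level (so the scores never change and the loop never ends) and that whoWins assigns to Ascore/Bscore without declaring them global, which raises an UnboundLocalError. One possible restructuring, shown here in Python 3 syntax with the score bookkeeping returned from a function instead of stored in globals, might be:

```python
import random

A_WINS = {(2, 1), (1, 3), (3, 2)}   # (A's throw, B's throw) pairs that A wins
B_WINS = {(1, 2), (3, 1), (2, 3)}

def who_wins(turn):
    """Return 'A', 'B', or 'draw' for one (a_throw, b_throw) tuple."""
    if turn in A_WINS:
        return "A"
    if turn in B_WINS:
        return "B"
    return "draw"

def play(max_score=10, rng=random):
    """Play rounds until one player reaches max_score; return the tallies."""
    scores = {"A": 0, "B": 0, "draw": 0}
    while scores["A"] < max_score and scores["B"] < max_score:
        turn = (rng.randrange(1, 4), rng.randrange(1, 4))
        scores[who_wins(turn)] += 1
    return scores

if __name__ == "__main__":
    final = play()
    print("A scored:", final["A"])
    print("B scored:", final["B"])
    print("Draws:", final["draw"])
```

Keeping the loop inside play() and the decision inside who_wins() means neither function needs to mutate module-level state.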
Introduction
The objective of this post is to explain how to develop a very simple URL query string parser using MicroPython. Explaining what a query string is is outside the scope of this post, but you can read more about it here.
Our query string parser will be very simple, so we will assume that the query string has a well-behaved format: all the parameter value pairs are separated from each other by a "&" character, and each parameter is separated from its value by a "=" character. We will also assume that each parameter always has a corresponding value.
The tests shown through this ESP32 tutorial were performed using a DFRobot’s ESP-WROOM-32 device integrated in a ESP32 FireBeetle board. The IDE used was uPyCraft.
The code
Since we want to develop a reusable generic solution, we will encapsulate our code in a function. Naturally, this function will have an input variable so we can pass the URL query string to be processed. We will name our function qs_parse.
def qs_parse(qs):
    ## Function code
Inside the function, we will start by declaring an empty dictionary, which maps well to the query string “parameter=value” structure. Thus, we will by able to access each parameter by its name, since the parameter names will be used as the dictionary keys. You can learn more about dictionaries in this previous post.
parameters = {}
As stated in the previous section, we know that each parameter value pair is separated by an ampersand. So, if we use the “&” character as separator, we can isolate each parameter value pair.
To do it, we can use the string split method, which receives as input a string that is used as separator and returns a list of sub-strings resulting from the separation. The separator is not included in the results and thus we will get a clean list with parameter value pairs in each sub-string.
Note that since the split function is a string method, we call it on the string variable that contains the query parameters. In our case, that string is the input argument of the qs_parse function, which we called qs.
ampersandSplit = qs.split("&")
Since we are developing a generic parsing function, we will assume that we don’t know how many parameters exist on our query. Thus, we will iterate the previously obtained list element by element, using a for … in loop.
for element in ampersandSplit:
    #iteration code
So, in each iteration of the loop, the element variable will hold a string containing one parameter value pair, in the "parameter=value" format. Since we know that the parameter and the value are separated by the equal character, we can again apply the split function, using the "=" character as separator.
equalSplit = element.split("=")
Since we are iterating pair by pair, we know that the output of this operation will always be a list with two positions. The first position will have the name of the parameter (it is the sub-string left of the separator character) and the second position will have the value (it is the sub-string right of the separator character).
Taking this into account, we simply map the first element of the resulting list to a key of the dictionary and the second to the value of the dictionary. Remember that MicroPython indexes are zero based and thus the list first and second elements are in indexes 0 and 1, respectively.
parameters[equalSplit[0]] = equalSplit[1]
To finalize the code, we simply end the function by returning our dictionary, which is stored on the parameters variable. The final source code can be seen below.
def qs_parse(qs):
    parameters = {}
    ampersandSplit = qs.split("&")
    for element in ampersandSplit:
        equalSplit = element.split("=")
        parameters[equalSplit[0]] = equalSplit[1]
    return parameters
Testing the code
To test the code, simply upload it to your ESP32. In my case, I’m using uPyCraft and thus it will create a .py file with the specified name.
I will call the file qs_parse (I’ve used the same name as the function, but it could have been different) and thus I will later need to import it as a module to be able to use the developed function. Upon uploading, to test everything, we can use the following code.
import qs_parse

stringToParse = "param1=val1&param2=val2&param3=val3"
parameters = qs_parse.qs_parse(stringToParse)
print(parameters)
Just as a quick analysis, we start by importing the module where we encapsulated our function. Then we declare a string matching an example of an URL query string. Finally, we call the qs_parse function of our module (remember that both the module and the function have the same name) and print the result. You can check the output at figure 1.
Figure 1 – Result of applying the query string parser.
As can be seen, the output dictionary is composed of keys and values that match the URL parameters. After this, we can simply use the dictionary functions to check which keys and values are available.
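For instance, continuing the example above, the usual dictionary operations work on the result. This sketch inlines the same parse logic as the function developed earlier so it runs standalone:

```python
def qs_parse(qs):
    parameters = {}
    for element in qs.split("&"):
        equalSplit = element.split("=")
        parameters[equalSplit[0]] = equalSplit[1]
    return parameters

parameters = qs_parse("param1=val1&param2=val2&param3=val3")

print(list(parameters.keys()))        # all parameter names
print("param1" in parameters)         # True: membership test by name
print(parameters.get("param9", "?"))  # '?': safe lookup with a default
print(parameters["param2"])           # 'val2': direct access
```

The get method with a default is handy on a microcontroller, where an unexpected request should not raise a KeyError.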
Pingback: ESP32 MicroPython: Getting the query parameters on a Picoweb app | techtutorialsx | https://techtutorialsx.com/2017/09/29/esp32-micropython-developing-a-simple-url-query-string-parser/ | CC-MAIN-2017-43 | refinedweb | 891 | 52.6 |
C Standard library functions, or simply C library functions, are inbuilt functions in C programming. The function prototypes and data definitions of these functions are written in their respective header files. For example: if you want to use the printf() function, the header file <stdio.h> should be included.
#include <stdio.h>

int main()
{
    /* If you write the printf() statement without including the
       header file, this program will show an error. */
    printf("Catch me if you can.");
}
There is at least one function in any C program: the main() function (which is a user-defined function, not a library function). This function is called when the program starts.
There are many library functions available in C programming to help the programmer write good, efficient programs.
Suppose you want to find the square root of a number. You could write your own piece of code to find the square root, but this process is time consuming and the code you write may not be the most efficient way to compute it. In C programming, however, you can find the square root by just using the sqrt() function, which is defined in the header file "math.h".
#include <stdio.h>
#include <math.h>

int main()
{
    float num, root;
    printf("Enter a number to find square root.");
    scanf("%f", &num);

    /* Computes the square root of num and stores it in root. */
    root = sqrt(num);

    printf("Square root of %.2f=%.2f", num, root);
    return 0;
}
Java Runtime Environment (JRE) is a set of software tools for running Java applications. It combines the Java Virtual Machine (JVM) with the platform core classes and supporting libraries. JRE is part of the Java Development Kit (JDK), but it can be downloaded separately.
JRE was originally developed by Sun Microsystems, which is now a wholly-owned subsidiary of Oracle Corporation. It is available for many computer platforms, including Mac, Windows, and Unix.
If the JRE is not installed on a computer, Java programs cannot be recognized by the operating system and will not run.
The JRE software provides a runtime environment in which Java programs can be executed, just as natively compiled programs are executed directly by computer processors. The JRE is available both as a standalone environment and as a web browser plug-in, which allows Java applets to run in a web browser.
A Java Runtime Environment performs the following main tasks.
The class loader loads all the class files needed to execute the program. It adds security by separating the namespace for classes available locally from the namespace for classes received over the network. Once the byte code has been successfully loaded, the next step is the byte-code verifier.
The byte-code verifier checks the byte code for security problems before it is executed, ensuring, for example, that the code does not forge pointers, violate access rights, or access objects as the wrong type.
When Java program is executed, byte code is interpreted by JVM. But this interpretation is a slow process. To overcome this difficulty, the Jere Component JIT compiler is included. JIT makes execution faster
If the JIT compiler library is present, when a special bytecode is executed for the first time, JIT compliance compiles it into native machine code that can be run directly by the Java machine. Once the byte code is compiled by the JIT compiler, the required execution time will be very low. This compilation occurs when the byte code is being executed and hence the name "Just in Time"
Once the bytecode is compiled into that particular machine code, it is cached by the JIT compiler and will be reused for future needs. Therefore, the main performance improvement can be seen using the JIT compiler when the same code is repeatedly executed because JIT uses the machine code which is cached and stored.
Java Virtual Machine (JVM) is part of the Java Runtime Environment (JRE). JVM (Java Virtual Machine) is an abstract machine. It is a specification that Java, compiler and interpreter comply to ensure safe, portable program and runtime environment.
JVM provides a strict set of rules that can be used by the developer to implement a native interpreter that runs Java code on any machine. This is a logical machine rather than a hardware.
JVMs are available for many hardware and software platforms. JVM, JRE and JDK depend on the platform because the configuration of each OS is different. However, Java platform is free.
The JVM performs following main tasks:
When running a Java Virtual Machine program, it requires memory to store many things, including other information extracted from the bitcodes and loaded class files, programs instant objects, parameters of the parameters, return value, Includes intermediate results of local variables and computations.
Java virtual machine organizes the memory needed to execute the program in many runtime data areas.
Registrars of Java Virtual Machine are similar to registers in our computers. However, because the virtual machine is based on a heap, its registers are not used to pass or obtain logic.
In Java, the registers hold the position of the machine, and to maintain that condition, each row of the byte code is updated after execution.
The following four registers hold the state of the virtual machine:
All these registrars are 32-bit wide, and are allocated immediately. This is possible because the compiler knows the size of the local variable and the operand stack, and because the interpreter knows the size of the execution environment
When Java code is compiled, it is converted to byte code, which is similar to the assembly language produced by C and C++ compilers. Each instruction in the byte code consists of an opcode, optionally followed by operands.
Opcodes are represented by 8-bit numbers, while operands vary in length. Operands are aligned to eight bits, and operands larger than eight bits are divided into multiple bytes.
The reason for using such a small memory footprint is to keep the code compact. The Java team felt that compact code was worth a performance hit on the CPU when fetching each instruction — a hit that results from the interpreter's inability to predict the location of each instruction, because the instructions have different lengths. This decision recovers the lost performance elsewhere, because the compact byte code travels over a network faster than larger fixed-length code, which wastes memory space.
Of course, code with fixed-length instructions runs faster on the CPU, because the interpreter can jump through the instructions, knowing their length and exact locations.
The instruction set provides the specification for opcode and operand syntax and values, and for identifier values. It also includes the instructions used to invoke methods.
During execution, the Java stack stores the parameters and state of byte-code methods. Each invoked method of a class is assigned a stack frame, which is stored on the Java stack. Each stack frame holds the method's local variables, its operand stack, and the current state of the execution environment.
The local variables for a method are stored in an array of 32-bit variables indexed by the vars register. Larger values are divided across two local variables. When local variables are used, they are loaded onto the operand stack for the method.
The operand stack is a 32-bit-wide last-in, first-out (LIFO) stack that stores operands for the opcodes in the JVM instruction set. These operands are used both as the parameters of instructions and as the results of instructions.
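To illustrate the stack discipline, here is a small Java sketch of my own (not JVM source) that mimics how an iadd instruction pops its two int operands and pushes the result, using java.util.ArrayDeque as the operand stack:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class OperandStackDemo {

    // Mimics the JVM's iadd: pop two int operands, push their sum.
    public static void iadd(Deque<Integer> operandStack) {
        int b = operandStack.pop();
        int a = operandStack.pop();
        operandStack.push(a + b);
    }

    public static void main(String[] args) {
        Deque<Integer> operandStack = new ArrayDeque<>();
        operandStack.push(2);   // like an iconst/iload pushing an operand
        operandStack.push(40);
        iadd(operandStack);     // pops 40 and 2, pushes 42
        System.out.println(operandStack.peek());
    }
}
```

The real operand stack lives inside each stack frame and is manipulated directly by the interpreter, but the pop-pop-push pattern is the same.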
The execution environment provides information about the current state of a method on the Java stack. It stores a pointer to the previous method, pointers to the method's local variables, and pointers to the top and bottom of the operand stack. It may also contain debugging information.
Each program running in the Java Runtime Environment has a garbage-collected heap. Since instances of class objects are allocated from this heap, another name for the heap is the memory allocation pool. By default, the heap size on most systems is set to 1 MB. Although the heap is set to a specific size when we start a program, it can grow, for example, when new objects are allocated. To ensure that the heap does not get too large, objects that are no longer in use are automatically deallocated, or garbage-collected, by the Java Virtual Machine.
Java performs garbage collection automatically as a background task. Each thread running in the Java Runtime Environment has two stacks attached to it: the first stack is used for Java code, the second for C code. To reduce the memory used by these stacks, we can force Java to perform more aggressive cleaning, and thus reduce the total amount of memory used; to do this, reduce the maximum size of the Java and C code stacks. If our system has a lot of memory, we can instead force Java to perform less aggressive cleaning, reducing the amount of background processing; to do this, increase the maximum size of the Java and C code stacks.
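The heap behaviour described above can be observed from within a running program through the standard java.lang.Runtime API. The class name below is mine, and the exact numbers printed will vary by JVM and settings:

```java
public class HeapStats {

    // Current heap usage in bytes: total reserved heap minus the free portion.
    public static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap:   " + rt.maxMemory());   // upper bound for the heap
        System.out.println("total heap: " + rt.totalMemory()); // currently reserved
        System.out.println("used:       " + usedBytes());

        // Allocating objects grows heap usage until the collector runs.
        byte[] block = new byte[4 * 1024 * 1024];
        System.out.println("after 4 MB allocation: " + usedBytes());
    }
}
```

On HotSpot-style JVMs the maximum and initial heap sizes can typically be set with the -Xmx and -Xms command-line flags, and the per-thread stack size with -Xss.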
Each class in the heap has a constant pool associated with it. Because constants never change, they are usually created at compile time. Items in the constant pool encode all the names used by the methods of a particular class. The class description specifies how many constants exist, and an offset indicating where the list of constants begins.
The method area is a non-heap area of the JVM. It stores the byte code for all Java methods, along with the symbolic tables used for dynamic linking and any additional debugging information associated with the classes.
In addition to all runtime data areas as defined by the Java Virtual Machine Specification and previously described, a running Java application can be used by native methods or other data fields created for it. When a thread calls the original method, then it enters into a new world, in which the structure and security restrictions of the Java virtual machine no longer obstruct its independence.
An original method can access the runtime data areas of the virtual machine (this depends on the basic method interface), but it can do anything else. It can use the registers inside the original processor, allocate memory to any original heap, or use any type of stack.
Native methods are inherently implementation dependent. Implementation designers are free to decide what mechanisms they will use to enable a Java application running on their implementation to invoke native methods.
As a mechanic need a set of tools for repairing machines, a student needs stationary for study and a sportsman need sports related tools such as bat, ball, hockey sticks etc. Similarly a software developer need a set of tools for developing a software.
JDK stands for Java Development Kit, It is a program development environment for writing Java applets and applications.
It consists of a runtime environment that "sits on top" of the operating system layer as well as the tools for programming that developers need to compile, debug, and run applets and applications written in the Java language. JDK provides tool such as Javac compiler, Java Interpreter, Javadoc documentation generator tool, jar and other tool which are needed for programming.
For creating java application JDK must be installed and configured in the system.
Let us look at the steps involved in creating and executing our first java program.
Open any basic text editor such as notepad and type the following program.
Class First { Public static void main(String args[]) { System.out.println("This is my first java program."); } }
Now save the above program with the name First.java in any local hard drive (such as C or D).
To compile and run the program open command prompt (go to run and type cmd then press ok). And move to location where you have saved your java program using CD command.
Then set the path for the compiler and interpreter ( path="c:\program files\java\jdk1.7\bin") which may differ on the basis of operating system and JDK installed.
To Compile: - javac First.java To Run:- java First
Understanding First Java. | https://chercher.tech/java-programming/jvm-jdk-jre | CC-MAIN-2019-18 | refinedweb | 1,837 | 52.6 |
#include <ShowerVertex.h>
The ShowerVertex class is designed to implement the vertex for a branching in the shower for use with the spin correlation alogorithm. 40 of file ShowerVertex.h.
Method to calculate the matrix for the decaying particle.
It this case the argument is a dummy.
Implements ThePEG::HelicityVertex.
Method to calculate the matrix for one of the decay products.
The standard Init function used to initialize the interfaces.
Called exactly once for each class by the class description system before the main function starts or when this class is dynamically loaded.
Access to the matrix element.
Get the matrix element
Definition at line 59 of file ShowerVertex.h.
The assignment operator is private and must never be called.
In fact, it should not even be implemented. | https://herwig.hepforge.org/doxygen/classHerwig_1_1ShowerVertex.html | CC-MAIN-2019-04 | refinedweb | 128 | 60.01 |
Tutorial: Tree-LSTM in DGL¶
Author: Zihao Ye, Qipeng Guo, Minjie Wang, Jake Zhao, Zheng Zhang
In this tutorial, you learn to use Tree-LSTM networks for sentiment analysis. The Tree-LSTM is a generalization of long short-term memory (LSTM) networks to tree-structured network topologies.
The Tree-LSTM structure was first introduced by Kai et. al in an ACL 2015 paper: Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. The core idea is to introduce syntactic information for language tasks by extending the chain-structured LSTM to a tree-structured LSTM. The dependency tree and constituency tree techniques are leveraged to obtain a ‘’latent tree’‘.
The challenge in training Tree-LSTMs is batching — a standard technique in machine learning to accelerate optimization. However, since trees generally have different shapes by nature, parallization is non-trivial. DGL offers an alternative. Pool all the trees into one single graph then induce the message passing over them, guided by the structure of each tree.
The task and the dataset¶
The steps here use the
Stanford Sentiment Treebank in
dgl.data. The dataset provides a fine-grained, tree-level sentiment
annotation. There are five classes: Very negative, negative, neutral, positive, and
very positive, which indicate the sentiment in the current subtree. Non-leaf
nodes in a constituency tree do not contain words, so use a special
PAD_WORD token to denote them. During training and inference
their embeddings would be masked to all-zero.
The figure displays one sample of the SST dataset, which is a constituency parse tree with their nodes labeled with sentiment. To speed up things, build a tiny set with five sentences and take a look at the first one.
import dgl from dgl.data.tree import SST from dgl.data import SSTBatch # Each sample in the dataset is a constituency tree. The leaf nodes # represent words. The word is an int value stored in the "x" field. # The non-leaf nodes have a special word PAD_WORD. The sentiment # label is stored in the "y" feature field. trainset = SST(mode='tiny') # the "tiny" set has only five trees tiny_sst = trainset.trees num_vocabs = trainset.num_vocabs num_classes = trainset.num_classes vocab = trainset.vocab # vocabulary dict: key -> id inv_vocab = {v: k for k, v in vocab.items()} # inverted vocabulary dict: id -> word a_tree = tiny_sst[0] for token in a_tree.ndata['x'].tolist(): if token != trainset.PAD_WORD: print(inv_vocab[token], end=" ")
Out:
Preprocessing... Dataset creation finished. #Trees: 5 the rock is destined to be the 21st century 's new `` conan '' and that he 's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .
Step 1: Batching¶
Add all the trees to one graph, using
the
batch() API.
import networkx as nx import matplotlib.pyplot as plt graph = dgl.batch(tiny_sst) def plot_tree(g): # this plot requires pygraphviz package pos = nx.nx_agraph.graphviz_layout(g, prog='dot') nx.draw(g, pos, with_labels=False, node_size=10, node_color=[[.5, .5, .5]], arrowsize=4) plt.show() plot_tree(graph.to_networkx())
You can read more about the definition of
batch(), or
skip ahead to the next step:
.. note:
**Definition**: :func:`~dgl.batch` unions a list of :math:`B` :class:`~dgl.DGLGraph`\ s and returns a :class:`~dgl.DGLGraph` of batch size :math:`B`. - The union includes all the nodes, edges, and their features. The order of nodes, edges, and features are preserved. - Given that you have :math:`V_i` nodes for graph :math:`\mathcal{G}_i`, the node ID :math:`j` in graph :math:`\mathcal{G}_i` correspond to node ID :math:`j + \sum_{k=1}^{i-1} V_k` in the batched graph. - Therefore, performing feature transformation and message passing on the batched graph is equivalent to doing those on all ``DGLGraph`` constituents in parallel. - Duplicate references to the same graph are treated as deep copies; the nodes, edges, and features are duplicated, and mutation on one reference does not affect the other. - The batched graph keeps track of the meta information of the constituents so it can be :func:`~dgl.batched_graph.unbatch`\ ed to list of ``DGLGraph``\ s.
Step 2: Tree-LSTM cell with message-passing APIs¶
Researchers have proposed two types of Tree-LSTMs: Child-Sum Tree-LSTMs, and \(N\)-ary Tree-LSTMs. In this tutorial you focus on applying Binary Tree-LSTM to binarized constituency trees. This application is also known as Constituency Tree-LSTM. Use PyTorch as a backend framework to set up the network.
In N-ary Tree-LSTM, each unit at node \(j\) maintains a hidden representation \(h_j\) and a memory cell \(c_j\). The unit \(j\) takes the input vector \(x_j\) and the hidden representations of the child units: \(h_{jl}, 1\leq l\leq N\) as input, then update its new hidden representation \(h_j\) and memory cell \(c_j\) by:
It can be decomposed into three phases:
message_func,
reduce_func and
apply_node_func.
Note
apply_node_func is a new node UDF that has not been introduced before. In
apply_node_func, a user specifies what to do with node features,
without considering edge features and messages. In a Tree-LSTM case,
apply_node_func is a must, since there exists (leaf) nodes with
\(0\) incoming edges, which would not be updated with
reduce_func.
import torch as th import torch.nn as nn class TreeLSTMCell(nn.Module): def __init__(self, x_size, h_size): super(TreeLSTMCell, self).__init__() self.W_iou = nn.Linear(x_size, 3 * h_size, bias=False) self.U_iou = nn.Linear(2 * h_size, 3 * h_size, bias=False) self.b_iou = nn.Parameter(th.zeros(1, 3 * h_size)) self.U_f = nn.Linear(2 * h_size, 2 * h_size) def message_func(self, edges): return {'h': edges.src['h'], 'c': edges.src['c']} def reduce_func(self, nodes): # concatenate h_jl for equation (1), (2), (3), (4) h_cat = nodes.mailbox['h'].view(nodes.mailbox['h'].size(0), -1) # equation (2) f = th.sigmoid(self.U_f(h_cat)).view(*nodes.mailbox['h'].size()) # second term of equation (5) c = th.sum(f * nodes.mailbox['c'], 1) return {'iou': self.U_iou(h_cat), 'c': c} def apply_node_func(self, nodes): # equation (1), (3), (4) iou = nodes.data['iou'] + self.b_iou i, o, u = th.chunk(iou, 3, 1) i, o, u = th.sigmoid(i), th.sigmoid(o), th.tanh(u) # equation (5) c = i * u + nodes.data['c'] # equation (6) h = o * th.tanh(c) return {'h' : h, 'c' : c}
Step 3: Define traversal¶
After you define the message-passing functions, induce the right order to trigger them. This is a significant departure from models such as GCN, where all nodes are pulling messages from upstream ones simultaneously.
In the case of Tree-LSTM, messages start from leaves of the tree, and propagate/processed upwards until they reach the roots. A visualization is as follows:
DGL defines a generator to perform the topological sort, each item is a tensor recording the nodes from bottom level to the roots. One can appreciate the degree of parallelism by inspecting the difference of the followings:
print('Traversing one tree:') print(dgl.topological_nodes_generator(a_tree)) print('Traversing many trees at the same time:') print(dgl.topological_nodes_generator(graph))
Out:
Traversing one tree: (tensor([ 2, 3, 6, 8, 13, 15, 17, 19, 22, 23, 25, 27, 28, 29, 30, 32, 34, 36, 38, 40, 43, 46, 47, 49, 50, 52, 58, 59, 60, 62, 64, 65, 66, 68, 69, 70]), tensor([ 1, 21, 26, 45, 48, 57, 63, 67]), tensor([24, 44, 56, 61]), tensor([20, 42, 55]), tensor([18, 54]), tensor([16, 53]), tensor([14, 51]), tensor([12, 41]), tensor([11, 39]), tensor([10, 37]), tensor([35]), tensor([33]), tensor([31]), tensor([9]), tensor([7]), tensor([5]), tensor([4]), tensor([0])) Traversing many trees at the same time: (tensor([ 2, 3, 6, 8, 13, 15, 17, 19, 22, 23, 25, 27, 28, 29, 30, 32, 34, 36, 38, 40, 43, 46, 47, 49, 50, 52, 58, 59, 60, 62, 64, 65, 66, 68, 69, 70, 74, 76, 78, 79, 82, 83, 85, 88, 90, 92, 93, 95, 96, 100, 102, 103, 105, 109, 110, 112, 113, 117, 118, 119, 121, 125, 127, 129, 130, 132, 133, 135, 138, 140, 141, 142, 143, 150, 152, 153, 155, 158, 159, 161, 162, 164, 168, 170, 171, 174, 175, 178, 179, 182, 184, 185, 187, 189, 190, 191, 192, 195, 197, 198, 200, 202, 205, 208, 210, 212, 213, 214, 216, 218, 219, 220, 223, 225, 227, 229, 230, 232, 235, 237, 240, 242, 244, 246, 248, 249, 251, 253, 255, 256, 257, 259, 262, 263, 267, 269, 270, 271, 272]), tensor([ 1, 21, 26, 45, 48, 57, 63, 67, 77, 81, 91, 94, 101, 108, 111, 116, 128, 131, 139, 151, 157, 160, 169, 173, 177, 183, 188, 196, 211, 217, 228, 247, 254, 261, 268]), tensor([ 24, 44, 56, 61, 75, 89, 99, 107, 115, 126, 137, 149, 156, 167, 181, 186, 194, 209, 215, 226, 245, 252, 266]), tensor([ 20, 42, 55, 73, 87, 124, 136, 154, 180, 207, 224, 243, 250, 265]), tensor([ 18, 54, 86, 123, 134, 148, 176, 206, 222, 241, 264]), tensor([ 16, 53, 84, 122, 172, 204, 239, 260]), tensor([ 14, 51, 80, 120, 166, 203, 238, 258]), tensor([ 12, 41, 72, 114, 165, 201, 236]), tensor([ 11, 39, 106, 163, 199, 234]), tensor([ 10, 37, 104, 147, 193, 233]), tensor([ 35, 98, 146, 231]), tensor([ 33, 97, 145, 221]), tensor([ 31, 71, 144]), tensor([9]), tensor([7]), tensor([5]), tensor([4]), tensor([0]))
Call
prop_nodes() to trigger the message passing:
import dgl.function as fn import torch as th graph.ndata['a'] = th.ones(graph.number_of_nodes(), 1) graph.register_message_func(fn.copy_src('a', 'a')) graph.register_reduce_func(fn.sum('a', 'a')) traversal_order = dgl.topological_nodes_generator(graph) graph.prop_nodes(traversal_order) # the following is a syntax sugar that does the same # dgl.prop_nodes_topo(graph)
Note
Before you call
prop_nodes(), specify a
message_func and reduce_func in advance. In the example, you can see built-in
copy-from-source and sum functions as message functions, and a reduce
function for demonstration.
Putting it together¶
Here is the complete code that specifies the
Tree-LSTM class.
class TreeLSTM(nn.Module): def __init__(self, num_vocabs, x_size, h_size, num_classes, dropout, pretrained_emb=None): super(TreeLSTM, self).__init__() self.x_size = x_size self.embedding = nn.Embedding(num_vocabs, x_size) if pretrained_emb is not None: print('Using glove') self.embedding.weight.data.copy_(pretrained_emb) self.embedding.weight.requires_grad = True self.dropout = nn.Dropout(dropout) self.linear = nn.Linear(h_size, num_classes) self.cell = TreeLSTMCell(x_size, h_size) def forward(self, batch, h, c): """Compute tree-lstm prediction given a batch. Parameters ---------- batch : dgl.data.SSTBatch The data batch. h : Tensor Initial hidden state. c : Tensor Initial cell state. Returns ------- logits : Tensor The prediction of each node. """ g = batch.graph g.register_message_func(self.cell.message_func) g.register_reduce_func(self.cell.reduce_func) g.register_apply_node_func(self.cell.apply_node_func) # feed embedding embeds = self.embedding(batch.wordid * batch.mask) g.ndata['iou'] = self.cell.W_iou(self.dropout(embeds)) * batch.mask.float().unsqueeze(-1) g.ndata['h'] = h g.ndata['c'] = c # propagate dgl.prop_nodes_topo(g) # compute logits h = self.dropout(g.ndata.pop('h')) logits = self.linear(h) return logits
Main Loop¶
Finally, you could write a training paradigm in PyTorch.
from torch.utils.data import DataLoader import torch.nn.functional as F device = th.device('cpu') # hyper parameters x_size = 256 h_size = 256 dropout = 0.5 lr = 0.05 weight_decay = 1e-4 epochs = 10 # create the model model = TreeLSTM(trainset.num_vocabs, x_size, h_size, trainset.num_classes, dropout) print(model) # create the optimizer optimizer = th.optim.Adagrad(model.parameters(), lr=lr, weight_decay=weight_decay) def batcher(dev): def batcher_dev(batch): batch_trees = dgl.batch(batch) return SSTBatch(graph=batch_trees, mask=batch_trees.ndata['mask'].to(device), wordid=batch_trees.ndata['x'].to(device), label=batch_trees.ndata['y'].to(device)) return batcher_dev train_loader = DataLoader(dataset=tiny_sst, batch_size=5, collate_fn=batcher(device), shuffle=False, num_workers=0) # training loop for epoch in range(epochs): for step, batch in enumerate(train_loader): g = batch.graph n = g.number_of_nodes() h = th.zeros((n, h_size)) c = th.zeros((n, h_size)) logits = model(batch, h, c) logp = F.log_softmax(logits, 1) loss = F.nll_loss(logp, batch.label, reduction='sum') optimizer.zero_grad() loss.backward() optimizer.step() pred = th.argmax(logits, 1) acc = float(th.sum(th.eq(batch.label, pred))) / len(batch.label) print("Epoch {:05d} | Step {:05d} | Loss {:.4f} | Acc {:.4f} |".format( epoch, step, loss.item(), acc))
Out:
TreeLSTM( (embedding): Embedding(19536, 256) (dropout): Dropout(p=0.5, inplace=False) (linear): Linear(in_features=256, out_features=5, bias=True) (cell): TreeLSTMCell( (W_iou): Linear(in_features=256, out_features=768, bias=False) (U_iou): Linear(in_features=512, out_features=768, bias=False) (U_f): Linear(in_features=512, out_features=512, bias=True) ) ) Epoch 00000 | Step 00000 | Loss 433.6387 | Acc 0.3077 | Epoch 00001 | Step 00000 | Loss 247.8803 | Acc 0.7326 | Epoch 00002 | Step 00000 | Loss 737.2761 | Acc 0.4249 | Epoch 00003 | Step 00000 | Loss 315.7477 | Acc 0.7509 | Epoch 00004 | Step 00000 | Loss 154.1384 | Acc 0.8352 | Epoch 00005 | Step 00000 | Loss 289.3680 | Acc 0.7766 | Epoch 00006 | Step 00000 | Loss 193.6976 | Acc 0.8425 | Epoch 00007 | Step 00000 | Loss 95.8642 | Acc 0.8938 | Epoch 00008 | Step 00000 | Loss 72.3045 | Acc 0.9231 | Epoch 00009 | Step 00000 | Loss 60.1748 | Acc 0.9341 |
To train the model on a full dataset with different settings (such as CPU or GPU), refer to the PyTorch example. There is also an implementation of the Child-Sum Tree-LSTM.
Total running time of the script: ( 0 minutes 2.204 seconds)
Gallery generated by Sphinx-Gallery | https://docs.dgl.ai/tutorials/models/2_small_graph/3_tree-lstm.html | CC-MAIN-2020-29 | refinedweb | 2,227 | 59.7 |
How to tell a TextView to scroll?
Hi, I have info printing into text view.. but as I append new data it will not scroll the textview.. but just show 1 line of the new data and you have to scroll to see the rest manually with a finger scroll. Is there any way to fix this or am I stuck?
See this example posted few days ago. (The scroll function is based on a post by JonB sometime ago).
Thanks Abcabc!
Here is the only function you need to do the TextView scroll. The function will just read your 'textview' content and offset. Works great!
def scroll():
v['textview1'].content_offset = (0, v['textview1'].content_size[1] -v['textview1'].height) | https://forum.omz-software.com/topic/3653/how-to-tell-a-textview-to-scroll/? | CC-MAIN-2022-27 | refinedweb | 119 | 94.96 |
US6718331B2 - Method and apparatus for locating inter-enterprise resources using text-based strings - Google PatentsMethod and apparatus for locating inter-enterprise resources using text-based strings Download PDF
Info
- Publication number
- US6718331B2US6718331B2 US09736583 US73658300A US6718331B2 US 6718331 B2 US6718331 B2 US 6718331B2 US 09736583 US09736583 US 09736583 US 73658300 A US73658300 A US 73658300A US 6718331 B2 US6718331 B2 US 6718331B2
- Authority
- US
- Grant status
- Grant
- Patent type
-
- Prior art keywords
- resource
- identifier
- enterprise
-2209/00—Indexing scheme relating to G06F9/00
- G06F2209/46—Indexing scheme relating to G06F9/46
- G06F2209/462
Abstract
Description
1. Technical Field
The present invention relates to enterprise resources and, in particular, to accessing resources with disparate sources and technologies. Still more particularly, the present invention provides a method, apparatus, and program for locating inter-enterprise resources using text-based strings.
2. Description of Related Art
Enterprise Java Beans (EJB) is a component software architecture from Sun that is used to build Java applications that run in a server. EJB uses a “container” layer that provides common functions such as security and transaction support and delivers a consistent interface to applications regardless of the type of server. Components are program modules that are designed to interoperate with each other at runtime. Components may be written by different programmers using different development environments and they may or may not be platform independent. Components can be run in stand-alone machines, on a LAN, intranet or the Internet.
The terms component and object are used synonymously. Component architectures have risen out of object-oriented technologies, but the degree to which they comply to all the rules of object technology is often debated. Component architectures may use a client/component model, in which the client application is designed as the container that holds other components or objects. The client container is responsible for the user interface and coordinating mouse clicks and other inputs to all the components. A pure object model does not require a container. Any object can call any other without a prescribed hierarchy.
Common Object Request Broker Architecture (CORBA) defines the communication protocols and datatype mappings for EJBs. CORBA is a standard from the Object Management Group (OMG) for communicating between distributed objects. CORBA provides a way to execute programs (objects) written in different programming languages running on different platforms no matter where they reside in the network. Technically, CORBA is the communications component of the Object Management Architecture (OMA), which defines other elements such as naming services, security service, and transaction services.
A naming service is software that converts a name into a physical address on a network, providing logical to physical conversion. Names can be user names, computers, printers, services, or files. The transmitting station sends a name to the server containing the naming service software, which sends back the actual address of the user or resource. The process is known as name resolution. A naming service functions as a White Pages for the network.
CORBA defines an Inter-ORB-reference (IOR), which may be externalized as a text string; however, an IOR is not human-readable or interpretable and applies only to CORBA objects. Microsoft Component Object Model (COM), which defines a structure for building program routines that can be called up and executed in a Windows™ environment, defines a moniker as a generalized resource handle that can be externalized as human readable. However, monikers require binding to the resource and apply only to COM objects. The prior art fails to provide a standard method for identifying and accessing resources across technologies, such as EJB, CORBA, and COM.
Therefore, it would be advantageous to provide a method, apparatus, and program for locating inter-enterprise resources using human readable text-based strings.
The present invention provides a standard format for a text string called an enterprise identifier, which acts as a handle to access resources from disparate sources and technologies. Enterprise identifiers use extensible markup language (XML) format to allow a resource identifier to be created manually without accessing the resource. The identifier may be easily passed between enterprises via business-to-business connection, e-mail, telephone, or facsimile. Data may be extracted from the identifier for display or programmatic use without accessing the resource, thus avoiding unnecessary data access and an exemplary EI string for a resource in an EJB environment in accordance with a preferred embodiment of the present invention;
FIG. 5 is a data flow diagram of resource access between enterprises using an Enterprise Identifier in accordance with a preferred embodiment of the present invention; and
FIG. 6 is a flowchart of the operation of an EI object receiving a call in accordance with a preferred embodiment, servers 104, 106 are connected to network 102 along with storage unit 108. In addition, clients 110, 112, and 114 also are connected to network 102. These clients 110, 112, and 114 may be, for example, personal computers or network computers. In the depicted example, server 104 provides data, such as boot files, operating system images, and applications to clients 110-114. Clients 110, 112, and 114 are clients to server 104. Network data processing system 100 may include additional servers, clients, and other devices not shown. For example, naming server 106 may provide a naming service for resource provided in the network, such as those provided by server 104..
In accordance with a preferred embodiment of the present invention, an Enterprise Identifier (EI) provides information necessary for a client application to obtain a reference to an enterprise resource instance, which may be local to the client machine, remote within the enterprise or business domain (intra-enterprise), or remote in another enterprise or business domain (inter-enterprise). The EI is a text-based handle with enough intelligence to obtain information from various sources, depending on the circumstances, and return a reference to a resource instance in as specific a manner as possible. The EI is simple enough to construct manually, such as through e-mail or over the telephone.
The EI string is comprised of address information and optional attributes, described in extensible markup language (XML) and Document Type Definition (DTD). DTD is a language that describes the contents of an standard generalized markup language (SGML) document. The DTD is also used with XML, and the DTD definitions may be embedded within an XML document or in a separate file. The DTD may also contain data type information.
Referring to FIG. 2, a block diagram of a data processing system that may be implemented as a server, such as servers 104, 106, an exemplary EI string for a resource in an EJB environment is shown in accordance with a preferred embodiment of the present invention. Enterprise Identifier 400 includes a name server identifier 410, a resource identifier 420, and optional attributes 430. Name server identifier 410 points to the location of the name server used to resolve the EI reference. The uniform resource locator (URL) protocol value is used to distinguish between name server types for making connections and queries. For example, an EJB name server would be prefixed by “iiop”, and a lightweight directory access protocol (LDAP) name server would be prefixed by “ldap”.
The URL portion of the EI string points to the name server to be used for resolving the EI reference. In homogeneous namespaces, such as for EJBs within an enterprise, this may simply be the URL of an EJB name server. Heterogeneous namespaces, such as between EJBs and COM objects, or homogeneous namespaces that cross enterprise boundaries may require use of a shared global name server, such as an LDAP directory.
Resource identifier 420 includes a resource name and primary key, the latter in name-value pairs with optional data types. Resource name may vary according to the resource type, which could be a home or factory lookup name for an EJB, or an object lookup name for an LDAP entry, with the communications protocols being determined by the server type as given in the domain namespace identifier. Primary key values are not necessary for serialized Java objects.
Optional attributes 430 are not used in resolving the EI, but can be used to hold additional information, such as a “short name” value. The optional attributes comprise name-value pairs. It should be emphasized that the above example does not in any way constrain the EI format when used in other application environments or with other naming schemes, because both the format and supporting programming logic are soft configurable so as to apply potentially to any named enterprise resource. For example, the use of monikers in COM may be adaptable to EIs.
An EI object created from an EI string may have the following methods.
newEIObject(EI string)—creates an instance of an EI, which in the EJB environment will typically be stored using a container—or bean-managed persistence (the persistence mechanism is transparent to the EI).
getObject( )—returns an object narrowed to the most specific type that can be determined at run time. For EJBs, at a minimum this is an EJBObject. Serialized Java objects do not need narrowing.
getEIString( )—returns the object's EI string.
getAttribute(String attributeName)—returns an arbitrary named attribute, such as a “short name”.
setAttribute(String attributeName Object attributeValue)—modifies an arbitrary named attribute (assuming that it is mutable).
With reference now to FIG. 5, a data flow diagram of resource access between enterprises using an Enterprise Identifier is shown in accordance with a preferred embodiment of the present invention. In this example, a client in enterprise A accesses a specific part using a list of EIs provided by enterprise B. For simplicity, it is assumed that both enterprises deploy their applications in the EJB environment.
In step 1, using normal EJB protocols, a client A application invokes a “getPartsList” method on an Assembly object in enterprise B. Internet transport and security mechanisms are pre-defined according to the operative business or trading partner agreement and thus are outside the EI scope.
In step 2 the enterprise B Assembly object server acts as client to do a “SELECT” on a relational database (RDB), which returns the primary keys of the assembly's component parts. This is a relatively lightweight approach. Alternatively, the Assembly could query the Part objects' home to return a collection of references. Using the returned primary keys, in step 3 the Assembly object creates the associate EI objects. Optional attributes values such as for “short name” may be obtained either via the RDB query or via “get” requests to the actual Part object instances, depending on how the application is written.
At the completion of the “getPartsList” method, in step 4 the Assembly object returns an array of EI objects. In this example, the client never deals directly with the EI strings. However, these strings may also have been returned in text format for use in e-mail or similar applications.
In step 5 client A populates the desktop with icons representing parts in the assembly. For each icon, client A calls an EI object's “getAttribute” method to obtain the “short name” value. Because this value is stored in the EI object, no actual resource access is needed. In step 6 the human user clicks on a specific part icon for display, which causes client A to invoke the associated EI object's (now on the client machine) “getObject” method.
Let's assume that the EJB home or factory name is stored as an LDAP entry for inter-enterprise access. In step 7 the EI object makes a call to the LDAP directory, which returns the EJB home or factory location of the part resource, which in turn the EI object uses to access the part instance in step 8 and return its narrowed reference to client A in step 9.
Finally, in step 10 client A invokes the “displayPart( )” method on the part instance, just as if the EI were not involved.
With reference now to FIG. 6, a flowchart of the operation of an EI object receiving a call is illustrated in accordance with a preferred embodiment of the present invention. The process begins and receives a call (step 602). A determination is made as to whether the call is to the “newEIObject(EI String)” method (step 604). If the call is to the “newEIObject” method, the process creates an instance of the EI object (step 606) and ends.
If the call is not to the “newEIObject” method in step 604, a determination is made as to whether the call is to the “getObject( )” method (step 608). If the call is to the “getObject” method, the process access the name server and receives the resource location (step 610). Then, the process access the resource and returns to the caller the object narrowed to the most specific type (step 612) and ends.
If the call is not to the “getObject” method in step 608, a determination is made as to whether the call is to the “getEIString( )” method (step 614). If the call is to the “getEIString” method, the process returns the EI objects EI string (step 616) and ends. If the call is not to the “getEIString” method in step 614, a determination is made as to whether the call is to the “getAttribute(String attributeName)” method (step 618). If the call is to the “getAttribute” method, the process returns the named attribute, such as “short name,” and ends.
If the call is not to the “getAttribute” method in step 618, a determination is made as to whether the call is to the “setAttribute(String attributeName Object attributeValue)” method (step 622). If the call is to the “setAttribute” method, the process modifies the named attribute (step 624) and ends. If the call is not to the “setAttribute” method in step 622, the process executes the called method or returns an error if the called method cannot be executed (step 626). Thereafter, the process ends.
Thus, the present invention solves the disadvantages of the prior art by providing an Enterprise Identifier (EI). An EI provides the necessary information such that a client application can obtain a reference to an enterprise resource instance. The EI must support multiple naming schemes and application environments. In order to provide this degree of flexibility, an EI relies upon XML as a self-describing grammar for the identifier format and Java and other vendor-neutral technologies for using the identifier in order to obtain a specific resource reference. The EI conceptual framework is intended to be readily adaptable for proprietary technologies. The identifier may be easily passed between enterprises via business-to-business connection, e-mail, telephone, or facsimile. Data may be extracted from the identifier for display or programmatic use without accessing the resource, thus avoiding unnecessary data access.
I recently came across the website for Pepperstone, a forex broker. They give away FX tick data for 15 currency pairs. The data is stored in compressed CSV files, one file per month per currency pair. They include “fractional pip spreads in millisecond detail,” with timestamps in GMT. This seems to be really good data, and I was surprised to find it for free. New files arrive monthly, with about a two-month lag.
The only problem is the hassle of downloading such a large number of files: 1,215 zip-files, totaling 21GB. So, I wrote a Python script to handle it. The script should be saved in the download folder. It parses html to find the download links. It doesn’t download any files already present in the target folder, so it is suitable for running updates.
import urllib, os, urllib2, re, ntpath

folder = r'E:/FX'
localFiles = os.listdir(folder)
print 'already downloaded ' + str(len(localFiles)) + ' files'

# find all available files
url = ''
req = urllib2.Request(url)
page = urllib2.urlopen(req)
html = page.read()
html = re.sub(r'[\n\r]+', '', html)
anchor_pattern = re.compile(r'.*?\.zip')
anchors = anchor_pattern.findall(html)

# determine the missing files
remoteFiles = []
for anchor in anchors:
    filename = ntpath.basename(anchor)
    if not filename in localFiles:
        remoteFiles.append(anchor)

# download the missing files
print 'downloading ' + str(len(remoteFiles)) + ' new files'
os.chdir(folder)
for remoteFile in remoteFiles:
    try:
        filename = ntpath.basename(remoteFile)
        urllib.urlretrieve(remoteFile, filename)
    except:
        print 'error with file: ' + filename
    print 'finished with: ' + filename

Script to download 7 years of FX tick data for 15 currency pairs.
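For modern Python, the same logic might look like the Python 3 sketch below. The archive URL is left blank as in the original script, and the `href` regex is an assumption about the page's markup:

```python
# Python 3 sketch of the same updater. URL is left blank (as in the original);
# the href="...zip" pattern is an assumption about the archive page's HTML.
import os
import re
import urllib.request
from urllib.parse import urlsplit

URL = ''           # Pepperstone tick-data archive page (fill in)
FOLDER = r'E:/FX'  # target folder

def find_zip_links(html):
    """Return all .zip hrefs found in the page source."""
    return re.findall(r'href="([^"]+?\.zip)"', html)

def missing_files(links, local_names):
    """Keep only links whose filename is not already downloaded."""
    local = set(local_names)
    return [link for link in links
            if os.path.basename(urlsplit(link).path) not in local]

def update(url=URL, folder=FOLDER):
    local = os.listdir(folder)
    html = urllib.request.urlopen(url).read().decode('utf-8', 'replace')
    todo = missing_files(find_zip_links(html), local)
    print('downloading %d new files' % len(todo))
    for link in todo:
        name = os.path.basename(urlsplit(link).path)
        try:
            urllib.request.urlretrieve(link, os.path.join(folder, name))
        except OSError:
            print('error with file: ' + name)
        print('finished with: ' + name)
```

Splitting the link parsing and the file filtering into small functions also makes the non-network parts easy to test.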
This is a tracking bug for CSS3 selector implementation in Mozilla. Initially, there is widespread desire for the :not() selector, please list others desired and what we need them for in this bug. Personally, I think that the :nth-child(an+b) and :nth-of-type(an+b) and their related pseudos are extremely compelling and useful (table rows alternating colors is the obvious example).
bug 57686 is reportedly blocked by lack of the :not() pseudo-class...
Adding bug 68206 which requests the CSS3 ::selection pseudo-element for skinability purposes.
Can anyone estimate when this work might be done? It would be appreciated for mozilla 1.0 / NS 6.5
This is a tracking bug. Since you have bug 57686 which needs the :not selector, I'm thinking that we should spin off a separate bug requesting that selector specifically, and mark your bug dependent on that one instead of this one. It could take *years* to do all of the CSS3 selectors... I'm also reassigning this to Daniel since he will be taking over the new selector implementations in all likelihood (if anyone disagrees, please make the proper reassignments).
setting to moz1.0 for now
Marking "meta" since this is a tracking bug. Let's file new bugs each time we want to track the implementation of a particular part of CSS3 selectors, and mark them as dependencies.
status of css3 selectors that did not already exist in css 2:
IMPLEMENTED: substring matching attribute selectors, negation pseudo-class, root pseudo-class, namespaces in type and attribute selectors (though buggy; see 72302)
RESERVED in nsCSSAtomList.h (by peter linss years ago !!!) AND PARSED BUT NOT IMPLEMENTED: :checked, :enabled, :disabled, :selection
Bugs targeted at mozilla1.0 without the mozilla1.0 keyword moved to mozilla1.0.1 (you can query for this string to delete spam or retrieve the list of bugs I've moved)
CSS3 Selectors are now a candidate recommendation:
we don't have bugs for:
:target
:lang() [CSS2, but still missing]
:first-of-type / :last-of-type
:only-child
:only-of-type
~ (indirect adjacent)
:contains()
Apart from this, the CSS3 Selector test shows a problem with *:not(:hover) [it's ignored], and the :enabled and :disabled tests don't load. There are also some problems with namespaces I can't determine any further..
Added the bug for :target (bug 188734). bug 75186 (:empty) is actually not fixed at all. There are other bugs where the issues are reported: bug 157395, bug 98997 and bug 188953.
:contains would be very nice to have. One example is for table formatting of empty cells, user you add background colour or even content to show it was purposely empty, without the need for additional classes.
You can do this with :empty, which is already supported by Mozilla. bug 221981 is for contains(). Please don't comment on bugs without additional information. It has been told to me that this is only slowing down the developers ;).
*** Bug 311039 has been marked as a duplicate of this bug. ***
Here is a nice test-suite for CSS3 selectors:
Opera 9.5 and KHTML 3.5.6 have a full
oups, sorry for spam. Opera 9.5 and KHTML 3.5.6 have a full implementation of css3 selectors.
KHTML 3.5.6 fails a bunch of the tests in (I've been clicking on a bunch randomly (probably about 20% of them), and already noticed 57, 97b, 98b, 103, 115b, 150, 171, 172a, 172b, d5, and d5b.) And that test suite isn't complete (although probably more complete than the one in comment 18, which I'm guessing is what led to that claim).
(In reply to comment #18) > Here is a nice test-suite for CSS3 selectors: > For the record, Firefox 3.1b2 passes this suite 100%. jresiq has another test suite available at the link below, which 3.1b2 is currently 99.3% passing.
The selectors draft is going back to last call shortly with ::selection removed, so we're done here. | https://bugzilla.mozilla.org/show_bug.cgi?id=65133 | CC-MAIN-2017-13 | refinedweb | 666 | 65.83 |
bexbier Wrote:hi

I'm a bit lost, I think. I downloaded the launcher and the m360 skin, then copied the launcher directory to /usr/share/xbmc/plugins and also tried /home/user/.xbmc/plugins. Then I ran xbmc and entered the path manually under games->application, but I can't see the launcher "script" to add it to favourites ;(

With search I can't get to /home/user/.xbmc, so I have to type the path manually.

Why can't I see the launcher icon?

I have xbmc installed under ubuntu 8.10; the svn nr. is 15810.

Can anybody help me? What am I doing wrong?

regards

bex
Quote:#!/usr/bin/python
import os
cmd = '/usr/bin/ratpoison | /usr/bin/firefox'
os.system(cmd)
WiSo Wrote:forget it. Now I get different results.

I get an exception after -> Scriptresult: Success

in thread.cpp line 266: m_iLastTime = GetTickCount() * 10000;

Also the call stack doesn't show anything unusual.

What's weird is that if I click retry, it works and shows the following output:

thread start, auto delete: 0

WindowHelper thread started

Python script stopped

Does it mean the first script stops after I started a new thread?

Dunno, I'll have to sleep a few nights over it.
Broads Wrote:OK, what am I doing wrong?

I really want to have a play with this plugin, but I am having some teething problems.

I am running Vista 64, if it makes any difference. When I go to add a plugin as a source, the plugin adds launcher fine.

But when I click on the launcher icon, it says "could not connect to network source". Any ideas what I'm doing wrong?

Thanks.
fidoboy Wrote:I have another problem: I can't launch games using this plugin (I've tried with the Crysis and Assassin's Creed games). I've added both launchers, but when I try to launch a game, it has problems with DirectX or something like that; it seems that XBMC is holding the graphics resources exclusively, so I think that before launching the game this plugin should minimize or release the resources, and reload them when the app/game has ended...

NOTE: I've noticed that both games have a command line option, something like -mce, that is used to launch the game from Vista Media Center. Can I use that parameter with this plugin? What is the purpose of this command line option?
regards, | http://forum.kodi.tv/showthread.php?tid=35739&page=11 | CC-MAIN-2016-30 | refinedweb | 400 | 72.87 |
On Thu, 21 Feb 2002, Jose Alberto Fernandez wrote:
> To tell you the truth I have had no chance to look at your proposed changes,
> but since in my <antlib> proposal I am heavily modifying ProjectHelper
> I feel concerned :-o
>
> If I understand correctly what you are trying to achieve, I would suggest defining
> a new factory class that is the one having the createProject() method and that can be
> driven by whatever is controling ANT.
>
> In my view ProjectHelper should stay as the default implementation used by the factory
> which should mean real minimal changes to ProjectHelper as such.
If ProjectHelper is not modified, then how can it delegate to a different
processor?
Right now the only entry point ( except static methods that are
unchanged) is the configureProject(), which calls private stuff.
All the private stuff ( the actual xml processing ) just moved in
ProjectHelperImpl, so any change you made can be re-done there
( obviously changes in the xml helper make me concerned because
other helpers will need to duplicate it :-).
> So ProjectFactory, should be the thing to work on.
Creating the project is just a side-thing, the XML processing is
what I want to abstract.
I need a createProject() because the helper may want to plug
a different implementation ( i.e. class that extends Project ).
The 'right' thing would be to have configureProject()
_return_ a project, instead of being passed one. This way
it can construct it from what's inside build.xml.
But for now plugging in a different xml processor and
letting the plugin create the Project impl is more than
enough for all use cases, creating different Project
impl. based on <project> attributes or namespace
doesn't seem like a big necessity.
If we would need that, it wouldn't be very hard - it
can be done using a new configureProject() method, with the
old method providing backward compatibility.
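 The delegation being discussed could be sketched roughly like this (illustrative Python, not Ant's actual Java API; the class and method names here are stand-ins):

```python
# Illustrative sketch: ProjectHelper keeps the public entry point and
# delegates the XML work to a pluggable implementation, which may also
# supply the Project instance (and configureProject _returns_ the project).

class Project:                   # stand-in for org.apache.tools.ant.Project
    def __init__(self):
        self.targets = {}

class DefaultHelperImpl:
    def create_project(self):
        return Project()         # a plugin could return a Project subclass
    def parse(self, project, build_file):
        # a real implementation would parse the build file's XML here
        project.targets['compile'] = build_file

class ProjectHelper:
    def __init__(self, impl=None):
        self.impl = impl or DefaultHelperImpl()
    def configure_project(self, build_file):
        project = self.impl.create_project()  # plugin chooses the Project impl
        self.impl.parse(project, build_file)
        return project           # returned, not passed in
```

A different XML processor is plugged in simply by passing another impl object, leaving the helper's public surface unchanged.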
> This looks like something I had always thought we should have done, which is
> that the behaviour of calling ANT from the command line should be exactly
> the execution of:
> <ant target="targets" >
> <param .../>
> </ant>
>
> That would have save us so many inconsistencies :(
Exactly. Except that Ant.java ( or EmbededAnt.java ) will not
have the System.exit() - so it can be embedded easily in any
application.
> Maybe you could put your changes in the sandbox so that we can look at it more precisely.
The change on ProjectHelper is pretty straightforward. I'll put
the new xml processor in commons-sandbox ( it'll be a general-purpose
component ).
For the EmbAnt.java - I'm still working on it, I'll send it to
the list and if it's a problem I'll move it to commons-sandbox
too ( as a component that embeds ant ).
Costin
I constructed a geodesic dome.
Step 1: Supply List
Materials:
1. Wood for struts of dome and base of dome (amount depends on type and size of dome)
2. Addressable LED strip (16.4ft/5m Addressable Color LED Pixel Strip 160leds Ws2801 Dc5v)
3. Arduino Uno (Atmega328 - assembled)
4. Prototype board (Penta Angel Double-Side Prototype PCB Universal (7x9cm))
5. Acrylic for diffusing LEDs (Cast Acrylic Sheet, Clear, 12" x 12" x 0.118" Size)
6. Power supply (Aiposen 110/220V to DC12V 30A 360W Switch Power Supply Driver)
7. Buck converter for Arduino (RioRand LM2596 DC-DC Buck Converter 1.23V-30V)
8. Buck converter for LEDs and sensors (DROK Mini Electric Buck Voltage Converter 15A)
9. 120 IR sensors (Infrared Obstacle Avoidance Sensor Module)
10. Five 16 channel multiplexers (Analog/Digital MUX Breakout - CD74HC4067)
11. Six 8 channel multiplexers (Multiplexer Breakout - 8 Channel (74HC4051))
12. Five 2 channel multiplexers (MAX4544CPA+)
13. Wire wrap wire (PCB Solder 0.25mm Tin Plated Copper Cord Dia Wire-wrapping Wire 305M 30AWG Red)
14. Hook-up wire (Solid Core, 22 AWG)
15. Pin Headers (Gikfun 1 x 40 Pin 2.54mm Single Row Breakaway Male Pin Header)
16. Five MIDI jacks (Breadboard-friendly MIDI Jack (5-pin DIN))
17. Ten 220ohm resistors for MIDI jacks
18. Stand-off spacers for mounting electronics to dome (Stand-off Spacer Hex M3 Male x M3 Female)
19. Thread adapters to connect stand-offs to wood (E-Z Lok Threaded Insert, Brass, Knife Thread)
20. Epoxy or Gorilla Superglue
21. Electrical tape
22. Solder
Tools:
1. Soldering Station
2. Power drill
3. Circular saw
4. Orbital sander
5. Jig saw
6. Miter saw
7. Protractor
8. 3D printer
9. Wire cutters
10. Wire wrap tool
11. Laser cutter for cutting LED plates (optional)
12. CNC shopbot for base of dome (optional)
Step 2: Designing the Geodesic Dome
As I mentioned in the intro, there are several online sources for building your own geodesic dome. A geodesic dome is a network of triangles that approximates a perfect spherical surface. To construct your own dome, you must first select a dome diameter and class.
I used a site called Domerama to help me design a 4V dome that was truncated to 5/12 of a sphere with radius of 40cm. For this type of dome, there are six different length struts:
30 X “A” - 8.9cm
30 X “B” - 10.4cm
50 X “C” - 12.4cm
40 X “D” - 12.5cm
20 X “E” - 13.0cm
20 X “F” - 13.2cm
That is a total of 190 struts that add up to 2223cm (73 ft) of material. I used 1x3 (3/4" × 2-1/2") pine lumber for the struts in this dome. To connect the struts, I designed and 3D printed connectors using Autocad. The STL files are available to download at the end of this step. The number of connectors for a 4V 5/12 dome is:
20 X 4-connector
6 X 5-connector
45 X 6-connector
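The strut counts and total length quoted above are easy to sanity-check:

```python
# Check the strut counts and total length for the 4V 5/12 dome.
struts = {  # label: (count, length in cm)
    'A': (30, 8.9), 'B': (30, 10.4), 'C': (50, 12.4),
    'D': (40, 12.5), 'E': (20, 13.0), 'F': (20, 13.2),
}
count = sum(n for n, _ in struts.values())
total_cm = sum(n * length for n, length in struts.values())
print(count, round(total_cm))  # 190 struts, 2223 cm (about 73 ft)
```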
In the next step, I describe how this dome is constructed with the wooden struts and the 3D printed connectors I designed.
Step 3: Constructing Dome With Struts and Connectors
Using the calculations from Domerama for a 4V 5/12 dome, I cut the struts using a circular saw. The 190 struts were labeled and placed in a box after cutting. The 71 connectors (20 four-connectors, 6 five-connectors, and 45 six-connectors) were 3D printed using a Makerbot. The wood struts were inserted into the connectors according to the diagram created by Domerama. I started the construction from the top and moved radially outward.
After all the struts were connected, I removed one strut at a time and added epoxy to the wood and connector. The connectors were designed to have flexibility in how they connected the struts, so it was important to check the symmetry of the dome before adding any epoxy.
Step 4: Laser Cutting and Mounting Base Plates
Now that the skeleton of the dome is constructed, it is time to cut the triangular baseplates. These baseplates are attached to the bottom of the struts, and are used to mount the LEDs to the dome. I initially cut the baseplates out of 5mm (3/16”) thick plywood by measuring the five different triangles that are on the dome: AAB (30 triangles), BCC (25 triangles), DDE (20 triangles), CDF (40 triangles), and EEE (5 triangles). The dimensions of each side and the shape of the triangles were determined using a dome calculator (Domerama) and some geometry. After cutting test baseplates with a jigsaw, I drew the triangle design using Corel Draw, and cut the remaining baseplates with a laser cutter (much faster!). If you do not have access to a laser cutter, you can draw the baseplates onto plywood using a ruler and protractor and cut all of them with a jigsaw. Once the baseplates are cut, the dome is flipped over and the plates are glued to the dome using wood glue.
Step 5: Electronics Overview
Shown in the figure above is a schematic of the electronics for the dome. An Arduino Uno is used for writing and reading signals for the dome. To light up the dome, a RGB LED strip is run over the dome so that an LED is positioned at each one of the 120 triangles. For information on how an LED strip works, check out this instructable. Each LED can be addressed separately using the Arduino, which produces a serial data and clock signal for the strip (see the A0 and A1 pin in schematic). With the strip and these two signals alone, you can have an awesome light up dome. There are other ways to go about writing signals for lots of LED from an Arduino, such as Charlieplexing and shift registers.
In order to interact with the dome, I set up an IR sensor above each LED. These sensors are used to detect when someone’s hand is close to a triangle on the dome. Because each triangle on the dome has its own IR sensor and there are 120 triangles, you will have to do some sort of multiplexing before the Arduino. I decided to use five 24-channel multiplexers (MUX) for the 120 sensors on the dome. Here is an instructable on multiplexing, if you are unfamiliar. A 24-channel MUX requires five control signals. I chose pins 8-12 on the Arduino, so I could do port manipulation (see Step 10 for more information). The outputs of the MUX boards are read in using pins 3-7.
I also included five MIDI outputs on the dome so that it could produce sound (Step 11). In other words, five people can play the dome simultaneously with each output playing a different sound. There is only one TX pin on the Arduino, so five MIDI signals requires demultiplexing. Because the MIDI output is produced at a different time than the IR sensor reading, I used the same control signals.
After all the IR sensor inputs are read into the Arduino, the dome can light up and play sounds however you program the Arduino. I have a few examples in Step 14 of this instructable.
Step 6: Mounting LEDs Onto Dome
Because the dome is so large, the LED strip needs to be cut to place one LED on each triangle. Each LED is glued on the triangle using super glue. On either side of the LED, a hole is drilled through the baseplate for cables to be run through the dome. I then soldered hook-up wire at each contact on the LED (5V, ground, clock, signal) and fed the wires through the baseplate. These wires are cut so that they are long enough to reach the next LED on the dome. The wires are pulled through to the next LED, and the process is continued. I connected the LEDs in a configuration that would minimize the amount of wire required while still making sense for addressing the LEDs using the Arduino later. A smaller dome would eliminate the need for cutting the strip and save a lot of time soldering. Another option is to use separate RGB LEDs with shift registers.
Serial communication to the strip is achieved using two pins (a data and clock pin) from the Arduino. In other words, the data for lighting up the dome is passed from one LED to the next as it leaves the data pin. Here is example code modified from this Arduino forum:
// Make entire dome increase and decrease intensity of single color
#define numLeds 120 // Number of LEDs

// OUTPUT PINS //
int clockPin = A1; // define clock pin
int dataPin = A0;  // define data pin

// VARIABLES //
int red[numLeds];   // Initialize array for LED strip
int green[numLeds]; // Initialize array for LED strip
int blue[numLeds];  // Initialize array for LED strip

// CONSTANT
double scaleA[] = {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1}; // fraction of intensity of LEDs

void setup() {
  pinMode(clockPin, OUTPUT);
  pinMode(dataPin, OUTPUT);
  memset(red, 0, numLeds);
  memset(green, 0, numLeds);
  memset(blue, 0, numLeds);
}

void updatestring(int redA[numLeds], int greenA[numLeds], int blueA[numLeds]) {
  for (int i = 0; i < numLeds; i++) {
    shiftOut(dataPin, clockPin, MSBFIRST, redA[i]);
    shiftOut(dataPin, clockPin, MSBFIRST, greenA[i]);
    shiftOut(dataPin, clockPin, MSBFIRST, blueA[i]);
  }
}

void loop() {
  for (int p = 0; p < 20; p++) // loop for increasing light intensity of dome
  {
    double scale = scaleA[p];
    delay(20);
    for (int i = 0; i < numLeds; i++) // cycle through all LEDs
    {
      red[i] = 255 * scale;
      green[i] = 80 * scale;
      blue[i] = 0;
    }
    updatestring(red, green, blue); // update led strip
  }
}
Step 7: Sensor Mount Design and Implementation
I decided to use IR sensors for the dome. These sensors have an IR LED and receiver. When an object gets in front of the sensor, some IR radiation from the IR LED is reflected towards the receiver. I started this project by making my own IR sensors, which were based off Richardouvina’s instructable. All the soldering took way too long, so I purchased 120 IR sensors from eBay that each produce a digital output. The threshold of the sensor is set with a potentiometer on the board so that the output is high only when a hand is near that triangle.
Each triangle consists of a plywood LED-baseplate, a sheet of diffusive acrylic mounted about 2.5cm above the LED plate, and an IR sensor. The sensor for each triangle was mounted onto a sheet of thin plywood shaped as a pentagon or hexagon depending on the position on the dome (see the figure above). I drilled holes into the IR sensor base to mount the IR sensors, and then connected the ground and 5V pins with wire-wrap wire and a wire-wrap tool (red and black wires). After connecting ground and 5V, I wrapped long wire-wrap wire on each output (yellow), ground, and 5V to run through the dome.
The hexagon or pentagon IR sensor mounts were then epoxied to the dome, right above the 3D printed connectors, so that the wire could run through the dome. By having the sensors above the connectors, I was also able to access and adjust the potentiometers on the IR sensors that control the sensitivity of the sensors. In the next step, I will describe how the outputs of the IR sensors are connected to multiplexers and read into the Arduino.
Step 8: Multiplexing Sensor Output
Because the Arduino Uno has only 14 digital I/O pins and 6 analog input pins and there are 120 sensor signals that must be read, the dome requires multiplexers to read in all the signals. I chose to construct five 24-channel multiplexers, each of which reads 24 of the IR sensors (see the electronics overview figure).
A 24-channel MUX requires five control signals, which I chose to connect to pin 8-12 on the Arduino. All five 24-channel MUX receive the same control signals from the Arduino so I connected wire from the Arduino pins to the 24-channel MUX. The digital outputs of the IR sensors are connected to the input pins of the 24-channel MUX so that they can be read in serially to the Arduino. Because there are five separate pins for reading in all 120 sensor outputs, it is helpful to imagine the dome being split into five separate sections consisting of 24 triangles (check colors of dome in figure).
Using Arduino port manipulation, you can quickly increment the control signals sent by pins 8-12 to the multiplexers. I have attached some example code for operating the multiplexers here:
int numChannel = 24;

void setup() {
  // put your setup code here, to run once:
  DDRB = B11111111; // sets Arduino pins 8 to 13 as outputs (MUX control lines)
}

void loop() {
  for (int i = 0; i < numChannel; i++) {
    // DO SOMETHING WITH MUX INPUTS OR STORE IN AN ARRAY HERE
    PORTB ++; // increment control signals for MUX
  }
}
Step 9: Diffusing Light With Acrylic
To diffuse the light from the LEDs, I sanded transparent acrylic with a circular orbital sander. The sander was moved over both sides of the acrylic in a figure-8 motion. I found this method to be much better than “frosted glass” spray paint.
After sanding and cleaning up the acrylic, I used a laser cutter to cut out triangles to fit over the LEDs. It is possible to cut the acrylic using an acrylic cutting tool or even a jigsaw if the acrylic does not crack. The acrylic was held over the LEDs by 5mm thick plywood rectangles also cut with a laser cutter. These small planks were glued to the struts on the dome, and the acrylic triangles were epoxied onto the planks.
Step 10: Making Music With the Dome Using MIDI
I wanted the dome to be capable of producing sound, so I set up five MIDI channels, one for each subset of the dome. You first need to purchase five MIDI jacks and connect them as shown in the schematic (see this tutorial from Arduino support for more info).
Because there is only one transmit serial pin on the Arduino Uno (pin 2 labeled as the TX pin), you need to de-multiplex the signals being sent to the five MIDI jacks. I used the same control signals (pin 8-12), because MIDI signals are sent at a different time than when the IR sensors are being read into the Arduino. These control signals are sent to an 8-channel demultiplexer so that you control which MIDI jack receives the MIDI signal created by the Arduino. The MIDI signals were generated by the Arduino with the terrific MIDI signal library created by Francois Best. Here is some example code for producing multiple MIDI outputs to different MIDI jacks with an Arduino Uno:
#include <MIDI.h> // include MIDI library

#define numChannel 24 // number of IR sensors per section (one per triangle)
#define numSections 5 // number of sections in dome, 24-channel MUX, and MIDI jacks

int midArr[numSections];    // store whether a note has been pressed by each player
int note2play[numSections]; // store the note to be played if a sensor is touched
int notes[numChannel] = {60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83};
int pauseMidi = 4000; // pause time between MIDI signals (microseconds)

MIDI_CREATE_DEFAULT_INSTANCE();

void setup() {
  DDRB = B11111111; // sets Arduino pins 8 to 13 as outputs (MUX control lines)
  MIDI.begin(MIDI_CHANNEL_OFF);
}

void loop() {
  PORTB = B00000000; // reset MUX control signals
  for (int i = 0; i < numChannel; i++) {
    // read one sensor from each section; the MUX outputs are on pins 3-7
    int arr0r = digitalRead(3);
    int arr1r = digitalRead(4);
    int arr2r = digitalRead(5);
    int arr3r = digitalRead(6);
    int arr4r = digitalRead(7);
    if (arr0r == 0) // sensor on section 0 was blocked
    {
      midArr[0] = 1;           // player 0 has hit a note, set HI so that there is MIDI output for player 0
      note2play[0] = notes[i]; // note to play for player 0
    }
    if (arr1r == 0) // sensor on section 1 was blocked
    {
      midArr[1] = 1;           // player 1 has hit a note
      note2play[1] = notes[i]; // note to play for player 1
    }
    if (arr2r == 0) // sensor on section 2 was blocked
    {
      midArr[2] = 1;           // player 2 has hit a note
      note2play[2] = notes[i]; // note to play for player 2
    }
    if (arr3r == 0) // sensor on section 3 was blocked
    {
      midArr[3] = 1;           // player 3 has hit a note
      note2play[3] = notes[i]; // note to play for player 3
    }
    if (arr4r == 0) // sensor on section 4 was blocked
    {
      midArr[4] = 1;           // player 4 has hit a note
      note2play[4] = notes[i]; // note to play for player 4
    }
    PORTB ++; // increment control signals for MUX
  }
  updateMIDI();
}

void updateMIDI() {
  PORTB = B00000000; // set control pins for MUX low (select MIDI jack 0)
  if (midArr[0] == 1) // player 0 MIDI output
  {
    MIDI.sendNoteOn(note2play[0], 127, 1);
    delayMicroseconds(pauseMidi);
    MIDI.sendNoteOff(note2play[0], 127, 1);
    delayMicroseconds(pauseMidi);
  }
  PORTB ++; // increment MUX (select next MIDI jack)
  if (midArr[1] == 1) // player 1 MIDI output
  {
    MIDI.sendNoteOn(note2play[1], 127, 1);
    delayMicroseconds(pauseMidi);
    MIDI.sendNoteOff(note2play[1], 127, 1);
    delayMicroseconds(pauseMidi);
  }
  PORTB ++; // increment MUX
  if (midArr[2] == 1) // player 2 MIDI output
  {
    MIDI.sendNoteOn(note2play[2], 127, 1);
    delayMicroseconds(pauseMidi);
    MIDI.sendNoteOff(note2play[2], 127, 1);
    delayMicroseconds(pauseMidi);
  }
  PORTB ++; // increment MUX
  if (midArr[3] == 1) // player 3 MIDI output
  {
    MIDI.sendNoteOn(note2play[3], 127, 1);
    delayMicroseconds(pauseMidi);
    MIDI.sendNoteOff(note2play[3], 127, 1);
    delayMicroseconds(pauseMidi);
  }
  PORTB ++; // increment MUX
  if (midArr[4] == 1) // player 4 MIDI output
  {
    MIDI.sendNoteOn(note2play[4], 127, 1);
    delayMicroseconds(pauseMidi);
    MIDI.sendNoteOff(note2play[4], 127, 1);
    delayMicroseconds(pauseMidi);
  }
  midArr[0] = 0;
  midArr[1] = 0;
  midArr[2] = 0;
  midArr[3] = 0;
  midArr[4] = 0;
}
Step 11: Powering the Dome
There are several components that need to be powered in the dome. You will therefore need to calculate the amps drawn from each component to determine the power supply you need to purchase.
The LED strip: I used approximately 3.75 meters of the Ws2801 LED strip, which consumes 6.4W/meter. This corresponds to 24W (3.75*6.4). To convert this to amps, use Power = current*volts (P=iV), where V is the voltage of the LED strip, in this case 5V. Therefore, the current drawn from the LEDs is 4.8A (24W/5V = 4.8A).
The IR sensors: Each IR sensor draws about 25mA, totaling 3A for 120 sensors.
The Arduino: 100mA, 9V
The multiplexers: There are five 24 channel multiplexers that each consist of a 16 channel multiplexer and 8 channel multiplexer. The 8 channel and 16 channel MUX each consumes about 100mA. Therefore, the total power consumption of all the MUX is 1A.
Adding up these components, the total power consumption is expected to be around 9A. The LED strip, IR sensors, and multiplexers have input voltage at 5V, and the Arduino has 9V input voltage. Therefore, I selected a 12V 15A power supply, a 15A buck converter for converting the 12V to 5V, and a 3A buck converter for converting 12V to 9V for the Arduino.
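The power budget above can be checked with a few lines (all values taken from the text):

```python
# Sanity check of the dome's power budget.
V_LED = 5.0
led_watts = 3.75 * 6.4        # meters of Ws2801 strip * W/m = 24 W
led_amps = led_watts / V_LED  # P = i*V -> 4.8 A at 5 V
ir_amps = 120 * 0.025         # 120 sensors * 25 mA = 3 A
mux_amps = 5 * 2 * 0.100      # five 24-ch MUX, each a 16-ch + 8-ch board = 1 A
arduino_amps = 0.100          # Arduino Uno (at 9 V)
total_amps = led_amps + ir_amps + mux_amps + arduino_amps
print(round(total_amps, 1))   # ~9 A, hence the 12 V 15 A supply with headroom
```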
Step 12: Circular Dome Base
The dome rests on a circular piece of wood with a pentagon cut out of the middle for easy access to the electronics. To create this circular base, a 4x6’ sheet of plywood was cut using a wood CNC router. A jigsaw could also be used for this step. After the base was cut, the dome was attached to it using small 2x3” blocks of wood.
On top of the base, I attached the power supply with epoxy and the MUX’s and Buck converters with PCB stand-off spacers. The spacers were attached to the plywood using E-Z Lok thread adapters.
Step 13: Pentagon Dome Base
In addition to the circular base, I also constructed a pentagon base for the dome with a looking-glass window at the bottom. This base and looking window were also made out of plywood cut with a wood CNC router. The sides of the pentagon are made out of wooden planks with one side having a hole in it for the connectors to go through. Using metal brackets and 2x3 block joints, the wooden planks are attached to the pentagon base. A power switch, MIDI connectors, and USB connector are attached to a front panel that I created using a laser cutter. The entire pentagon base is screwed to the circular base described in Step 12.
I installed a window into the bottom of the dome so that anyone can look up into the dome to see the electronics. The looking glass is made out of acrylic cut with a laser cutter and is epoxied to a circular piece of plywood.
Step 14: Programming the Dome
There are endless possibilities for programming the dome. Each cycle of the code takes in the signals from the IR sensors, which indicate the triangles that have been touched by someone. With this information you can color the dome with any RGB color and/or produce a MIDI signal. Here are a few examples of programs that I wrote for the dome:
Color the dome: Each triangle cycles through four colors as it is touched. As the colors change, an arpeggio is played. With this program, you get to color the dome in thousands of different ways.
Dome music: The dome is colored with five colors, each section corresponding to a different MIDI output. In the program, you can choose which notes each triangle plays. I chose to start at middle C at the top of the dome and increase the pitch as the triangles move closer to the base. Because there are five outputs, this program is ideal for having multiple people play the dome simultaneously. Using a MIDI instrument or MIDI software, these MIDI signals can be made to sound like any instrument.
Simon: I wrote a rendition of Simon, the classic memory light-up game. A random sequence of lights is illuminated one at a time over the entire dome. In each turn, the player must copy the sequence. If the player matches the sequence correctly, an additional light is added to the sequence. The high score is stored on one of the sections of the dome. This game is also very fun to play with multiple people.
Pong: Why not play pong on a dome? A ball propagates across the dome until it hits the paddle. When it does, a MIDI signal is produced, indicating the paddle hit the ball. The other player must then direct their paddle along the bottom of the dome so that it hits the ball back.
Step 15: Photos of Completed Dome
Grand Prize in the Arduino Contest 2016
Second Prize in the Remix Contest 2016
Second Prize in the Make it Glow Contest 2016
31 Discussions
3 months ago
Nice job! Actually, I shared your video on our website before discovering this post.
Reply 3 months ago
Thanks for letting me know you are sharing my project. I just visited your website. You are creating some great dome products!
1 year ago
This is brilliant, well done!
I have been trying to come up with a novel approach to interactive LED lighting displays for a while and I am delighted that you have created this and shared it with the community.
It's now on my build list :)
2 years ago
How much % of tax we should pay for the prize
2 years ago
Nice work
2 years ago
Cool :)
Reply 2 years ago
Thanks!
2 years ago
this is damn cool I'm your fan from the day I saw it and a well deserved grand prize :)
Reply 2 years ago
Thanks for your note! It means a lot coming from the creator of the amazing Dot2 LED coffee table.
2 years ago
For the grand prize (congratulations by the way) do you have to pay taxes on the prizes?
Reply 2 years ago
Yes, it looks like you are responsible for paying taxes on any prize you win. The official rules of the competition state:
2 years ago
A well deserved Grand Prize of the Arduino 2016 contest
Well done!
Reply 2 years ago
Thanks! What exciting news. Going to have to pass on the swap though, haha.
Reply 2 years ago
I understand you already have a 3D printer; wanna swap your Prusa MK2 for my Arduino MKR1000?
:D
2 years ago
Congratulations on your grand prize!!you deserve it!!
Reply 2 years ago
Thank you!
2 years ago
Is it possible to build a flat version? Getting 3D-printed stuff is near impossible here.
Reply 2 years ago
maybe 5cm squares in an 8x8 grid
Reply 2 years ago
Absolutely. Here is an instructable by arbalet_project on building a flat version:...
I think you could scale it down to 8x8 with bigger squares.
Reply 2 years ago
Thanks | https://www.instructables.com/id/Interactive-Geodesic-LED-Dome/ | CC-MAIN-2019-30 | refinedweb | 4,222 | 70.23 |
As expected, this program should accept a number until it encounters a 4, but it gives some garbage value. Why?
#include <stdio.h>

int main(void) {
    int a;
    printf("Enter a number: ");
    scanf("%[^4]d", &a);
    printf("You entered: %d\n", a);
    return 0;
}
Replies:
As far as I know, scansets are meant to be used with strings (so the d does not act as an integer conversion specifier here). One way to write it is to read the input into a string and then parse it:
int main(void)
{
int a;
char b[255];
printf("Enter a number: ");
scanf("%254[^4]", b);
a = atoi(b);
printf("You entered: %d\n", a);
return 0;
}
I'm keeping the modified code to a minimum, you'd definitely need some extra checks for input sanity.
To clarify: the 254 prefix limits the amount of data that scanf will capture, so as to not exceed the size of the buffer (strings are terminated with an extra null character, so the read length must be smaller than the actual buffer size).
The scanset works only with characters.
Here is my sample code (though I don't know exactly what you want).
#include <stdio.h>
int main(void) {
char buffer[128];
printf("Enter a number: ");
scanf("%127[^4]", buffer);
printf("You entered: %s\n", buffer);
return 0;
}
The result is,
Enter a number: 12345678
You entered: 123
Additionally, if you want an integer value, use atoi().
Memory management is an important component of resource management in C++, but the story is a lot more versatile. So I want to write about three domains of resource management in C++.
There is automatic memory management in C++ that is quite easy to use. In addition, we have the well-known idioms in C++ that are the basis of automatic memory management. Lastly, C++ offers explicit memory management, in which the user has full power at their disposal. I will follow this structure.
Every modern STL implementation uses the C++ idioms of move semantics, perfect forwarding, and RAII very often. To understand the underlying mechanisms, we have to dig deeper into the details, for example how a function template takes arguments and forwards them identically. The series covers the flavours of memory management in C++ from top to bottom. I start in the next post with automatic memory management using smart pointers.
Go to Leanpub/cpplibrary "What every professional C++ programmer should know about the C++ standard library". Get your e-book. Support my blog.
#include <EBAdvectPatchIntegrator.H>
I have pared down the EBPatchGodunov interface and tried to optimize for minimal memory without completely destroying performance.
Boundary conditions are set via setEBPhysIBC.
weak construction is bad.
Version that leaves out the covered face stuff. This is wrong near the EB but the codes that use it overwrite the EB stuff anyway.
References EBPhysIBCFactory::create(), m_bc, m_domain, m_dx, and m_isBCSet.
References m_advectionVelPtr, m_isVelSet, and m_normalVelPtr.
For when EBFlux is always zero.
This is called by EBAMRNoSubCycle. The insane version that uses cell-centered data holders for face data (the plus-minus stuff) is called internally.
References m_isMaxMinSet, m_maxVal, and m_minVal.
internal functions (all the madness below probably needs to get cleaned up)
This insane version called internally. The primMinu and primPlus stuff are really face centered data but because they *came* from cell-centered data they are left there.
floors if m_isMaxMinSet
floors if m_isMaxMinSet
and this is the *simplified* version
Options for 4th-order slopes and flattening removed.
these exist because special things have to be done for velocity
Referenced by getCurComp(), and setCurComp().
Referenced by getDoingVel(), and setDoingVel(). | http://davis.lbl.gov/Manuals/CHOMBO-SVN/classEBAdvectPatchIntegrator.html | CC-MAIN-2018-34 | refinedweb | 184 | 50.02 |
Atomist software delivery machine (SDM) extension Pack to manage and converge GitHub resources.
See the Atomist documentation for more information on what SDMs are and what they can do for you using the Atomist API for software.
Use the Atomist CLI to create or configure your GitHub SCM provider configuration with Atomist:
# To login and connect to Atomist run: $ atomist config # If you already have an Atomist workspace you can skip the next step: $ atomist workspace create # Finally run the following command to create a GitHub SCM provider: $ atomist provider create
Once you created the SCM provider, you can now start converging it. To do this, install this extension pack into your SDM:
$ npm install @atomist/sdm-pack-rcca-github
Next register the
convergeGitHub pack in your SDM:
import { convergeGitHub } from "@atomist/sdm-pack-rcca-github"; ... sdm.addExtensionPacks( convergeGitHub(), ); ...
This pack supports polling for SCM events against GitHub or GHE.
The following steps install and register the extension in your SDM:
$ npm install @atomist/sdm-pack-rcca-github
Next register the watchGitHub support in your SDM:
import { watchGitHub } from "@atomist/sdm-pack-rcca-github"; ... sdm.addExtensionPacks( watchGitHub({ owner: ["atomist", "atomisthq"], }), ); ...
The configuration can also be provided in the
client.config.json:
{ "sdm": { "watch": { "github": { "token": "<your github token>", "owner": ["atomist", "atomisthq"], "user": false, "interval": 60000, "apiUrl": "" } } } }
Note: This extension only watches GitHub when the SDM is started in local mode
atomist start --local
General support questions should be discussed in the #support channel in the Atomist community Slack workspace.
If you find a problem, please create an issue.
You will need to install Node to build and test this project.
Use the following package scripts to build, test, and perform other development tasks.
Releases are handled via the Atomist SDM. Just press the 'Approve' button in the Atomist dashboard or Slack.
Created by Atomist. Need Help? Join our Slack workspace. | https://npm.taobao.org/package/@atomist/sdm-pack-rcca-github | CC-MAIN-2019-30 | refinedweb | 317 | 52.9 |
Carousel component for Vue.js
vue-agile
The Carousel component for Vue.js is a very simple & touch-friendly component, written in Vue and Vanilla JS (without jQuery dependencies) inspired by Slick. It can be used to create image carousels in variations.
Example
Begin by installing it to your Vue project by running
yarn add vue-agile
Import it in your main file so it can be used globally.
import VueAgile from 'vue-agile'

Vue.use(VueAgile)
Usage
Using vue-agile with background-images, fade effect
<!-- using a set of options to customize -->
<agile :options="myOptions">
  <div class="slide slide--1">
    <img src=".." alt="">
  </div>
  <div class="slide slide--2">
    <img src=".." alt="">
  </div>
  <div class="slide slide--3">
    <img src=".." alt="">
  </div>
</agile>
Every first-level child of
<agile> is a new slide. Check all available options here.
If you are thinking this may help you in your current projects or in the future, take a look at the plugin's repository, available on GitHub. | https://vuejsfeed.com/blog/carousel-component-for-vue-js | CC-MAIN-2019-35 | refinedweb | 164 | 77.03 |
I want to make a Python program that takes a folder name from the input argument, and then renames all its files by adding "_{n}" at the end, where n is the serial number of that file. For example, if the folder "images" contained "images/cat.jpg" and "images/dog.jpg", then after running the command it will have "images/cat_1.jpg" and "images/dog_2.jpg". Sort the files according to last accessed date. I tried the problem partially, as follows:
import os
import sys

# take the folder name from the command-line argument (default: cwd)
path = sys.argv[1] if len(sys.argv) > 1 else os.getcwd()
filenames = next(os.walk(path))[2]
# sort the files by last accessed date
filenames.sort(key=lambda f: os.path.getatime(os.path.join(path, f)))
for n, filename in enumerate(filenames, start=1):
    fname, ext = os.path.splitext(filename)
    old = os.path.join(path, filename)
    new = os.path.join(path, fname + '_' + str(n) + ext)
    os.rename(old, new)
Have you tried something like:

import os

filepath = 'C:/images/'
os.chdir(filepath)
for num, filename in enumerate(os.listdir(os.getcwd()), start=1):
    fname, ext = os.path.splitext(filename)  # splitext keeps the dot in ext
    os.rename(filename, fname + '_%s' % num + ext)
Microsoft Jscript.NET Programming: Datatypes, Arrays, and Strings
- Strongly Typing in JScript .NET
- Basic Datatypes
- Declaring and Typing Arrays
- Using the String Object
- Summary
Introduction
This chapter discusses the declaration of variables in JScript .NET, the correlation of variables with common language runtime (CLR) types, and how the compiler makes intelligent decisions between legacy JScript semantics and new performance-oriented JScript .NET semantics.
The discussion in this chapter moves from the new data typing syntax used in JScript .NET straight to the base language datatypes. A thorough explanation of strongly typing variables is followed by an exploration of how to create arrays. Both the legacy JScript-style array and the newer CLR-style array are discussed. The only other remaining datatype of importance is the string. Strings are the basis for the majority of programs.
Strongly Typing in JScript .NET
Strongly typed variables allow the JScript .NET compiler to use appropriately sized internal data structures when performing operations. Typed data doesn't have to be converted to and from different datatypes when basic operations and assignments occur. It always occupies the appropriate amount of memory space for its size, thus allowing the developer to optimize for size and use smaller datatypes when appropriate. Furthermore, it allows the compiler to remove various checks that were once required when the same variable could at one point be a string and at another point an integer.
How to Strongly Type Variables
So how do you start taking advantage of these new improvements in JScript .NET? You declare variables to give them scope and you type variables so that the compiler uses the optimized operations and removes the type checks during runtime. You declare all variables by using the var keyword. Any additional access modifiers needed to complete the declaration can precede the var keyword. An additional modifier, a custom attribute, can also be applied (both access modifiers and attributes are discussed in Chapter 6, "Creating Classes"). The following example demonstrates how to declare a nontyped variable named UnknownVar:
// Untyped variable declaration in Global Code var UnknownVar;
NOTE
For a complete explanation of the access modifiers and custom attributes, you can jump to Chapter 6, which discusses class members. The following are some of the important modifiers you can learn more about in Chapter 6:
The public keyword makes variables available outside the current scope but does not make the variable global.
The static keyword makes variables persistent after they go out of scope, effectively making the variable a global variable yet limited to being global inside the current scope, without additional modifiers.
The private keyword declares variables as locally scoped variables that are available only to code within the current scope.
The scope-related modifiers are available only from within a class definition. So, for the most part, you won't be using them in the early chapters of this book.
Simply declaring a variable doesn't do anything more than give that variable a scope. If you performed a declaration within a class, the variable would be scoped to that class. If you performed the same declaration within a method, the variable would be scoped to the function level. Because the declaration in the preceding example occurs in global code, it creates a global variable named UnknownVar of type Object. This doesn't provide any performance benefits, and the variable isn't strongly typed. To strongly type a variable, you use the syntax name:type (where name is the name of the variable and type is the variable type). The colon after name tells the compiler that you want to strongly type the variable. The following is an example of strongly typing several variables (notice that whitespace doesn't matter to the compiler):
NOTE
Notice that the introduction of whitespace (that is, space characters, tabs, and carriage returns) in the code doesn't matter to the compiler and doesn't affect the way the compiler parses the source code.
var StringVar:String var IntVar :Int32 var BoolVar: Boolean var ArrayVar : String[]
All these statements create valid variables that are strongly typed. The whitespace between the variable name, colon, and type name doesn't matter because the compiler ignores whitespace. You can make the type name a short name, or you can make it a fully qualified name by appending the namespace to the type name. After you type this code, the compiler will throw warnings if you try to assign invalid values. Assigning to StringVar an object of some type that can't be converted to a string would result in a compiler error, as would assigning a string to IntVar.
Providing Initial Values for Variables
You can initialize variables with default values. You might want to do this to ensure that you have some working values before the code starts operating on them. You can also see some of the compiler errors by creating some initialization expressions where the type of the initializing value is of a different type than the variable to which the value is being assigned. For example, the following example assigns the string constant "Hello" to StringVar and the value 10 to IntVar:
import System;
import System.Collections;

// Valid Initializers
var StringVar:String = "Hello";
var IntVar:Int32 = 10;
var HashVar:Hashtable = new Hashtable();

// Tricky Initializers
var String2Var:String = 23;
var Int2Var:Int32 = "27";

// Invalid initializers
var IntVar:Int32 = "Hello";
The new keyword creates a new instance of a hashtable. The next assignment is rather trickyit assigns a numeric value to String2Var. You would think that this would result in a compiler error, but JScript converts the numeric constant 23 to the string constant "23". Furthermore, the JScript compiler turns the string constant "27" into the numeric constant 27 so that Int2Var contains a numeric value. The next couple initializers throw some errors because you can't assign a string constant that can't be parsed as a number into a numeric variable. Notice that the example does not provide invalid syntax for assigning initializer values to a String object because any type or value can be converted to a string to be assigned to the String object. We'll discuss this further in the section "Using the String Object," later in this chapter.
TIP
Every object in the .NET framework supports the ToString() method, which returns the string representation of an object (which can be a name, a serialized version of the object, or any other textual representation). Therefore, assigning any value or object to a String object does not throw an error in JScript .NET because JScript converts any objects or initializer values into string values before continuing. Note that this can cause strange values to be assigned to string variables and can sometimes cause code to go down a code path you wouldn't normally expect.
Performance and Strongly Typed Variables
The performance benefits of strongly typing variables are extremely obvious when you look at the process of performing operations on each type of code. You can directly operate on strongly typed variables without the extra level of indirection that is required for traditional JScript variables. In JScript, the underlying object currently referenced by a variable is checked for type and then converted, if necessary, to a type that is compatible with the current operation. In JScript .NET code, the operation occurs automatically. If the type is incompatible, an exception is thrown, and this forces you, as the programmer, to preemptively check variable operations or to wrap them in try...catch blocks (discussed in Chapter 8, "Exception Handling") and handle the exceptions. It also means that in well-programmed code, several operations are saved for each variable operation.
A second performance advantage has to do with the JScript compiler and is a direct result of not typing variables. The JScript compiler team assumed that there would be quite a bit of legacy code to deal with, and it wanted to see performance benefits without having to recode all the existing samples and programs. The team decided that if the context of a variable and all assignments to that variable could be determined within a local scope, it could guess the type of a variable. Basically, this means that performance-oriented variables, such as loop variables and counters, are strongly typed if they are declared in a local scope. To declare a variable in local scope, you use the var keyword on the variable, but it doesn't have to be strongly typed because the compiler infers the type. This option is not available for global variables or variables that aren't declared within a particular scope (because the compiler sets them as global variables).
The Flexibility of Native JScript Variables
All the existing flexibility of native JScript variables is still available in JScript .NET. All String variables, for instance, are stored as JScript strings and are converted to CLR strings as needed at runtime. This conversion is completed by some helper functions in the JScript namespace that are called whenever a complex conversion needs to be made. Any variable declared as an Array object is considered to be a JScript array, and any variable declared as a System.Array object is considered to be a CLR array. The automatic conversions are in place, and interaction with the .NET platform is not an issue because the compiler provides services for conversion of all native JScript types to their equivalent CLR types. So, if flexibility and backward compatibility are issues when you're working with variables, or there is some behavior about classic JScript that you loved to use, then you should feel free use it. Just be aware of the performance issues involved and of the new syntaxes that you can take advantage of for creating faster and smaller code. | http://www.informit.com/articles/article.aspx?p=27311&seqNum=2 | CC-MAIN-2017-17 | refinedweb | 1,623 | 50.57 |
#include <MQTTClient.h>
MQTTClient_sslProperties defines the settings to establish an SSL/TLS connection using the OpenSSL library. It covers the following scenarios:
The eyecatcher for this structure. Must be MQTS
The version number of this structure. Must be 0
The file in PEM format containing the public digital certificates trusted by the client.
The file in PEM format containing the public certificate chain of the client. It may also include the client's private key.
If not included in the sslKeyStore, this setting points to the file in PEM format containing the client's private key.
The password to load the client's privateKey if encrypted.
The list of cipher suites that the client will present to the server during the SSL handshake. For a full explanation of the cipher list format, please see the OpenSSL on-line documentation: If this setting is ommitted, its default value will be "ALL", that is, all the cipher suites -excluding those offering no encryption- will be considered. This setting can be used to set an SSL anonymous connection ("aNULL" string value, for instance).
True/False option to enable verification of the server certificate | http://www.eclipse.org/paho/files/mqttdoc/Cclient/struct_m_q_t_t_client___s_s_l_options.html | CC-MAIN-2014-49 | refinedweb | 190 | 64.41 |
#include <sys/eventfd.h>
int eventfd(unsigned int initval, int flags);
eventfd() creates an "eventfd object" that can be used as an event wait/notify mechanism by user-space applications, and by the kernel to notify user-space applications of events.
The following values may be bitwise ORed in flags to change the behaviour of eventfd():
EFD_CLOEXEC (since Linux 2.6.27)
Set the close-on-exec (FD_CLOEXEC) flag on the new file descriptor.
a read(2) returns 8 bytes containing that value, and the counter's value is reset to zero; if the counter is zero, the call either blocks or, for a nonblocking descriptor, fails with EAGAIN.
* The file descriptor is readable (the select(2) readfds argument; the poll(2) POLLIN flag) if the counter has a value greater than 0.
* The file descriptor is writable (the select(2) writefds argument; the poll(2) POLLOUT flag) if it is possible to write a value of at least "1" without blocking.
* If an overflow of the counter value was detected, then select(2) indicates the file descriptor as being both readable and writable, and poll(2) returns a POLLERR event. As noted above, write(2) can never overflow the counter.
On success, eventfd() returns a new eventfd file descriptor. On error, -1 is returned and errno is set to indicate the error.
eventfd() and eventfd2() are Linux-specific.
ties like KAIO (kernel AIO) to signal to a file descriptor that some
operation is complete.
A key point about an eventfd file descriptor is that it can be moni-
tored).):
Parent about to read
Parent read 28 (0x1c) from efd
Program source
",
webmaster@linuxguruz.com | http://www.linuxguruz.com/man-pages/eventfd2/ | CC-MAIN-2018-26 | refinedweb | 226 | 64.81 |
Ceph/Metadata Server
The Ceph metadata server is used to handle mounting a Ceph file system on a Linux client. This is an optional component as Ceph runs well without a file system (as plain object stores or through RBDs), but if the file system is wanted then the metadata server is necessary as well.
Metadata server in a Ceph cluster
Ceph provides a MetaData Server (MDS) which provides a more traditional style of filesystem based on POSIX standards that translates into objects stored in the OSD pool. This is typically where a non-Linux platform can implement client support for Ceph. This can be shared via CIFS and NFS to non-Ceph and non-Linux based systems including Windows. This is also the way to use Ceph as a drop-in replacement for HADOOP. The filesystem component started to mature around the Dumpling release.
Ceph requires all of its servers to be able to see each other directly in the cluster. So this filesystem would also be the point where external systems would be able to see the content without having direct access to the Ceph Cluster. For performance reasons, the user may have all of the Ceph cluster participants using a dedicated network on faster hardware with isolated switches. The MDS server would then have multiple NICs to straddle the Ceph network and the outside world.
As of the Firefly release, there is only one active MDS server at a time. Other MDS servers run in a standby mode to quickly perform a failover when the active server goes down. The cluster will take about 30 seconds to determine whether the active MDS server has failed. This may appear to be a bottleneck for the cluster, but the MDS only does the mapping of POSIX file names to object ids. With an object id, a client then directly contacts the OSD servers to perform the necessary i/o of extents/shards.
Eventually Ceph will allow multiple active MDS servers, dividing the POSIX filesystem namespace with a mapping scheme that distributes the load. | https://wiki.gentoo.org/wiki/Ceph/Metadata_Server | CC-MAIN-2021-43 | refinedweb | 344 | 59.84 |
Your.
The array will always have at least 2 elements1 and all elements will be numbered. The numbers will also all be unique and in ascending order. The numbers could be positive or negative and the first non-consecutive could be either too!
Solution :
Take out the first number in the array, then continue to add one to that number, if one of the summation outcomes is not the same as the next number in the array then the program will return that next number or else the program will return None if no non-consecutive number has been found.
def first_non_consecutive(arr): seed = arr.pop(0) for num in arr: seed += 1 if num != seed: return num return None
Any thoughts about the above solution? Please comment below.
If you like any post on this website, please share on social media to help this site gets more readers!
2 Comments
If they’re consecutive you can just see if any given entry is equal to the first entry plus the index. For an arbitrary array the quickest (?) way would be by a binary search using this comparison.
As a generator:
`(x for x, y in zip(l, range(l[0], l[0]+len(l))) if x != y)` | https://kibiwebgeek.com/find-the-first-non-consecutive-number-with-python/ | CC-MAIN-2021-04 | refinedweb | 208 | 73.07 |
On 2009-04-15 16:44, P.J. Eby wrote: > At 09:51 AM 4/15/2009 +0200, M.-A. Lemburg wrote: >>. > > Up until this point, I've been trying to help you understand the use > cases, but it's clear now that you already understand them, you just > don't care. > > That wouldn't be a problem if you just stayed on the sidelines, instead > of actively working to make those use cases more difficult for everyone > else than they already are. > > Anyway, since you clearly understand precisely what you're doing, I'm > now going to stop trying to explain things, as my responses are > apparently just encouraging you, and possibly convincing bystanders that > there's some genuine controversy here as well. Hopefully, bystanders will understand that the one single use case you are always emphasizing, namely that of Linux distribution maintainers trying to change the package installation layout, is really a rather uncommon and rare use case. It is true that I do understand what the namespace package idea is all about. I've been active in Python package development since they were first added to Python as a new built-in import feature in Python 1.5 and have been distributing packages with package add-ons for more than a decade... For some history, have a look at: Also note how that essay discourages the use of .pth files: """ If the package really requires adding one or more directories on sys.path (e.g. because it has not yet been structured to support dotted-name import), a "path configuration file" named package.pth can be placed in either the site-python or site-packages directory. ... A typical installation should have no or very few .pth files or something is wrong, and if you need to play with the search order, something is very wrong. """ Back to the PEP:. My proposal tries to solve this without adding yet another .pth file like mechanism - hopefully in the spirit of the original Python package idea. 
-- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 15 | https://mail.python.org/pipermail/python-dev/2009-April/088769.html | CC-MAIN-2014-15 | refinedweb | 351 | 63.49 |
AbhiBAN
Search for the k-nearest-neighbors algorithm for the best answer. However, since this was asked in an interview, the following approach would work. You have N coordinates of taxis and a point P(x, y), which is your location. Using a nearest-point search, find the cab closest to P. Now delete that cab and find the next cab closest to you. Do this three more times and you get the five cabs closest to you.
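As a sketch, the repeated nearest-search can be collapsed into one pass with a heap; the function name and sample coordinates below are illustrative, not from the question.

```python
import heapq

def nearest_cabs(cabs, p, k=5):
    # k smallest by squared distance: same result as k rounds of
    # "find nearest, then delete it", but in a single pass
    px, py = p
    return heapq.nsmallest(k, cabs,
                           key=lambda c: (c[0] - px) ** 2 + (c[1] - py) ** 2)

cabs = [(0, 0), (1, 1), (5, 5), (2, 2), (9, 9), (3, 3), (4, 4)]
print(nearest_cabs(cabs, (0, 0)))  # the five cabs closest to the origin
```

Squared distance is enough for ranking, so the square root is skipped.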
I upvoted HJ, but it won't appear as they run in isolation. One will throw a segmentation fault :P
This is not really the question; I suspect the interviewee got confused. The question is more like: a frog can jump one stone or two stones at a time. Can it get to the end? If yes, print all possible paths.
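A short recursive enumeration covers the "print all possible paths" part; stones are assumed to be numbered 0..n.

```python
def frog_paths(n):
    # every sequence of 1- and 2-stone jumps that lands exactly on stone n
    if n == 0:
        return [[]]
    paths = [[1] + p for p in frog_paths(n - 1)]
    if n >= 2:
        paths += [[2] + p for p in frog_paths(n - 2)]
    return paths

print(frog_paths(3))  # [[1, 1, 1], [1, 2], [2, 1]]
```

The number of paths follows the Fibonacci sequence, which is what the counting variant of this question is after.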
It is a DP problem. Sum the entire array. If the sum is even, use the subset-sum algorithm to search for sum/2; if the sum is odd, no solution exists.
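The subset-sum search can be sketched with a set of reachable sums (the classic O(n·sum) DP in disguise):

```python
def can_partition(nums):
    total = sum(nums)
    if total % 2:            # odd total: no equal split exists
        return False
    reachable = {0}          # subset sums achievable so far
    for n in nums:
        reachable |= {s + n for s in reachable}
    return total // 2 in reachable

print(can_partition([1, 5, 11, 5]))  # True  (11 vs 1 + 5 + 5)
print(can_partition([1, 2, 3, 5]))   # False (sum 11 is odd)
```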
You need a windowing approach to solve this problem. The use case is essentially what stream-processing systems like Apache Spark do with windowed operations. For this question, create a min-heap per product id, ordered by timestamp. Whenever you do a read or insert operation, first pop elements from the heap until only entries less than a month old remain, then perform the insert or read.
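A minimal sketch of one product's window, assuming Unix-second timestamps and a 30-day retention (both are assumptions, not from the question):

```python
import heapq

MONTH = 30 * 24 * 3600  # retention window in seconds (assumed)

class ProductWindow:
    def __init__(self):
        self.heap = []  # min-heap of (timestamp, value), oldest on top

    def _evict(self, now):
        # drop everything older than one month before touching the data
        while self.heap and self.heap[0][0] < now - MONTH:
            heapq.heappop(self.heap)

    def insert(self, now, value):
        self._evict(now)
        heapq.heappush(self.heap, (now, value))

    def read(self, now):
        self._evict(now)
        return [v for _, v in self.heap]

w = ProductWindow()
w.insert(0, "a")
w.insert(MONTH + 10, "b")   # this insert evicts the stale "a" first
print(w.read(MONTH + 10))   # ['b']
```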
I don't get the confusion. The total cost of joining is L1 + 2*(L2 + L3 + L4 + ...) + Ln, so you actually need to find the maximum and the second maximum, make them the ends of the rope, and just join the others in between. In my example I assumed L1 and Ln are the two largest. This will be the minimum cost.
I am sure you are not looking for the full answer. The simple approach is to use a regex or split each version string on "." and compare the parts numerically.
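Split-and-compare can be sketched as below (a hedged illustration; real-world version strings with suffixes like "-beta" need more care):

```python
def compare_versions(a, b):
    """Return -1, 0, or 1 for dot-separated numeric version strings."""
    pa = [int(x) for x in a.split('.')]
    pb = [int(x) for x in b.split('.')]
    n = max(len(pa), len(pb))
    pa += [0] * (n - len(pa))     # pad so "1.0" compares equal to "1.0.0"
    pb += [0] * (n - len(pb))
    return (pa > pb) - (pa < pb)  # list comparison is already lexicographic
```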
1. Use Java Serialization on the tree & tree nodes and then deserialize this tree.
2. Write the pre/post traversal of the trees to the disk as text and recreate the tree.
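Option 2, a pre-order traversal with explicit null markers, round-trips cleanly. A Python sketch using `(value, left, right)` tuples as nodes (the representation is my choice for brevity):

```python
def serialize(node):
    """Pre-order token list with '#' marking missing children."""
    if node is None:
        return ['#']
    value, left, right = node
    return [str(value)] + serialize(left) + serialize(right)

def deserialize(tokens):
    """Rebuild the tree by consuming the token stream in pre-order."""
    it = iter(tokens)
    def build():
        t = next(it)
        if t == '#':
            return None
        return (int(t), build(), build())
    return build()
```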
Create a mask with 1-bits from the start to the end of the range and 0s elsewhere, then XOR it with the main vector. XOR flips exactly the masked bits and leaves the rest untouched, giving you the toggled vector.
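With the vector held as a Python integer, the XOR-mask version is a one-liner (the half-open range convention and names are mine):

```python
def toggle_range(bits, start, end):
    """Flip bits[start:end) of an integer-backed bit vector."""
    mask = ((1 << (end - start)) - 1) << start   # 1s exactly in the range
    return bits ^ mask                           # XOR flips only masked bits
```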
1. Write multi threaded code which simulates emails from multiple hosts.
2. Simulate other factors like new users joining, administrative tasks, emails, and spam.
3. Since he talks about distribution, test node failures.
4. Test replication of storage.
5. Test storage of attachments.
6. Test across data centres.
7. Test with timezones. Various timezones have various loads at different times.
8. Test dispatch of advertisements.
IPv6 uses a simplified, fixed-size header (no header checksum, and options are moved to extension headers), which reduces per-packet processing overhead.
It also reduces the number of NATs required.
The answer is a LinkedHashMap, which maintains the order in which elements are inserted. For random access, also keep the keys in an ArrayList: generate a random index between 0 and size and look up the key stored at that index.
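A sketch of the combined structure in Python (Python dicts already preserve insertion order, so the extra key list exists only for O(1) random picks; class and method names are mine):

```python
import random

class RandomAccessOrderedMap:
    """Insertion-ordered map plus O(1) random value selection."""
    def __init__(self):
        self._keys = []   # insertion order, indexable for random access
        self._map = {}
    def put(self, key, value):
        if key not in self._map:
            self._keys.append(key)
        self._map[key] = value
    def get(self, key):
        return self._map[key]
    def random_value(self):
        return self._map[random.choice(self._keys)]
    def ordered_items(self):
        return [(k, self._map[k]) for k in self._keys]
```

Note that O(1) deletion would additionally need the swap-with-last trick on the key list, which this sketch omits.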
Use a Multimap, with the sorted characters of each string as the key.
So for "tac" the key will be "act".
For "atc" the key will also be "act".
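The sorted-key trick in a few lines of Python (a sketch; a Java `Multimap` such as Guava's would look the same in spirit):

```python
from collections import defaultdict

def group_anagrams(words):
    """Bucket words by their sorted-character key, e.g. 'tac' -> 'act'."""
    groups = defaultdict(list)
    for w in words:
        groups[''.join(sorted(w))].append(w)
    return dict(groups)
```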
If it is a doubly linked list, it is trivial: walk one pointer from the front and a second from the back. Time O(n), space O(1).
While almost all the answers are good, the objective of such a question cannot be just the solution. As the interviewer, I would want to see how well corner cases are handled; almost all the answers above fail in some of them.
In-place merge sort might be the expected answer. In reality, in-place merge sort is hard to get right; the easier practical choice is quicksort.
If you want to search on any of the categories, such as item number, author, or title, you need multiple maps:
1. Create an object which encapsulates everything.
2. Create one map per key (title/author/product number) whose value is a reference to that object.
You need not store whole copies of the objects in each map.
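A sketch of the several-maps-one-object idea in Python (field and class names invented for the example):

```python
class Book:
    def __init__(self, item_no, title, author):
        self.item_no, self.title, self.author = item_no, title, author

class Catalog:
    """One set of objects; several key maps holding references, not copies."""
    def __init__(self):
        self.by_item_no = {}
        self.by_title = {}
        self.by_author = {}   # author -> list, since authors repeat
    def add(self, book):
        self.by_item_no[book.item_no] = book
        self.by_title[book.title] = book
        self.by_author.setdefault(book.author, []).append(book)
```

Every map stores the same object reference, so the memory cost is one object plus a few pointers per index.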
Create a BST with the words from the files. Each node in the tree has the structure below:

Node {
    String word;
    String[] fileName;
    int count;
    int[] line;
}

For each word, increment the count and insert the file name and line number into the node structure.
Once the tree is built, remove all nodes whose count is 1.
Since the files are already sorted, all you have to do is an external k-way merge, stopping once you have emitted one million elements. You never need to read any more items from the files.
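The stop-early k-way merge is nearly a one-liner with a heap. A Python sketch over in-memory runs; with real files you would pass open file iterators instead of lists:

```python
import heapq
from itertools import islice

def top_n_from_sorted_runs(runs, n):
    """Merge already-sorted sequences lazily and keep only the first n items."""
    return list(islice(heapq.merge(*runs), n))
```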
Do you even need a data structure for this? The question asks for the timestamp at which traffic was at its maximum: just keep a running max and, whenever it is updated, store the corresponding timestamp in a variable.
Consider what your workload is. If you run multithreaded applications, go for two CPUs; if you mostly do single-threaded computation, a single faster processor is the better choice.
Extend the HashMap class, but internally keep two HashMaps to store the data. Override the get method in your class: check the runtime type of the key passed in, and dispatch the get to the matching internal map.
You can design the system in two ways:
1. Implement virtual memory yourself, using file I/O when you need more memory than is available: swap some of your data out to a file to create space for new objects.
2. Clean memory the way a garbage collector does, removing unwanted objects from time to time. This is inferior to 1, since you run the risk of your program crashing from lack of memory.
You can also use a combination of 1 and 2.
@JZ: this is incorrect; search for CAT in this matrix with your algorithm:
B,C,D,Z
E,A,F,G
J,K,L,T
CAT doesn't exist in this matrix, but your algorithm will return true.
Interesting, but is it correct? Consider searching for CAT in the matrix below:
ACA
AAA
ATD
How will you Hash A ?
@Anon: You could use DFS, dear friend. The point I am making is: don't optimize the DP, but get rid of the 1's; that will also solve the problem.
I have a variation of the program already written; run it.
Here 'F' marks a dead-end path.
This is recursive; convert it to DP if needed.
public class printPossiblePathsInMatrix {

    public static void main(String[] args) {
        char[][] arr = new char[][]{
            {'A','B','F'},
            {'C','D','F'},
            {'E','G','H'}
        };
        possiblePath("", arr, 0, 0);
    }

    // Prints every path from (0,0) to (2,2) that avoids 'F' cells,
    // moving right, down, or diagonally down-right.
    private static void possiblePath(String str, char[][] a, int r, int c) {
        if (r > 2 || c > 2)
            return;
        if (r == 2 && c == 2) {
            str += a[r][c];
            System.out.println("---" + str);
            return;
        }
        if (c + 1 < 3)
            if (a[r][c + 1] != 'F') {
                String s1 = str + a[r][c];
                possiblePath(s1, a, r, c + 1);
            }
        if (r + 1 < 3)
            if (a[r + 1][c] != 'F') {
                String s2 = str + a[r][c];
                possiblePath(s2, a, r + 1, c);
            }
        if (r + 1 < 3 && c + 1 < 3)
            if (a[r + 1][c + 1] != 'F') {
                String s3 = str + a[r][c];
                possiblePath(s3, a, r + 1, c + 1);
            }
    }
}
This is a DP problem. At each cell, think of the path covered so far plus the optimal cost to reach the end from here. The "without recursion" constraint also hints that plain DFS is not the answer. Look at the min-cost-path problem in the DP section on GeeksforGeeks for a variation of this problem.
This is an implementation of the push/pull model. The first time you go online, you pull the data of your online friends; thereafter you push your name into the active maps of all your friends. When their status changes, they push the new status to everyone in their map. Read up on the Observer/Observable design pattern for this.
Use two heaps: a max-heap for the lower half and a min-heap for the upper half. This lets us decide in O(1) which heap a new number belongs in. If the heaps fall out of balance, rebalance them by moving one item across in O(log n).
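The classic running-median pair of heaps, sketched in Python (`heapq` only provides min-heaps, so the lower half stores negated values; the class name is mine):

```python
import heapq

class RunningMedian:
    def __init__(self):
        self.lo = []   # max-heap of the lower half (values negated)
        self.hi = []   # min-heap of the upper half
    def add(self, x):
        heapq.heappush(self.lo, -x)
        heapq.heappush(self.hi, -heapq.heappop(self.lo))  # keep halves ordered
        if len(self.hi) > len(self.lo):                   # rebalance in O(log n)
            heapq.heappush(self.lo, -heapq.heappop(self.hi))
    def median(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2
```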
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

public class PermutationTest {

    public static void main(String[] args) {
        String permString = "ABCDE";
        Set<String> se = permutations(permString);
        Iterator<String> itr = se.iterator();
        System.out.println("Total size of Permutations = " + se.size());
        while (itr.hasNext()) {
            System.out.println("--" + itr.next());
        }
    }

    public static Set<String> permutations(String str) {
        Set<String> se = new HashSet<String>();
        permute(str.toCharArray(), "", se);
        return se;
    }

    // Pick each remaining character in turn, append it to the prefix,
    // and recurse on the rest.
    public static void permute(char[] rem, String str, Set<String> se) {
        if (rem.length == 1) {
            se.add(str + rem[0]);
            return;
        }
        for (int i = 0; i < rem.length; i++) {
            char[] rem2 = getRemainingArray(rem, i);
            permute(rem2, str + rem[i], se);
        }
    }

    // Copy rem without the character at index i.
    private static char[] getRemainingArray(char[] rem, int i) {
        char[] rem2 = new char[rem.length - 1];
        int k = 0;
        for (int j = 0; j < rem.length; j++) {
            if (i != j) {
                rem2[k] = rem[j];
                k++;
            }
        }
        return rem2;
    }
}
Very strange question; what's the big deal? There has to be at least one word, and the sentence must start with it. Traverse from the beginning, adding characters one at a time, and check whether the prefix formed so far is a well-formed word. If not, add more characters; if yes, print this valid word and repeat the process on the remaining sentence.
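One caveat: a purely greedy scan can dead-end when a short word is a prefix of a longer one, so a backtracking variant is safer. A Python sketch (dictionary and names invented):

```python
def segment(sentence, words):
    """Split a spaceless sentence into dictionary words, backtracking on dead ends."""
    if not sentence:
        return []
    for i in range(1, len(sentence) + 1):
        prefix = sentence[:i]
        if prefix in words:
            rest = segment(sentence[i:], words)
            if rest is not None:
                return [prefix] + rest
    return None   # no segmentation exists
```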
It is easy to think this is a largest-number problem, but it is not. For example, for the list (9, 6, 3, 0) the largest number is 9630, which is divisible by 3, but so is 3069. The problem is about taking a set of numbers and creating a subset that is divisible by 3: a subset-sum problem where the subset sum must be divisible by 3.

- Abhi, February 25, 2017
I have run into another problem now: with the code below I am unable to log in. It still says "Access is Denied" even though the username and password are correct, and there are no errors on the console. It looks like I am missing something after the connection.
All I am trying to do here is modify the TODO section so that it runs the computed query against my Oracle database. After connecting and a successful login, it should show the results from the permissions table.
import cgi
import cx_Oracle

print("Content-type: text/html\n")
print("<title>Test</title>")
print("<body><center>")

try:
    # get post data
    form = cgi.FieldStorage()
    name = form['name'].value if 'name' in form else ''
    pwd = form['pwd'].value if 'pwd' in form else ''
    permissions = []
    # query to check password and get permissions

...

    <input type="submit" value="Back to Login">
    </form>
""")
print('</center></body>')
freddyscoming4you (Member)
When To And Not To Use Exceptions?
freddyscoming4you replied to 3dmodelerguy's topic in General and Gameplay Programming

Conceptually speaking, I see exceptions as something that needs to halt execution of your application and NEEDS your application's attention if you're looking for specific exceptions to handle. Return codes, on the other hand, are simply ways to make small talk in your code, so to speak. I find it generally useful to use status enums to let your code know where it is, since exception throwing is in most cases a very slow event. So...

[code]
enum ProcessingStatus
{
    Pass,
    InputIsIncorrectFormat
}

void DoProcess(string input)
{
    switch (ProcessWork(input))
    {
        case ProcessingStatus.Pass:
            // proceed, return, do nothing, call NextStep() or whatever
            break;
        case ProcessingStatus.InputIsIncorrectFormat:
            // either report back and ask for new input, or try to auto-correct and retry
            break;
    }
}

ProcessingStatus ProcessWork(string input)
{
    // Validate the input; on failure return ProcessingStatus.InputIsIncorrectFormat,
    // otherwise carry on and return ProcessingStatus.Pass on success.
    // However, on something critical like a memory leak you may still want to throw:
    // throw new MemoryLeakException("ProcessWork created a memory leak with the following state at the time of the leak");
}
[/code]

To me that is pretty easy to read and understand, and is generally friendly. Granted, that is coming from a C# background, but it's generally very clean.
C vs C++ features
freddyscoming4you replied to freddyscoming4you's topic in General and Gameplay Programming

[quote name='VReality' timestamp='1315469061' post='4858950'] Anyway, the only mention of the video game industry (that I heard during the part of the discussion I sat through) was that it's one of the reasons that C++ is still so relevant. I don't think there's any reason to infer a trend in the industry of not living up to her expectations of C++ programmers. And if anyone doesn't, I don't think it would be fair to say that it's because an antiquated style is preferred in the industry. [/quote]

I wasn't trying to say that antiquated style is preferred or not. I was simply curious whether some of the older features are "better" for highly performant systems like a game engine. Regardless, excellent commentary!
C vs C++ features
freddyscoming4you replied to freddyscoming4you's topic in General and Gameplay Programming

So it's like calculus... excellent. LOL
C vs C++ features
freddyscoming4you posted a topic in General and Gameplay Programming

Listening to the dotnetrocks podcast here: [url=""][/url] Kate talks briefly about how people are actually using C rather than C++, doing things like calling malloc rather than just newing an object or using smart pointers. Are the older methods for these things preferred for game development, or are people just being newbs and not realizing what tools they have at their disposal?
[web] Javascript & PHP
freddyscoming4you replied to AlysiumX's topic in General and Gameplay Programming

JavaScript is inherently insecure. That said, you can implement some schemes to mitigate its insecurities, such as having your script request a decryption key that expires very soon (say, 5 seconds) or is even good for only one use, though that could take more algorithmic math than you may have. The only way to really "secure" your app is to use a browser plug-in that incorporates encrypted communication, like Java. I'm not sure about Flash, but I would think they'd have something like that built in. No matter what, though, you should NEVER take input received by the server from the client and pass it directly to your database. The best way to secure your database is to thoroughly scrub and verify the data you pass into it. I would argue that this single task will take about as long to develop as the game mechanics themselves. Best of luck to you.
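The usual concrete form of "never pass raw client input to your database" is a parameterized query. A minimal sketch using Python's sqlite3 purely for illustration (table and names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, score INTEGER)")
con.execute("INSERT INTO users VALUES ('alice', 10)")

def get_score(conn, name):
    # The ? placeholder keeps user input as data, never as SQL text.
    row = conn.execute("SELECT score FROM users WHERE name = ?", (name,)).fetchone()
    return row[0] if row else None
```

An injection attempt like `"alice' OR '1'='1"` simply fails to match any row instead of rewriting the query.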
[.net] Trying to Implement Generic Method
freddyscoming4you replied to freddyscoming4you's topic in General and Gameplay Programming

[quote name='kunos' timestamp='1315041473' post='4857070'] if you dont know about generics... why would you put that into your code? you can easily code it without. if you really want to learn about generics.. buy a C# book, study it.. implement some single examples and, when you feel confident enough with it you'll have your big chance to use it in your producion code. [/quote]

I started coding that around 12:30 and I was trying to think of a generic way to use interfaces and generics to create a single method that I could use to execute API calls, since they all follow the same pattern, and my mind shouted "GENERICS!" And, so it was, or wasn't. Haha.

[quote name='Zipster' timestamp='1315042836' post='4857077'] I'm not sure I see the purpose of ParseResponse as a factory-like instance method. If you already have an instance of the concrete type you want (i.e. ServerStatusResponse), why would you create a clone? As it stands right now, your generic method creates a default-initialized instance of a particular response, and then promptly throws it away by having it return a new instance of that response. I also don't believe the generic method is necessary here since your code doesn't require concrete type information -- it can just use regular inheritance by having the user pass in their own response object: [/quote]

Thanks for the suggestion. I stripped out all the generics stuff and simply passed in an IAPIResponse object and return its parse method from that. Subsequently I moved the parsing code out of the constructor and put it in the parse method, which returns void. A few hours of sleep and a thought-out suggestion will do wonders.
[.net] Trying to Implement Generic Method
freddyscoming4you posted a topic in General and Gameplay Programming

I have implemented an interface to be used by all classes of a certain type which represent a response to a WebRequest. The interface declares a method whose return type is also the interface, so I can return the parsed response. The parsing of the response is handled in each class's interface implementation, naturally. However, I'm having difficulty implementing this, as I have hardly any experience writing generic methods and Google isn't being much help. Below is my class implementation, or what I have managed to cobble together so far. Can you see what's off? Thanks.

First, the interface declarations:

[code]
using System.Xml;

namespace EVE_API.Interfaces
{
    public interface IAPIResponse
    {
        IAPIResponse ParseResponse(XmlDocument response);
    }
}
[/code]

[code]
using System.Collections.Generic;
using System.Xml;

namespace EVE_API.Interfaces
{
    public interface IEveAPI
    {
        string ApiName { get; }
        Dictionary<string,string> Arguments { get; set; }
    }
}
[/code]

Now, the response class:

[code]
using System;
using System.Xml;
using EVE_API.Interfaces;

namespace EVE_API.APIs.Miscelleneous
{
    public class ServerStatusResponse : IAPIResponse
    {
        private string _version;
        private DateTime _currentTime;
        private bool _open;
        private int _onlinePlayers;
        private DateTime _cachedUntil;

        public string Version { get { return _version; } }
        public DateTime CurrentTime { get { return _currentTime; } }
        public bool Open { get { return _open; } }
        public int OnlinePlayers { get { return _onlinePlayers; } }
        public DateTime CachedUntil { get { return _cachedUntil; } }

        private ServerStatusResponse(XmlDocument responseDocument)
        {
            _version = responseDocument.SelectSingleNode("/eveapi").Attributes["version"].Value;

            DateTime dtTry = DateTime.MinValue;
            DateTime.TryParse(responseDocument.SelectSingleNode("/eveapi/currentTime").Value, out dtTry);
            if (dtTry != DateTime.MinValue)
            {
                _currentTime = dtTry;
            }
            else
            {
                throw new MalformedResponseException("Could not find currentTime in response from SeverStatus API.");
            }

            bool boolTry;
            bool.TryParse(responseDocument.SelectSingleNode("/eveapi/result/open").Value, out boolTry);
            _open = boolTry;

            int intTry;
            int.TryParse(responseDocument.SelectSingleNode("/eveapie/result/onlinePlayers").Value, out intTry);
            if (intTry > int.MinValue)
            {
                _onlinePlayers = intTry;
            }
            else
            {
                throw new MalformedResponseException("Could not find onlinePlayers in response from SeverStatus API.");
            }

            dtTry = DateTime.MinValue;
            DateTime.TryParse(responseDocument.SelectSingleNode("/eveapi/result/cachedUntil").Value, out dtTry);
            if (dtTry != DateTime.MinValue)
            {
                _cachedUntil = dtTry;
            }
            else
            {
                throw new MalformedResponseException("Could not find cachedUntil in response from SeverStatus API.");
            }
        }

        public IAPIResponse ParseResponse(XmlDocument response)
        {
            return new ServerStatusResponse(response);
        }
    }
}
[/code]

And now the class trying to wrap all the above together:

[code]
using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Xml;
using EVE_API.Interfaces;

namespace EVE_API
{
    public class APICaller<T> where T : IAPIResponse
    {
        public APICaller() { }

        public T ExecuteAPICall<T>(IEveAPI api)
        {
            List<string> urlParts = new List<string>();
            urlParts.Add(api.ApiName);
            if (api.Arguments != null)
            {
                urlParts.Add("?");
                var keys = api.Arguments.Keys;
                foreach (string key in keys)
                {
                    urlParts.Add(key);
                    urlParts.Add("=");
                    urlParts.Add(api.Arguments[key]);
                    urlParts.Add("&");
                }
                urlParts.RemoveAt(urlParts.Count - 1);
            }
            string completeUrl = urlParts.ToString();

            WebRequest request = WebRequest.Create(completeUrl);
            WebResponse response = request.GetResponse();

            XmlDocument document = new XmlDocument();
            document.Load(response.GetResponseStream());

            IAPIResponse apiResponse = new (IAPIResponse)T();
            return apiResponse.ParseResponse(document);
        }
    }
}
[/code]
Unity Opinions on C# dynamic
freddyscoming4you replied to Serapth's topic in General and Gameplay Programming

[quote name='Serapth' timestamp='1314899384' post='4856389'] Actually, this is exactly my point. Everything you just said is basically no longer true since 4.0. With the inclusion of dynamic, "dynamic" is now a type, but the actual type is determined at runtime. So you can now have a "var of type dynamic" where the type is determined at runtime by inference if possible, or it pukes if not possible. I have no fault with var, its handy and can make code more (or less) readable, its when you throw dynamic into the mix you present an opportunity for truly typeless code and clueless coders. [/quote]

I wasn't aware of the dynamic type. After the reading I've done, I don't see why anyone would use it in a purely .NET application. From what I can see, the dynamic type itself is for interop purposes only. While you can use it and abuse it, I'm guessing that if you're on a project where such horrible code can be used, then I'd suggest you upgrade to a better place to work.
Unity Opinions on C# dynamic
freddyscoming4you replied to Serapth's topic in General and Gameplay Programming

The point of var is so you can focus on coding. C# isn't nearly as dynamic as you might think: once the type is set by the instantiation line, that's the type, and it can't be changed later on; you will actually get a compiler error. Also, the "right" way to instantiate a var is to either set it right away or use default(type). So if you have a var you want to use as an int, you would say var a = default(int);. To me that's very clear. You would be dumb to declare a var way up in your code and initialize it much later where there could be ambiguity issues. Furthermore, once you set a var, Visual Studio will examine the type you're setting it to, and after the instantiation line, hovering over the variable makes IntelliSense report the correct type in the flyout box.
[.net] DllImport Marshaling Issue
freddyscoming4you replied to BTownTKD's topic in General and Gameplay Programming

You should be able to use the same types from VB to C#. It sounds like one of those "if it ain't broke, don't fix it" situations. One of the reasons the VB code is the way it is may be that the array structures are different and the DLL you're using parses the incoming and outgoing values in a custom way. I would just use the C# equivalents of the types the VB code used.
Unity Opinions on C# dynamic
freddyscoming4you replied to Serapth's topic in General and Gameplay Programming

Using var is ingenious for LINQ. Whether you're using LINQ to SQL or LINQ to Objects, it removes a lot of guesswork about which type you need to cast to/from. I've never had an issue either supporting or writing new code where var messed me up with LINQ. Granted, that's about the only place I've seen it used heavily, but it's treated me pretty swell.
- I seriously got negged for that? Wow... someone needs to lighten up. *preps for another neg on this post. weeeee*
Image with text fields? (is this easy programming)?
freddyscoming4you replied to landriGames's topic in General and Gameplay Programming

[quote name='Lewis_1986' timestamp='1314782344' post='4855801'] @ApochPiQ, seriously the old addage "if you cannot say anything helpful, be silent!" springs to mind because mocking someone and then telling them what they want to do is trivial helps no-one [/quote]

Who pissed in your cheerios? I thought the same thing when I read the OP's question. It sounds very much like an image with text boxes set on top of it.
- [quote name='TTT_Dutch' timestamp='1314762109' post='4855734'] [quote name='Tom Sloper' timestamp='1314761902' post='4855731'] [quote name='TTT_Dutch' timestamp='1314755728' post='4855709']if I can get them to sign a contract that says that the whole game belongs to me and they will recieve a royalty from the sales does that mean its not too late to go back? Because I do believe that they will sign that.[/quote] If they sign it, then you can forget I said it might be too late to expect them to sign it, because, well, because they signed it. [/quote] Alright cool. Now what is the best template contract you think I should use? [/quote] I prefer contracts that use verbiage similar to "I'm in your codes stealin' your rights."
[.net] C# WFA Low FPS when drawing in picturebox
freddyscoming4you replied to reaperrar's topic in General and Gameplay Programming

Pretty much. Your only other option besides using a 3rd-party library is going the unsafe route and copying frame data directly to memory. You could get some speed boosts using native pointers while still being able to switch to and from the managed environment of .NET. It could work.
Fischer alternatives and similar libraries
Based on the "Utility" category.
Alternatively, view Fischer alternatives based on common mentions on social networks and blogs.
SwifterSwift9.8 6.1 L5 Fischer VS SwifterSwift:A handy collection of more than 360 native Swift 3 extensions to boost your productivity.
SwiftGen9.7 8.7 L5 Fischer VS SwiftGenA collection of Swift tools to generate Swift code (enums for your assets, storyboards, Localizable.strings, …)
R.swift9.7 5.4 L3 Fischer VS R.swiftTool to get strong typed, autocompleted resources like images, cells and segues.
SwiftGen-Storyboard9.7 8.7 L5 Fischer VS SwiftGen-StoryboardA tool to auto-generate Swift enums for all your Storyboards, Scenes and Segues constants + appropriate convenience accessors.
Dollar9.3 0.9 L3 Fischer VS Dollara lib similar to Lo-Dash or Underscore in Javascript.
ExSwift9.2 0.0 L2 Fischer VS ExSwifta set of Swift extensions for standard types and classes.
swift-protobuf9.1 7.7 Fischer VS swift-protobufA plugin and runtime library for using Google's Protocol Buffer.
Then9.1 1.1 Fischer VS ThenSuper sweet syntactic sugar for Swift initializers.
Swiftz9.1 0.0 L4 Fischer VS SwiftzFunctional programming in Swift.
EZSwiftExtensions9.0 0.0 L5 Fischer VS EZSwiftExtensionsHow Swift standard types and classes were supposed to work.
DifferenceKit8.8 0.2 Fischer VS DifferenceKit💻 A fast and flexible O(n) difference algorithm framework for Swift collection.
Cache8.7 5.4 L3 Fischer VS CacheNothing but Cache.
Result8.7 0.0 L5 Fischer VS ResultSwift type modelling the success/failure of arbitrary operations.
LifetimeTracker8.6 2.0 Fischer VS LifetimeTrackerLifetimeTracker can surface retain cycle / memory issues right as you develop your application, and it will surface them to you immediately, so you can find them with more ease.
WhatsNewKit8.4 4.0 Fischer VS WhatsNewKitShowcase your awesome new app features.
DeepDiff8.3 1.0 Fischer VS DeepDiffFast diff library.
Closures8.1 0.0 Fischer VS ClosuresSwifty closures for UIKit and Foundation.
Device7.9 0.1 L3 Fischer VS DeviceLight weight tool for detecting the current device and screen size written in swift.
SwiftTweaks7.8 5.5 L4 Fischer VS SwiftTweaksTweak your iOS app without recompiling.
WhatsNew7.8 0.0 Fischer VS WhatsNewShowcase new features after an app update similar to Pages, Numbers and Keynote.
RandomKit7.7 0.0 L2 Fischer VS RandomKitRandom data generation in Swift.
AwesomeCache7.6 0.0 L5 Fischer VS AwesomeCachemanage cache easy in your Swift project.
SwiftLinkPreview7.6 5.0 L4 Fischer VS SwiftLinkPreviewIt makes a preview from an url, grabbing all information such as title, relevant texts and images.
Codextended7.5 0.0 Fischer VS CodextendedExtensions giving Codable API type inference super powers.
Popsicle7.4 0.0 L3 Fischer VS PopsicleDelightful, extensible Swift value interpolation framework.
protobuf-swift7.3 0.0 L1 Fischer VS protobuf-swiftProtocolBuffers for Swift.
PinpointKit7.3 5.5 L5 Fischer VS PinpointKitAn open-source iOS library in Swift that lets your testers and users send feedback with annotated screenshots and logs using a simple gesture.
Sugar7.2 1.4 L5 Fischer VS SugarSomething sweet that goes great with your Cocoa.
SwiftyJSONAccelerator7.1 0.0 L4 Fischer VS SwiftyJSONAcceleratorOSX app to generate Swift 3 code for models from JSON.
Money7.1 0.0 L4 Fischer VS MoneyCurrency formatter in Swift.
Runes6.8 0.9 L5 Fischer VS RunesFunctional operators for Swift
Highlighter6.7 0.0 Fischer VS Highlighter
Compass Fischer VS CompassCompass helps you set up a central navigation system for your application.
ReadabilityKit6.6 0.0 Fischer VS ReadabilityKitPreview extractor for news, articles and full-texts in Swift
Playbook6.6 6.6 Fischer VS Playbook📘A library for isolated developing UI components and automatically snapshots of them.
ObjectiveKit6.5 0.0 L5 Fischer VS ObjectiveKitSwift-friendly API for Objective C runtime functions.
PDFGenerator6.3 0.0 L2 Fischer VS PDFGeneratorA simple Generator of PDF in Swift. Generate PDF from view(s) or image(s).
LlamaKit6.1 0.0 L5 Fischer VS LlamaKitCollection of must-have functional Swift tools.
Delegated5.9 2.8 Fischer VS DelegatedClosure-based delegation without memory leaks.
SwiftRandom5.8 0.0 L5 Fischer VS SwiftRandomA tiny generator of random data for swift.
Carlos5.7 5.6 L2 Fischer VS CarlosA simple but flexible cache.
Bow5.6 6.3 Fischer VS BowCompanion library for Typed Functional Programming.
Pythonic.swift5.6 0.0 L2 Fischer VS Pythonic.swiftPythonic tool-belt for Swift: a Swift implementation of selected parts of Python standard library.
Curry5.6 1.3 Fischer VS CurrySwift implementations for function currying.
Solar5.4 0.0 L4 Fischer VS SolarCalculate sunrise and sunset times given a location.
SwiftyUtils5.3 2.3 L5 Fischer VS SwiftyUtilsAll the reusable code that we need in each project.
Prototope5.2 0.0 L5 Fischer VS PrototopeSwift library of lightweight interfaces for prototyping, bridged to JS.
AppVersionMonitor5.1 0.0 Fischer VS AppVersionMonitorMonitor iOS app version easily.
Prelude5.1 0.0 L5 Fischer VS PreludeSwift µframework of simple functional programming tools.
Butterfly4.8 0.0 L5 Fischer VS Butterfly
README
Deprecated
This project is no longer in development. I am currently developing a chess engine, Hexe. It is written in Rust, which is very similar to Swift in many ways. There also exists Hexe.swift, a Swift wrapper for Hexe.
Sage is not a chess engine; it's a move generator. Hexe, on the other hand, is able to both generate moves and evaluate them.
Sage is a cross-platform chess library for Swift.
Development happens in the `develop` branch.
- Build Status
- Features
- Installation
- Usage
- Donation
- License
Build Status
Features
- [x] Chess game management
- [x] Chess board structuring
- [x] Move generation / validation
- [x] En passant and castling
- [x] Pawn promotions
- [x] FEN for games and boards
- [x] PGN parsing and exporting
- [x] Documentation
Installation
Compatibility
- Platforms:
- macOS 10.9+
- iOS 8.0+
- watchOS 2.0+
- tvOS 9.0+
- Linux
- Xcode 7.3 and 8.0
- Swift 2.2 and 3.0
Install Using Swift Package Manager
The Swift Package Manager is a decentralized dependency manager for Swift.
Add the project to your `Package.swift`:
```swift
import PackageDescription

let package = Package(
    name: "MyAwesomeProject",
    dependencies: [
        .Package(url: "", majorVersion: 2)
    ]
)
```
Import the Sage module.
import Sage
Install Using CocoaPods
CocoaPods is a centralized dependency manager for Objective-C and Swift. Go here to learn more.
Add the project to your Podfile.
```ruby
use_frameworks!

pod 'Sage', '~> 2.0.0'
```
If you want to be on the bleeding edge, replace the last line with:
pod 'Sage', :git => ''
Run `pod install` and open the `.xcworkspace` file to launch Xcode.
Import the Sage framework.
import Sage
Install Using Carthage
Carthage is a decentralized dependency manager for Objective-C and Swift.
Add the project to your Cartfile.
github "nvzqz/Sage"
Run `carthage update` and follow the additional steps in order to add Sage to your project.
Import the Sage framework.
import Sage
Install Manually
Download and drop the `/Sources` folder into your project.
Congratulations!
Usage
Game Management
Running a chess game can be as simple as setting up a loop.
```swift
import Sage

let game = Game()

while !game.isFinished {
    let move = ...
    try game.execute(move: move)
}
```
Move Execution
Moves for a `Game` instance can be executed with `execute(move:)` and its unsafe (yet faster) sibling, `execute(uncheckedMove:)`.
The `execute(uncheckedMove:)` method assumes that the passed move is legal. It should only be called if you absolutely know this is true. Such a case is when using a move returned by `availableMoves()`. Otherwise use `execute(move:)`, which checks the legality of the passed move.
Move Generation
Sage is capable of generating legal moves for the current player with full support for special moves such as en passant and castling.
- `availableMoves()` will return all moves currently available.
- `movesForPiece(at:)` will return all moves for a piece at a square.
- `movesBitboardForPiece(at:)` will return a `Bitboard` containing all of the squares a piece at a square can move to.
Move Validation
Sage can also validate whether a move is legal with the `isLegal(move:)` method for a `Game` state.

The `execute(move:)` family of methods calls this method, so it would be faster to execute the move directly and catch any error from an illegal move.
Undo and Redo Moves
Move undo and redo operations are done with the `undoMove()` and `redoMove()` methods. The undone or redone move is returned.

To just check what moves are to be undone or redone, the `moveToUndo()` and `moveToRedo()` methods are available.
Promotion Handling
The `execute(move:promotion:)` method takes a closure that returns a promotion piece kind. This allows the app to prompt the user for a promotion piece or perform any other operations before choosing a promotion piece kind.
try game.execute(move: move) { ... return .queen }
The closure is only executed if the move is a pawn promotion. An error is thrown if the promotion piece kind cannot promote a pawn, such as with a king or pawn.
A piece kind can also be given without a closure. The default is a queen.
try game.execute(move: move, promotion: .queen)
Pretty Printing
The
Board and
Bitboard types both have an
ascii property that can be used
to print a visual board.
let board = Board() board.ascii // +-----------------+ // 8 | r n b q k b n r | // 7 | p p p p p p p p | // 6 | . . . . . . . . | // 5 | . . . . . . . . | // 4 | . . . . . . . . | // 3 | . . . . . . . . | // 2 | P P P P P P P P | // 1 | R N B Q K B N R | // +-----------------+ // a b c d e f g h board.occupiedSpaces.ascii // +-----------------+ // 8 | 1 1 1 1 1 1 1 1 | // 7 | 1 1 1 1 1 1 1 1 | // 6 | . . . . . . . . | // 5 | . . . . . . . . | // 4 | . . . . . . . . | // 3 | . . . . . . . . | // 2 | 1 1 1 1 1 1 1 1 | // 1 | 1 1 1 1 1 1 1 1 | // +-----------------+ // a b c d e f g h
Forsyth–Edwards Notation
The
Game.Position and
Board types can both generate a FEN string.
let game = Game() game.position.fen() // rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1 game.board.fen() // rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR
They can also be initialized from a FEN string.
assert(Board(fen: game.board.fen()) == game.board) assert(Game.Position(fen: game.position.fen()) == game.position)
Iterating Through a Board
The
Board type conforms to
Sequence, making iterating through its spaces
seamless.
for space in Board() { if let piece = space.piece { print("\(piece) at \(space.square)") } }
Squares to Moves
Sequence and
Square have two methods that return an array of moves that go
from/to
self to/from the parameter.
[.a1, .h3, .b5].moves(from: .b4) // [b4 >>> a1, b4 >>> h3, b4 >>> b5] [.c3, .d2, .f1].moves(to: .a6) // [c3 >>> a6, d2 >>> a6, f1 >>> a6] Square.d4.moves(from: [.c2, .f8, .h2]) // [c2 >>> d4, f8 >>> d4, h2 >>> d4] Square.a4.moves(to: [.c3, .d4, .f6]) // [a4 >>> c3, a4 >>> d4, a4 >>> f6]
Playground Usage
To use
Sage.playground, first open
Sage.xcodeproj and build the OS X target.
You can then use the playground from within the project.
Board Quick Look
Board conforms to the
CustomPlaygroundQuickLookable protocol.
Donation
I work on this in my free time and do my best to make it as great as it can be. If you want to help me keep pushing out awesome libraries like this, a donation would be greatly appreciated. :smile:
License
Sage is published under version 2.0 of the Apache License.
*Note that all licence references and agreements mentioned in the Fischer README section above are relevant to that project's source code only. | https://swift.libhunt.com/fischer-alternatives | CC-MAIN-2021-17 | refinedweb | 1,901 | 50.63 |
Why use enums?
Start with a real time scenario of using enums:
Properties of enums:
Start with a real time scenario of using enums:
package test; import test.GradeTest.Grade; public class GradeTest { public enum Grade { A, B, C, D, F, INCOMPLETE }; public static void main(String[] args){ Student student1 = new Student("John"); Student student2 = new Student("Ben"); student1.setGrade(Grade.B); student2.setGrade(Grade.INCOMPLETE); System.out.println(student1); System.out.println(student2); } } class Student { private String name; private Grade grade; public Student(String name){ this.name = name; } public void setGrade(Grade grade) { this.grade = grade; } public Grade getGrade() { return grade; } public void setName(String name) { this.name = name; } public String getName() { return name; } @Override public String toString() { return "Student: "+name+" got grade "+grade.toString(); } }
Properties of enums:
- enums are declared using enum keyword.
- enums extends java.lang.Enum.
- java.lang.Enum is an abstract class. This is the implicit base class for all enum types.
It is declared as follows:
public abstract class Enum extends Object implements Comparable, SerializableThis clearly means that enums are comparable and serializable implicitly.
- Enumerated types aren't integers.
Each declared value is an instance of the enum class itself; this ensures type-safety and allows
for even more compile-time checking.
- Enums have no public constructor.
- Enum values are public, static, and final.
- The enum itself is effectively final, and so it cannot be subclassed.
In fact, the specification says that you are not allowed to declare an enum as final or abstract,
as the compiler will take care of those details.
- When declared inside a class (like the example above) it becomes a final static inner class.
This explains why we needed to import test.GradeTest.Grade within the same program. (the same goes for inner classes).
Also if you check the generated class files, you will notice that there is a GradeTest$Grade.class file.
Note: Java doesnot have static (top-level) classes but has static inner classes (also known as static member classes).
More here
- As enum is static, you cannot access surrounding classes instance variables.
If enum is defined outside the class.
Then we see that there is no need for the import. (Now after compiling, there is a Grade.class file)
package test; enum Grade { A, B, C, D, F, INCOMPLETE }; public class GradeTest { /*No changes*/ } class Student { /*No changes*/ }
- Enum values can be compared with == or equals().
- Enums override toString().
The toString() method on an enumerated type returns the name of the value.
Grade.A.toString() returns A.
- Enums provide valueOf() method.
The final static valueOf() method internally calls toString().
Grade.valueOf("A") returns A.
- Enums define a final instance method named ordinal().
oridinal() returns the integer position of each enumerated value, starting at zero, based on
the declaration order in the enum.
enum Grade { A, B, C, D, F, INCOMPLETE; public String toString() { return "Name of enum: "+this.name()+"; "+"Ordinal of enum: "+this.ordinal(); } }; public class GradeTest { public static void main(String[] args){ System.out.println(Grade.A.toString()); System.out.println(Grade.valueOf("A")); } }Name of enum: A; Ordinal of enum: 0
Name of enum: A; Ordinal of enum: 0
- Enums define a values() method.
values() return an array of the enum type. So, values() allows for iteration over the values of an enum.
prints
for(Grade grade : Grade.values()) { System.out.println(grade.name()); }
A
B
C
D
F
INCOMPLETE
- Enum constructor
By default, enums do not require you to give constructor definitions and
their default values is always represented by string used in declaration.
You MUST use private constructors when you override the default constructor.
Each enum constant corresponds to an enum object of that given enum type.
enum Grade { A(5), B(4), C(3), D(2), F(1), INCOMPLETE(-1); private int gpa; private Grade(int gpa){ this.gpa = gpa; } public int getGpa(){ return gpa; } };
In our example when the enum class Grade is initialized, the constructors are called and
6 objects created.
- Any method added to an enum are implicitly static.
enum Grade { A(5), B(4), C(3), D(2), F(0), INCOMPLETE(-1); private int gpa; private String comment; private Grade(int gpa){ this.gpa = gpa; } public int getGpa(){ return gpa; } public String getComment() { return comment; } public void setComment(String comment) { this.comment = comment; } }; System.out.println(Grade.A); Grade.A.setComment("You rock! Keep rocking!!"); System.out.println(Grade.A.getComment()); Grade grade1 = Grade.A; grade1.setComment("hihi"); Grade grade2 = Grade.A; grade2.setComment("hoho"); if(grade1==grade2){ System.out.println("grade1==grade2"); } if(grade1==Grade.A){ System.out.println("grade1==Grade.A"); }
- Enums work with switchs
Prior to Java 1.4, switch only worked with int, short, char, and byte values.
Grade grade = Grade.A; switch(grade) { case A: System.out.println("You got top grade"); break; default : System.out.println("Die. The rest of you"); break; }
- Maps of Enums
- Sets of Enums
- Interfaces with Enums
interface GiftMachine { public String sendGift(); } enum Grade implements GiftMachine{ A(5), B(4), C(3), D(2), F(0), INCOMPLETE(-1); private int gpa; private Grade(int gpa){ this.gpa = gpa; } public int getGpa(){ return gpa; } @Override public String sendGift() { System.out.println("sending Gift"); if(this.equals(Grade.A)){ return "You get 10 million dollars"; } return "boo... you get nothing"; } };
- Value specific class bodies
It means is that each enumerated value within a type can define value-specific methods.
This cannot be done exclusively for a specific type.
Instead the method is declared abstract for the enum and each type defines their own implementation.
What happens is, 6 anonymous class definitions are created and instantiated.
And now each type, say "A" now refers to this instance.
If you check your class files, you will find 6 new files: Grade$1.class....Grade$6.class.
enum Grade { A(5){ public void doSomething(){ System.out.println("promote this guy"); } }, B(4) { public void doSomething(){ //don't do anything } }, C(3) { public void doSomething(){ //don't do anything } }, D(2) { public void doSomething(){ //don't do anything } }, F(0) { public void doSomething(){ //don't do anything } }, INCOMPLETE(-1) { public void doSomething(){ System.out.println("fail this guy"); } }; private int gpa; private Grade(int gpa){ this.gpa = gpa; } public int getGpa(){ return gpa; } public abstract void doSomething(); }; Grade grade = Grade.A; grade.doSomething();
- Manually define a custom enum
You cannot do this.
Compiler stops you.
class MyEnum extends Enum { }
- Extending an enum
You cannot do this.
Compiler stops you.
enum StudentGrade extends Grade { }
- Check if an object is an instance of enum or class - using Class.isEnum()
Grade grade = Grade.A; System.out.println(grade.getClass().isEnum());
- Get all existing enum constants from a enum instance - using Class.getEnumConstants();
Grade grade = Grade.A; Grade[] allgrades = grade.getClass().getEnumConstants(); for(Grade g : allgrades) { System.out.println(g.name()); }
- Why is enum declared in the API as follows
public abstract class Enum<e extends Enum<E>> extends Object implements Comparable
, Serializable
Why is generics used here?
A detailed answer is here
But if you need a consise explanation here it is.
abstract class Foo<subclassoffoo extends Foo<SubClassOfFoo>> { /** subclasses are forced to return themselves from this method */ public abstract SubClassOfFoo subclassAwareDeepCopy(); } class Bar extends Foo
{ public Bar subclassAwareDeepCopy() { Bar b = new Bar(); // ... return b; } } Bar b = new Bar(); Foo f = b; Bar b2 = b.subclassAwareDeepCopy(); Bar b3 = f.subclassAwareDeepCopy(); // no need to cast, return type is Bar
The trick going on here is:
Any subclass of Foo must supply a type argument to Foo.
That type argument must actually be a subclass of Foo.
Subclasses of Foo (like Bar) follow the idiom that the type argument they supply to Foo is themselves.
Foo has a method that returns SubClassOfFoo. Combined with the above idiom,
this allows Foo to formulate a contract that says "any subclass of me must implement subclassAwareDeepCopy() and they
must declare that it returns that actual subclass".
Now,
E is used in the return type of getDeclaringClass(), and as an argument to compareTo().
java.lang.Enum is declared as Enum<e extends Enum<E>>.
Which means you can write code like the following that
a) doesn't need to cast and
b) can use methods defined in Enum in terms of the concrete enum subclass.
Rank r = Rank.ACE; Suit s = Suit.HEART; r.compareTo(s); // syntax error, argument must be of type Rank Rank z = Enum.valueOf(Rank.class, "TWO");
- Modify enums during runtime
Older solution: niceideas.ch
A little improved solution: javaspecialists
Note that both solutions use a little bit of hacking using the sun.reflect package. These packages though included in the JDK is not accessible by default to the program. Reason being that these are used internally by JDK and are not part of the supported, public interface.
Oracle explains it further.
A Java program that directly calls into sun.* packages is not guaranteed to work on all Java-compatible platforms. In fact, such a program is not guaranteed to work even in future versions on the same platform.
To make it work, if you are using eclipse then open Java build path -> Libraries -> Expand JRE System Library -> Double click Access rules -> Added access.
Hi John,
In example 25 we could just write
Grade[] allgrades = Grade.class.getEnumConstants();
to get the same result.
Point 12, "The final static valueOf() method internally calls toString()". This is incorrect, it calls name(). | http://www.developers-notebook.info/2014/04/java-enums-tutorial.html | CC-MAIN-2019-13 | refinedweb | 1,561 | 59.4 |
Hey there, welcome to part 4! Today we’ll learn how to mock. Mocking is a process where you create a fake instance of a real class, and test against it. This is so that, you do not have to worry about the real functionality of external dependencies inside a class. This makes unit testing a lot easier and reliable.
Although PHPUnit does have mocking capabilities, it is not as full fledged as that of Mockery’s (). We’ll be using Mockery for all our mocking needs. Another good thing is that Laravel ships with Mockery by default. We can get started straight away without any installation or configuration :)
First, let’s write some sample code that we’ll use. For the purpose of making it simple, we’ll just make a wrapper class. A wrapper class called “Math” that calls the “Calculate” class. The same “Calculate” class that we made in the previous episode. Let’s make the file with the path “app/Math.php”.
Here, on line 16, we require a Calculate class in the same namespace “App”.
Then we assign it to the same object instance on line 18.
Remember to always opt for dependency injections through the constructor instead of creating them through the “new” keyword inside the methods. This makes testing a lot easier when we make the mocks. Yes, you can still mock the “new” keyword instantiation using Mockery, but it’s almost always a bad idea. Another one is statics, it is best to avoid static calls and instead use their equivalent classes through constructor. If you’re using Laravel framework, you can always check the facade class reference to see what class you can use instead of the static calls. The facade class refrence is at:
Line 30 is where we call the areaOfSquare method of the dependency ($this->calculate).
So how would we go about testing this class? Here’s how we would do it:
Let’s go ahead and add this file to ‘/tests/Unit’ folder.
Let’s run this test on the command prompt:
Great, 1 test passed with 3 assertions. Two assertions are the same as before on line 27 and 28. And one more new assertion is that of Mockery on line 19.
On line 5 we declare that we will use Mockery class with reference ‘m’.
Line 6 is a new change. Instead of using the default PHPUnit TestCase, here we use Mockery’s TestCase. This is so that Mockery can carry out Mockery specific assertion verification and cleanup the process after each test call.
For readers who have used Mockery before, you may be confused. Previously you’d have to run m::close() on tearDown() method for each test class. This has changed since Mockery v1.0.0 . You do not need to do that if you instead extend the Mockery’s TestCase class or use it’s trait. More info regarding this here: ()
And now something completely different. A picture of a relaxing red panda. Hey, we all need breaks. The reader deserves one and so does the writer. :D
Aww, isn’t it cute? :) I hope you don’t feel like the panda currently. Snoozy mode. Haha, we’ve still a bit more to go. Ahem, now back to what we were doing... Uhh, what was it? Oh yes, mocking objects left and right! Here we gooo..
Line 12 is where we make a mock object having the namespace of “App\Calculate”. This namespace has to be same as the original “Calculate” class, or else it will throw an error.
Then on line 14 we pass that newly created mock object to the new instance of Math class.
Now on line 19, is where the Mockery specific assertion begins. Now, we assert that the calculate class should receive the ‘areaOfSquare’ method call and we’ll return 4 when it does. And it should only be called once throughout the test execution, or it will fail. If you want it to run twice you can do ->twice() or times({number}) for any number of times.
There are different ways and techniques to declare expectations according to your need. I encourage you to check the official documentation for the full reference at
That was it for the introduction and usage of mocks. Hurray! Now we know how to use mocks! We can now mock them pesky dependencies left and right as we please. :D There are still a lot more to learn regarding mocks, but what we have is enough for our use case. We will be using it extensively on the next episodes as we go on to tackle TDD.
On the next episode, we will go and learn about Integration tests. The tests that do not deal with mocks, but rather call the real implementations.
If you have any questions or queries, please leave them below on the comment section.
Stay tuned. Don’t forget to give some claps to this article! And please subscribe to get notifications to new episodes ;).
| https://hackernoon.com/php-test-driven-development-part-4-enter-the-mock-106b4fdedd00 | CC-MAIN-2019-35 | refinedweb | 838 | 75.4 |
$ cnpm install hyperdrive-daemon
The Hyperdrive daemon helps you create, share, and manage Hyperdrives through a persistent process running on your computer, without having to deal with storage management or networking configuration.
It provides both a gRPC API (see
hyperdrive-daemon-client) for interacting with remote drives, and an optional FUSE interface for mounting drives as directories in your local filesystem.
~/.hyperdrive/storagedirectory.
hyperdriveCLI supports a handful of commands for managing the daemon, creating/sharing drives, getting statistics, and augmenting the FUSE interface to support Hyperdrive-specific functions (like mounts).
Note: The daemon CLI currently requires Node 12 or greater
Temporary Note: We're working out a segfault issue that's causing the daemon to fail with Node 14. If you're on 14, check that issue for updates, but for now try using 12 or 13.
npm i hyperdrive-daemon -g
After installing/configuring, you'll need to start the daemon before running any other commands. To do this, first pick a storage directory for your mounted Hyperdrives. By default, the daemon will use
~/.hyperdrive/storage.
❯ hyperdrive start Daemon started at
If you want to stop the daemon, you can run:
❯ hyperdrive stop The Hyperdrive daemon has been stopped.
After it's been started, you can check if the daemon's running (and get lots of useful information) with the
status command:
❯ hyperdrive status The Hyperdrive daemon is running: API Version: 0 Daemon Version: 1.7.15 Client Version: 1.7.6 Schema Version: 1.6.5 Hyperdrive Version: 10.8.15 Fuse Native Version: 2.2.1 Hyperdrive Fuse Version: 1.2.14 Holepunchable: true Remote Address: 194.62.216.174:35883 Uptime: 0 Days 1 Hours 6 Minutes 2 Seconds
The daemon exposes a gRPC API for interacting with remote Hyperdrives.
hyperdrive-daemon-client is a Node client that you can use to interact with the API. If you'd like to write a client in another language, check out the schema definitions in
hyperdrive-schemas
Hypermount provides an gRPC interface for mounting, unmounting, and providing status information about all current mounts. There's also a bundled CLI tool which wraps the gRPC API and provides the following commands:
hyperdrive fuse-setup
Performs a one-time configuration step that installs FUSE. This command will prompt you for
sudo.
hyperdrive start
Start the Hyperdrive daemon.
Options include:
--bootstrap ['host:port', 'host:port', ...] // Optional, alternative bootstrap servers --storage /my/storage/dir // The storage directory. Defaults to ~/.hyperdrive/storage --log-level info // Logging level --port 3101 // The port gRPC will bind to --memory-only // Run in in-memory mode --foreground // Do not launch a separate, PM2-managed process
hyperdrive status
Gives the current status of the daemon, as well as version/networking info, and FUSE availability info.
hyperdrive stop
Stop the daemon.
If you're on a system that doesn't support FUSE, or you just don't want to bother with it, the CLI provides the
import and
export commands for moving files in and out of Hyperdrives.
To import a directory into a new Hyperdrive, you can run
import without specifying a key:
❯ hyperdrive import ./path/to/directory Importing path/to/directory into aae4f36bd0b1a7a8bf68aa0bdd0b93997fd8ff053f4a3e816cb629210aa17737 (Ctrl+c to exit)... Importing | ======================================== | 100% | 3/3 Files
The command will remain running, watching the directory for any new changes, but you can always stop it with
Ctrl+c
import will save a special file called
.hyperdrive-import-key inside the directory you uploaded. This makes it easier to resume a previous import later, without any additional arguments.
Using the command above as an example,
hyperdrive import path/to/directory subsequent times will always import into drive
aae4f36bd0b1a7a8bf68aa0bdd0b93997fd8ff053f4a3e816cb629210aa17737.
hyperdrive export is just the inverse of
import: Given a key it will export the drive's contents into a directory:
❯ hyperdrive export aae4f36bd0b1a7a8bf68aa0bdd0b93997fd8ff053f4a3e816cb629210aa17737 Exporting aae4f36bd0b1a7a8bf68aa0bdd0b93997fd8ff053f4a3e816cb629210aa17737 into (my working directory)/aae4f36bd0b1a7a8bf68aa0bdd0b93997fd8ff053f4a3e816cb629210aa17737 (Ctrl+c to exit)... Exporting | ======================================== | 100% | 5/5 Metadata Blocks | 0 Peers
Unless an output directory is specified,
export will store files in a subdirectory with the drive's key as its name.
As with
import,
export will store a special file which lets you resume exports easily (just
cd into your previous output directory and run
hyperdrive export), and it will remain running, watching the remote drive for changes.
If you're testing bug fixes or features, some of these commands might be useful for you.
hyperdrive cleanup:remove-readonly-drives
Delete all read-only drives from disk. This will clear up storage, and makes it easier to test networking issues during development (as running this command will force you to re-sync test drives when the daemon is restarted).
This command must not be run while the daemon is running. Since it deletes data, it's intentionally verbose!
Using FUSE, the Hyperdrive daemon lets your mount Hyperdrives as normal filesystem directories on both OSX and Linux. To use FUSE, you need to run the
setup command before you start the daemon the first time:
The setup command installs native, prebuilt FUSE bindings. We currently only provide bindings for OSX and Linux. The setup step is the only part of installation that requires
sudo access:
❯ hyperdrive fuse-setup Configuring FUSE... [sudo] password for andrewosh: Successfully configured FUSE!
You should only need to perform this step once (it will persist across restarts). In order to make sure that the setup step completed successfully, run the
status command. It should contain the following two FUSE-related lines:
❯ hyperdrive status ... Fuse Available: true Fuse Configured: true
If FUSE is both available and configured, then you're ready to continue with mounting your top-level, private drive!
The daemon requires all users to have a private "root" drive, mounted at
~/Hyperdrive, into which additional subdrives can be mounted and shared with others.
Think of this root drive as the
home directory on your computer, where you might have Documents, Photos, or Videos directories. You'll likely never want to share your complete Documents folder with someone, but you can create a shareable mounted drive
Documents/coding-project-feb-2020 to share with collaborators on that project.
After starting the daemon with FUSE configured, you'll find a fresh root drive automatically mounted for you at
~/Hyperdrive. This root drive will persist across daemon restarts, so it should always be available (just like your usual Home directory!).
As with a home directory, you can might want to create directories like
~/Hyperdrive/Documents,
~/Hyperdrive/Videos, and
~/Hyperdrive/Projects. Be careful though -- any directory you create with
mkdir or through the OSX Finder will not be drive mounts, so they will not be shareable with others.
There are two ways to create a shareable drive inside your root drive:
hyperdrive create [path]- This will create a new shareable drive at
path(where
pathmust be a subdirectory of
~/Hyperdrive. This drive will look like a normal directory, but if you run
hyperdrive info [path]it will tell you that it's shareable.
hyperdrive mount [path] [key]- This will mount an existing drive at
path. It's useful if someone is sharing one of their drives with you, and you want to save it into your root drive.
Here are a few examples of what this flow might look like:
To mount a new drive, you can either provide a complete path to the desired mountpoint, or you can use a relative path if your current working directory is within
~/Hyperdrive. As an example, here's how you would create a shareable drive called
Videos, mounted inside your root drive:
❯ hyperdrive create ~/Hyperdrive/videos Mounted a drive with the following info: Path : /home/foo/Hyperdrive/videos Key: b432f90b2f817164c32fe5056a06f50c60dc8db946e81331f92e3192f6d4b847 Seeding: true
Note: Unless you use the
no-seed flag, all new drives will be automatically "seeded," meaning they'll be announced on the Hyperswarm DHT. In the above example, this could be done with
hyperdrive create ~/Hyperdrive/videos --no-seed. To announce it later, you can run
hyperdrive seed ~/Hyperdrive/videos.
Equivalently:
❯ cd ~/Hyperdrive ❯ hyperdrive create Videos
For most purposes, you can just treat this mounted drive like you would any other directory. The
hyperdrive CLI gives you a few mount-specific commands for sharing drive keys and getting statistics for mounted drives.
Mounted subdrives are seeded (announced on the DHT) by default, but if you've chosen to not seed (via the
--no-seed flag), you can make them available with the
seed command:
❯ hyperdrive seed ~/Hyperdrive/Videos Seeding the drive mounted at ~/Hyperdrive/Videos
Seeding will start announcing the drive's discovery key on the hyperswarm DHT, and this setting is persistent -- the drive will be reannounced when the daemon is restarted.
After seeding, another user can either:
~/Hyperdrive/Networkdirectory (can be a symlink target outside the FUSE mount!):
❯ hyperdrive info ~/Hyperdrive/Videos Drive Info: Key: b432f90b2f817164c32fe5056a06f50c60dc8db946e81331f92e3192f6d4b847 Is Mount: true Writable: true ❯ ls ~/Hyperdrive/Network/b432f90b2f817164c32fe5056a06f50c60dc8db946e81331f92e3192f6d4b847 vid.mkv
Or:
❯ hyperdrive mount ~/Hyperdrive/a_friends_videos b432f90b2f817164c32fe5056a06f50c60dc8db946e81331f92e3192f6d4b847 ... ❯ ls ~/Hyperdrive/home/a_friends_videos vid.mkv
If you ever want to remove a drive, you can use the
hyperdrive unmount [path] command.
Network"Magic Folder"
Within your root drive, you'll see a special directory called
~/Hyperdrive/Network. This is a virtual directory (it does not actually exist inside the drive), but it provides read-only access to useful information, such as storage/networking stats for any drive in the daemon. Here's what you can do with the
Network directory:
For any drive that's being announced on the DHT,
~/Hyperdrive/Network/<drive-key> will contain that drive's contents. This is super useful because these paths will be consistent across all daemon users! If you have an interesting file you want to share over IRC, you can just copy+paste
cat ~/Hyperdrive/Network/<drive-key>/my-interesting-file.txt into IRC and that command will work for everyone.
Inside
~/Hyperdrive/Network/Stats/<drive-key> you'll find two files:
storage.json and
networking.json containing an assortment of statistics relating to that drive, such as per-file storage usage, current peers, and uploaded/downloaded bytes of the drive's metadata and content feeds.
Note:
storage.json is dynamically computed every time the file is read -- if you have a drive containing millions of files, this can be an expensive operation, so be careful.
Since looking at
networking.json is a common operation, we provide a shorthand command
hyperdrive stats that prints this file for you. It uses your current working directory to determine the key of the mounted drive you're in.
The
~/Hyperdrive/Network/Active directory contains symlinks to the
networking.json stats files for every drive that your daemon is currently announcing.
lsing this directory gives you a quick overview of exactly what you're announcing.
Note: Always be sure to run
hyperdrive setup and check the FUSE status before doing any additional FUSE-related commands!
hyperdrive create <path>
Create a new drive mounted at
path.
Newly-created drives are seeded by default. This behavior can be disabled with the
no-seed flag, or toggled later through
hyperdrive seed <path> or
hyperdrive unseed <path>
Options include:
--no-seed // Do not announce the drive on the DHT.
hyperdrive mount <path> <key>
Mount an existing Hyperdrive into your root drive at path
path.
If you don't specify a
key, the
mount command will behave identically to
hyperdrive create.
pathmust be a subdirectory of
~/Hyperdrive/home.
keyis an optional drive key.
CLI options include:
--checkout (version) // Mount a static version of a drive. --no-seed // Do not announce the drive on the DHT.
hyperdrive info <path>
Display information about the drive mounted at
path. The information will include the drive's key, and whether
path is the top-level directory in a mountpoint (meaning it's directly shareable).
pathmust be a subdirectory of
~/Hyperdrive/. If
pathis not specified, the command will use the enclosing mount of your current working directory.
By default, this command will refuse to display the key of your root drive (to dissuade accidentally sharing it). To forcibly display your root drive key, run this command with
--root.
CLI options include:
--root // Forcibly display your root drive key.
hyperdrive seed <path>
Start announcing a drive on the DHT so that it can be shared with other peers.
pathmust be a subdirectory of
~/Hyperdrive/. If
pathis not specified, the command will use the enclosing mount of your current working directory.
By default, this command will refuse to publish your root drive (to dissuade accidentally sharing it). To forcibly publish your root drive, run this command with
--root.
CLI options include:
--lookup (true|false) // Look up the drive key on the DHT. Defaults to true --announce (true|false) // Announce the drive key on the DHT. Defaults to true --remember (true|false) // Persist these network settings in the database. --root // Forcibly display your root drive key.
hyperdrive unseed <path>
Stop advertising a previously-published subdrive on the network.
pathmust be a subdirectory of
~/Hyperdrive/. If
pathis not specified, the command will use the enclosing mount of your current working directory.
Note: This command will currently not delete the Hyperdrive from disk. Support for this will be added soon.
hyperdrive stats <path>
Display networking statistics for a drive. This is a shorthand for getting a drive's key with
hyperdrive info and
cating
~/Hyperdrive/Network/Stats/<drive-key>/networking.json.
pathmust be a subdirectory of
~/Hyperdrive/and must have been previously mounted with the mount subcommand described above. If
pathis not specified, the command will use the enclosing mount of your current working directory.
hyperdrive force-unmount
If the daemon fails or is not stopped cleanly, then the ~/Hyperdrive mountpoint might be left in an unusable state. Running this command before restarting the daemon will forcibly disconnect the mountpoint.
This command should never be necessary! If your FUSE mountpoint isn't cleaned up on shutdown, and you're unable to restart your daemon (due to "Mountpoint in use" errors), please file an issue.
How smart must a Java programmer be?
Here's an example. Below is one of the exercises in Richard Baldwin's introductory online Java course. Part 1 is easy, but I found Part 2 very difficult. I usually have to read sentences like that in Part 2 several times before I understand them. I know what all of the elements (parameters, instantiation, etc.) of this problem are, but I still find this difficult. Please be honest. How hard should this be for someone who will do well at programming?
***********************
Q - Write a Java program that meets the following specifications.
/*File SampProg18.java from lesson 42
Without viewing the solution that follows, write a Java
application that illustrates:
1. Instantiating an object by calling the default
constructor.
2. Instantiating an object by calling a parameterized
constructor as a parameter to a function call.
The program should display the following output:
Starting Program
Object contains 100
Terminating, Baldwin
************************
You can find this problem at
Is there a reliable programming aptitude test one can take online?
As a start to an unscientific survey, I'd be interested to know how difficult you find Part 2 of the question above. If you're interested, try to solve it, then go to the URL in my previous post to check your answer. Please let me know how you do.
Sheriff
Such as GUI development, talking to database thru Java etc. I don't think many employers want to deliver a "DOS App". If you feel comfortable with delivering in a Client Server environment, work with databases, and things like that, you shouldn't have a problem. Java is just the middle layer between you and an objective that's been sent in front of you...
Ryan
Ryan Headley
I'm not looking for encouragement, I'm trying to tell, if it's possible to do so, whether I'm pursuing a realistic objective.
I appreciate these responses. I hope someone will address the question of whether Part 2 of the problem in my original post is easy or difficult. You sound like you have some experience, Ryan. Can you solve that problem easily?
If I had just started Java and programming, and been handed that question, I would have found it very difficult.
- Janna
Sheriff
"JavaRanch, where the deer and the Certified play" - David O'Meara
First, to answer your question, I am a greenhorn in my second month of my first Java class, and I found the 2nd part of the problem to be as easy as the first. I'm not "smart", I just understood the jargon.
Second, most people are of average "smartness", and you appear to be at least that or more. Not everyone has logical aptitude, and the fact that you enjoyed the Cattle Drive course and found it easy testifies to your aptitude. Anyone who has that and enough determination and perseverance can master Java with average "smartness."
I'm studying Java because I now work in an environment where things have been. I want to work in one where things are going. Sounds like you do too -- good luck!
class Baby {
    Baby(int i) {
        System.out.println("I am baby number " + i);
    }
}

public class mybaby {
    public static void main(String[] s) {
        new Baby(1);
    }
}
"Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning."
The main doubt I have is about getting hired. I've read and heard stories (including the JavaRanch forums) about people learning java on their own and passing the Sun Cert. Java Programmer exam and STILL not being able to get a job without experience. I'd like to hear if anyone knows about the likelihood of getting hired without professional programming experience...and how you get around the chicken-and-the-egg issue here.
Ranch Hand
Originally posted by Jamie Cole:
I'd like to earn a living by programming in Java, but I can't tell whether I've got what it takes. I enjoyed and had little trouble with the course here at JavaRanch. I've been studying from other sources on my own, and found some of it easy, some perplexing. If I have the aptitude to succeed as a professional programmer, should I be finding introductory courses quite easy?
Here's an example. Below is one of the exercises in Richard Baldwin's introductory online Java course. Part 1 is easy, but I found Part 2 very difficult. [snip] How hard should this be for someone who will do well at programming?
Jamie, I think it is tough to learn any language when the examples are "terse". I wouldn't let part 2 throw you -- it is a somewhat unusual use of Java syntax.
Java "grows on you" with time -- it is a HUGE language, and you needn't be a master of it all in order to consider yourself competent.
As another example, you might want to take a look at the sample application that I developed as companion code to my book (it is available as a free download from the website) to see whether that code makes sense to you.
Regards,
Jacquie
------------------
author of:
Beginning Java Objects
I posted another problem that perplexed me in the Intermediate forum, because it seemed a little much for this one. I was hoping someone with a good mix of verbal and Java skills would take a look. I hope you can spare a couple of minutes for it. Here's the post:
I understand your predicament which basically seems to arise out of the friction between your urge to learn Java and the seemingly insurmountable task ahead. I was in a similar situation a few weeks back but my resolve to get up there has got me past the crossroad. I still have a long long way to go and with every passing day the confidence seems to grow and as Jacquie has rightly said 'Java grows on you'. So, I would advise you to adopt a positive approach and just keep going, do not get bogged down by the enormity of the task but derive inspiration from each step you climb up. GOOD LUCK.
regards,
Rajendra.
- Instantiate an object
- Call a parameterized constructor
- As a parameter to a function (I prefer to use the Java term "method")
I worked these three backwards. First I created a method that took in a string parameter -- that took care of part 3. Then I created a Tester class that had one parameterized constructor, requiring a string parameter. Lastly I created the main method, taking in a string command-line argument, and in one line, mashing all the requirements together with a call to my test method, passing as a parameter a 'new Tester(args[0])', which accomplished parts 2 and 1 simultaneously.
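That backwards approach can be sketched like this (my reconstruction — the names Tester, show, Mashup and the fallback literal "100" are hypothetical, not from the original post):

```java
class Tester {
    private final String contents;

    Tester(String contents) {          // part 2: a parameterized constructor
        this.contents = contents;
    }

    String contents() {
        return contents;
    }
}

public class Mashup {
    // part 3: a method (not "function") that takes a parameter
    static void show(Tester t) {
        System.out.println("Object contains " + t.contents());
    }

    public static void main(String[] args) {
        System.out.println("Starting Program");
        // parts 1 and 2 mashed together: instantiate via the parameterized
        // constructor, directly as a parameter to a method call
        show(new Tester(args.length > 0 ? args[0] : "100"));
        System.out.println("Terminating, Baldwin");
    }
}
```

Working the requirements from the inside out like this is often easier than trying to parse the whole specification sentence in one go.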
Now, more to the point, I found Baldwin's answer, like his question, to be a bit confusing. I think both could have been stated a bit more simply. As students, we shouldn't assume that because we don't clearly understand the question, the problem lies with us. I think it is easy to obfuscate the obvious to the point that anyone would have difficulty. In the real programming world, you could ask questions until you understood the requirement. And you're plenty "smart enough" to come up with the answer.
This is the answer I came up with
and in another source file
It's quite similar to the solution and it only took about 15 minutes to write the code. BUT it took me twice as long to decode the question. And I've been working as a C programmer for a couple of years and learning Java for about 6 months - I'd consider myself fairly OK at understanding technical specifications and I found these instructions confusing. I do know it took about 6 months to understand the terminology when I started and it was hard to find books that didn't assume that you knew some of the basic terminology. If you're doing other assignments OK, I wouldn't be too concerned - the aptitude tests are quite good, although the ones I've seen have been a bit procedurally oriented - I haven't seen the brainbench one.
Hope this helps,
Kathy
Here's my solution. It took about 10 minutes with the usual family interruptions.
It produces the correct output so now I'll have a look at the site you referred to ..... Hmm. Well, my solution looks fine. It meets the requirements and is actually simpler than his, while covering the essential points. He might complain about me making getABaldwin static to simplify things, but there was no stipulation in the question covering that, so I took a shortcut.
A couple of points. One, there is no single correct answer to this type of question, though there are obviously many incorrect answers. Even people with a "perfect" knowledge of Java can give incorrect answers as you have noticed.
Second, this question sounds more difficult than it actually is. The trick is to break it down into smaller parts, as has been noticed. Breaking complex questions down into smaller problems is part of the skill of a programmer (or any other problem-solver).
Third, I had to read the question twice before it fell into place. Once I understood the question some examples sprang to mind pretty quickly. eg. myFrame.setLayout( new GridLayout(2,0)) is the same type of thing.
I searched in google for "computer programming aptitude test" and found many sites, most commercial, but some free and others with sample questions. eg. I haven't done the test. But the real test is whether you can write programs.
Obviously there is much more to being a programmer than just cutting code ( good design, following standards, following requirements carefully, testing, etc ) but solving problems algorithmically is at the heart of it.
Hope this helps.
I have to agree with Cindy when she said, "they are testing your reading skill more than your programming skill." Almost all examples of using the java.io.* classes are littered with the part 2 of your question.
For example,
File myFile = new File("output.txt");
FileOutputStream fos = new FileOutputStream(myFile);
Part 2 just combines the two statements into one:
FileOutputStream fos =
new FileOutputStream(new File("output.txt"));
If you didn't get this the first time around, don't worry about it. I wouldn't have understood the question the first time around either. I've just been reading a lot of JAVA books and gotten used to the jargon. If you're serious about becoming a JAVA programmer, I would recommend:
1. Read a lot of code and try them out. It's really important to type out the code and try things out if they don't make sense to you. Given a choice between downloading the source file and typing the example from the book, I would type the code out because I learn from the mistakes I make this way. Downloading the working source file doesn't teach you anything other than the fact that you can use a mouse.
2. Reading bad code is just as helpful as reading good code. I learn what not to do reading bad code.
3. Ask questions! If it doesn't make sense, ask someone. If they can't answer your question or if the answer doesn't make sense, then say so. Don't accept an answer until it makes sense to you.
Good luck!
-Peter
And to anyone wondering how it is to program Java, I would like to say that having programmed a variety of languages on and off for about fifteen years, I find Java about as easy or difficult as any language, depending on what you want to use it for...
I currently work with web development, where I have written a few applets, a few Windows-components (in VB and C++), but most of my 'programming' these days is scripting.
I found the question mentioned quite easy, but having my experience it would be strange otherwise. Not only java experience helps me here, the question and solution would look more or less the same if it was about c++, as would many java problems due to the similarities in java and c++ when it comes to syntax and OO approach.
I generally tend to find programming quite difficult, that's why I have not yet grown tired of it..
Regards, Marius
Ranch Hand
I'm not 14 and I do not understand the Java language perfectly (I hope I never do). People tell me I'm intelligent but, for me, I don't think that it has anything to do with book learnin' or smarts. As Edison said "genius is 99% perspiration and one percent inspiration". Which is really true. I used to play classical guitar professionally and people would say stuff like "Wow, I could never do that. You're so gifted." or whatever. Well, the only gift I had was the six to eight hours a day I practiced for 10+ years.

Same thing with programming. Sure I got good marks but that's just a by-product of staying up until 4am because you just have to make it 'perfect'. However, that one percent inspiration is very important as well, and many 'smart' folks don't have a clue where to find that spark. I get more ideas for 'clever' solutions cooking omelettes than I do reading a white paper. The 99% will come if you love what you're doing - that makes it easy.

Programming isn't about knowing a language - it's about solving puzzles, exploring unique trains of thought and hammering away until you get it right. The programmers that I've known during my degree and here at work have very diverse backgrounds but the common traits seem to be a love of solving problems, a desire to do well at whatever they are confronted with and a confidence that speaks "If anyone can do this, I can do this". I hope this helps you figure out your aptitude for programming. Personally, I'd say that if you've ever felt that glow after seeing your first "Hello world!" appear on the screen, then you're fully qualified to explore programming.
Sean
[This message has been edited by Jamie Cole (edited January 03, 2001).]
I consider myself to be in somewhat the same boat as you. I have a fairly extensive schooling background in C++, but no real experience. I've only been looking at Java for about a month and it seems, to me anyway, quite a bit easier than C++.
I was able to answer the question in about 10 minutes. I'm not sure if my solution is formatted correctly, but it does work. The key to me was in how the question was read. I took it to mean that Mr. Baldwin was looking for a demonstration of an overloaded constructor, and I went from there.
class BaldwinTest
{
    BaldwinTest()
    {
        System.out.println("Starting Program");
        runParamBaldwin(new BaldwinTest(100));
        System.out.println("Terminating, Baldwin");
    }

    BaldwinTest(int i)
    {
        System.out.println("Object contains " + i);
    }

    static void runParamBaldwin(BaldwinTest b)
    {
    }

    public static void main(String[] args)
    {
        BaldwinTest b = new BaldwinTest();
    }
}
My advice would be to echo quite a few of the other responses here. The key lies in purchasing a GOOD book (read the reviews both here and at amazon before making a purchase), working through all of the examples, and don't hesitate to ask questions. Take a break when you're stymied and sooner or later you'll break through the "wall".
Good luck,
Pat B.
Originally posted by Jamie Cole:
Thanks, Greg. Yes, it does help. Your solution is simpler, and as far as I'm concerned, simpler is better. Thanks also for pointing out the MyFrame parallel. I've used that type of construction before, but it hadn't occurred to me. To put your response in context, how much programming experience do you have?
Jamie
About 13 years programming, mostly on Oracle/Unix - sql, pl/sql, ksh, awk, some C, various others. I've been writing Java in my spare time for the past few years on and off.
You've done more than enough for me already, but if you're interested, please take a look at my Towers of Hanoi topic in the Intermediate forum. My question there seems to be either too difficult or too time-consuming to answer -- or maybe too obvious. I hope not the latter.
[This message has been edited by Jamie Cole (edited January 03, 2001).]
having patience.
<pre>
public class Baldwin {
    public static void main(String args[]) {
        // 1. Instantiating an object by calling the default constructor
        String s = new String();
        String t;
        System.out.println("Starting Program");
        // 2. Instantiating an object by calling a parameterized constructor
        //    String(java.lang.String)
        //    as a parameter to a function call:
        //    System.out.println(java.lang.String)
        System.out.println(t = new String("Object contains 100"));
        System.out.println("Terminating, Baldwin");
    }
}
</pre>
you'll be a professional Java programmer when you really want it, but like everything in life, it takes some time
Hellooo!
I have an issue with xmonad and menus: if I send an application to a workspace other than the first one, the menus in the app (and also in the gnome-panel) tend to appear misplaced! If I move the mouse up and down on a menu, the height of the menu varies randomly... or disappears completely (which I guess is because the height got equal to 0)
What could be the cause of this problem?
Thanks!
No suggestions?... No one is having this issue?
How do you launch XMonad + GNOME ? (maybe that makes a difference)...
Apparently the "gnomeConfig" thing is causing the issue; if I make my xmonad.hs look like this, it works fine:
import XMonad
import XMonad.Config.Gnome
import XMonad.Hooks.EwmhDesktops
main = xmonad defaultConfig -- gnomeConfig
    { terminal           = "gnome-terminal"
    , modMask            = mod4Mask
    , handleEventHook    = fullscreenEventHook
    , normalBorderColor  = "#cccccc"
    , focusedBorderColor = "#255a91"
    , borderWidth        = 2
    }
But the gnome panel doesn't work of course... how can i solve that? what am i doing wrong? ...
Thanks
This is a modified xmonad.hs I got from the internet... it works, but the panel only gets shown after I open a program, and the gnome workspace switcher isn't aware of xmonad's workspaces... (What else am I missing from gnomeConfig, and how can I fix the issues?)
Thanks.
import XMonad
import XMonad.Hooks.DynamicLog
import XMonad.Hooks.ManageDocks
import XMonad.Hooks.EwmhDesktops
import XMonad.Config.Gnome
main = do
    xmonad $ defaultConfig
        { manageHook         = manageDocks <+> manageHook defaultConfig
        , layoutHook         = avoidStruts $ layoutHook defaultConfig
        , terminal           = "gnome-terminal"
        , modMask            = mod4Mask
        , handleEventHook    = fullscreenEventHook
        , normalBorderColor  = "#cccccc"
        , focusedBorderColor = "#255a91"
        , borderWidth        = 2
        }
Somehow the problem is due to the session being started with GDM...
If I start the session manually (gnome-session --session xmonad) everything works perfectly!
I also found out that the menu problems are also present when using awesome (If the session is started using GDM)
Using LightDM the problems are gone.
What could the issue with GDM be?
Thanks!
Network Code
I have spoken about this at length at conferences around the world.
Putting network code in the view controller is beyond a code smell, it is just wrong.
Where does it go?
All of the network code belongs in NSOperation subclasses. Ideally one NSOperation subclass for each network request. That NSOperation is responsible for creating the network request, receiving the data, parsing it into JSON and storing it in the persistence layer (ideally Core Data).
This creates small, discrete, units of work that are easy to maintain. If a network operation fails, we can isolate the code and find the problem. Further, we can put each call into a unit test very easily and run it in isolation until it is perfect.
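As a sketch of what one of these operations might look like (hypothetical throughout — RefreshOperation and the persist(_:) call are placeholder names, not API from this article):

```swift
// Hypothetical sketch: one NSOperation subclass per network request.
class RefreshOperation: NSOperation {
    var dataController: DataController?
    var url: NSURL?

    override func main() {
        guard let url = url, dataController = dataController else { return }

        // 1. Create the request and receive the data. Synchronous is fine
        //    here; the operation already runs on a background queue.
        guard let data = NSData(contentsOfURL: url) else { return }

        // 2. Parse the data into JSON.
        guard let json = try? NSJSONSerialization.JSONObjectWithData(data, options: [])
            else { return }

        // 3. Store it in the persistence layer (ideally Core Data).
        dataController.persist(json)   // persist(_:) is a placeholder method
    }
}
```

Because each request is its own operation, a failed request can be debugged, unit tested, and cancelled in isolation.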
But where do we fire these operations from? I strongly recommend creating a top level data controller that is responsible for maintaining a reference to the persistence engine (again, ideally Core Data). This top level data controller is normally instantiated in the application's delegate and is passed down to the view controllers via dependency injection.
Using a singleton for this is bad; another code smell. Why is a subject for another discussion.
What does this look like?
The data controller starts out very simply:
import UIKit
import CoreData

class DataController: NSObject {
    var managedObjectContext: NSManagedObjectContext
    let networkQueue = NSOperationQueue()

    init(completionClosure: () -> ()) {
        //initialize persistence. NOT LAZY
    }

    func refreshRequest() {
        let op = OperationSubclass()
        op.dataController = self
        op.url = ... //Pass in what is needed
        self.networkQueue.addOperation(op)
    }
}
With this class being passed into the view controller the view controller can react to refresh requests by simply calling:
self.dataController.refreshRequest()
One line of code that can be wired directly from the button’s action directly to the data controller.
Consolidating the networking code into a central location also adds additional benefits beyond just code isolation. Real time reaction to bandwidth changes, resumability, cancellability, and many other features become trivial to implement. When the network code is in the view controllers the same features are nearly impossible to implement.
How does the view know when the data is ready?
There are a few answers depending on what kind of view we are dealing with.
The UITableViewController
This is the easiest type of view controller to work with. When we are using Core Data as our persistence engine the view controller practically writes itself.
By putting a NSFetchedResultsController between the data controller and the table view controller, we have extremely brief method implementations.
But what about the table view cells?
Each table view cell should be designed in the storyboard and then populated via a UITableViewCell subclass. No fuss no muss.
With this simple design the population from the view controller can be as easy as:
override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCellWithIdentifier("cellIdentifier", forIndexPath: indexPath) as! CustomTableViewCell
    let object = self.fetchedResultsController.objectAtIndexPath(indexPath) as! NSManagedObject
    cell.populateCellFromObject(object)
    return cell
}
Let the view subclass handle populating the labels, images, whatever else is needed to draw the cell.
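A matching cell subclass could be as small as this (hypothetical sketch; the outlets and key names are stand-ins for whatever is wired up in the storyboard):

```swift
class CustomTableViewCell: UITableViewCell {
    @IBOutlet weak var titleLabel: UILabel!     // wired up in the storyboard
    @IBOutlet weak var detailLabel: UILabel!

    func populateCellFromObject(object: NSManagedObject) {
        // The cell, not the view controller, knows how to draw itself.
        titleLabel.text = object.valueForKey("title") as? String
        detailLabel.text = object.valueForKey("detail") as? String
    }
}
```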
By keeping the network code out of the view controller and letting the views draw themselves, the view controller becomes very simple. All it really ends up doing is managing the life cycle of the view, exactly what the view controller is meant to do.
Why add more complexity?
Plain View Controllers
Plain view controllers can be a little harder, but generally do not need to be.
The previous view controller is usually responsible for injecting the needed data objects into the view controller. If you aren’t using dependency injection here you are probably doing it wrong.
With the data being injected the view controller just needs to pass that data to the view subclass.
Let the view populate itself from the data!
Let the view controller manage life cycle events.
Build the view in the storyboard.
Use either KVO or core data change notifications to refresh the view. The view controller does not need to handle the data refresh if the data objects do not change.
Use Dependency Injection. The UIKit framework was designed for it. When you avoid it you are fighting the frameworks and just making everything harder on yourself.
Duplicating data
A few of the interesting design patterns that I have seen recently involve duplicating the data in memory. Sometimes there are two copies of the data, one in the cache (aka persistence layer) and another for the views.
This defeats some of the most amazing pieces of Cocoa!
When our UI has a single copy of a data object we can observe changes on that data object.
We can use KVO (Key Value Observing) or we can use notifications from Core Data to detect changes to the data to react and refresh.
Why is this important?
We do not need to write code to watch the cache!
Our network layer simply updates the cache and the view will update itself. We completely avoid tight coupling between the network layer and the view layer.
Our code base gets smaller.
Our code base gets more maintainable.
Our code base gets faster.
When we duplicate the data and/or try to introduce other design patterns into UIKit based applications we are adding unnecessary complexity.
Why subclass the view?
The UIView class is meant to be subclassed. The documentation on this class is well defined and subclasses of UIView integrate extremely well into the UIKit framework.
When the UIView is subclassed and the data is injected into the view, the view can be reused. This is the core of the concept of code reuse. If a piece of data is going to be displayed in multiple places then it makes sense that there should be a view to be reused.
What about data validation?
When we subclass the view and the view is aware of the data, guess where the data validation goes?
In the view subclass!
Why? So that the view can respond to the validation failures!
This is a big difference between OS X and iOS. In OS X, Cocoa Bindings allow the view elements to query the data directly and confirm validation. This is a very cool feature that makes validation borderline magic.
But Cocoa Bindings do not exist on iOS. Therefore we must validate the data as close to the editing view as possible. This allows us a very short path back to the user to notify the user that the data is invalid.
What better place than in the view itself?
If we do this at the persistence layer, it is too late. At best it will cause a tight coupling between the persistence layer and the view layer. A very bad thing. At worst the user will be in a bad state. Unable to save their data and forced to go find the edit view again that is probably off screen, deallocated, gone. A terrible user experience.
When we design our user interface to have data validation feedback it then becomes trivial to validate the data upon entry and give the user immediate feedback.
What about business logic?
What is business logic?
Business logic is code that is not part of putting data on the screen and is not part of receiving data from the network (generally).
If the business logic is about posting something to a server, then it belongs in the network layer!
If the business logic is about controlling a device, then it belongs in a manager for that device.
Define what your business logic is and then determine where it goes. Very rarely does it belong in the view controller.
Doesn’t this make the views heavy?
No.
The code to populate the view must live somewhere. If we put it in the view controller then the view controller gets too big.
If we create an object to sit between the view and the view controller then we are creating unnecessary additional objects.
The view is already holding onto references to the elements contained in the view that are provided from the storyboard, it makes perfect sense to put the population code with the references to what is being populated.
This does not make our code any heavier than any other design. The code is arguably a class lighter per view.
What is left over
When we remove the view population and the data code from the view controller; the view controller is trimmed down dramatically. Now the view controller is back down to doing its job, view lifecycle events.
Even the view lifecycle events have been dramatically reduced over the past few years with the introduction of storyboards.
Wrap up
We must never forget the K.I.S.S. principle. It absolutely applies to iOS and OS X development. When we introduce multiple layers of indirection, multiple copies of the data, we are adding completely unnecessary complexity to the application.
That complexity will cost us. It will cost us CPU, Battery Life, Memory, and maintainability.
Why do it the hard way?
About the Author
Marcus S. Zarra is best known for his expertise with Core Data, persistence and networking. He has been developing Cocoa applications since 2004 and has been developing software for most of his life.
There are very few developers who have worked in more environments, on more projects or with more teams than Marcus has.
Marcus is currently available for short to medium term development contracts, code reviews and workshops.
If your team is struggling with code structure, networking, or persistence please contact him. He would love to help your team produce the best application possible.
Marcus can be reached via email at marcus@cimgf.com.
getpwnam()
Get information about the user with a given name
Synopsis:
#include <sys/types.h>
#include <pwd.h>

struct passwd* getpwnam( const char* name );
Since:
BlackBerry 10.0.0
Arguments:
- name
- The name of the user whose entry you want to find.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The getpwnam() function gets information about the user with the given name. It uses a static buffer that's overwritten by each call.
The getpwent(), getpwnam(), and getpwuid() functions share the same static buffer.
The getpwnam_r() function is a reentrant version of getpwnam().
Returns:
A pointer to an object of type struct passwd containing an entry from the group database with a matching name. A NULL pointer is returned on error or failure to find a entry with a matching name.
Examples:
/*
 * Print information from the password entry
 * about the user name given as argv[1].
 */
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/types.h>
#include <pwd.h>

int main( int argc, char* *argv )
{
    struct passwd* pw;
    if( ( pw = getpwnam( argv[1] ) ) == NULL ) {
        fprintf( stderr, "getpwnam: unknown %s\n", argv
GameFromScratch.com
This is part 3, the following are links for part one and part two in this series.
Alright, we've installed the tools, got an editor up and going and now how to run the generated code, both on our computers and on device ( well… Android anyways… I don't currently have an iOS developer license from Apple ), so now the obvious next step is to take a look at code.
Let's get one thing clear right away… I know nothing about ActionScript, never used it, and I didn't bother taking the time to learn how. As unfair as that sounds, frankly when it comes to scripting languages, I rarely bother learning them in advance… I jump in with both feet and if they are good scripting languages, you can generally puzzle them out with minimal effort. This is frankly the entire point of using a scripting language. So today is no different. This may mean I do some stupid stuff, or get impressed by stuff that makes you go… well duh. Just making that clear before we continue… now, lets continue...
Apparently LoomScript is ActionScript with a mashup of C# and a smattering of CSS. ActionScript is itself derived or based on JavaScript. I know and like JavaScript and know and like C#, so we should get along fabulously.
Let's look at the specific changes from ActionScript.
First are delegates, a wonderful feature of C#. What exactly is a delegate? In simple terms it's a function object, or in C++ terms, a function pointer. It's basically a variable that is also a function. This allows you to easily create dynamic event handlers or even call multiple functions at once.
Next was type inference, think the var keyword in C# or auto keyword in C++ 11.
They added support for the struct data type. This is a pre-initialized and copy by value (as opposed to reference) class. I am assuming this is to work around an annoyance in ActionScript programming that I've never encountered.
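The value-versus-reference distinction is easy to demonstrate in JavaScript (a sketch of the concept only — a shallow spread stands in for struct assignment here):

```javascript
// Class instances copy by reference: both names point at the same object.
const a = { x: 1 };
const b = a;
b.x = 99; // the change is visible through `a` too

// A struct copies by value: assignment produces an independent copy.
const c = { x: 1 };
const d = { ...c }; // shallow copy standing in for struct assignment
d.x = 99; // `c` is untouched
```

With reference semantics a mutation through one name leaks into the other; with value semantics each copy lives its own life.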
They also added C# style Reflection libraries. I assume this is confined to the System.Reflection namespaces. If you are unfamiliar with Reflection in C# land, it's a darned handy feature. In a nutshell, it lets you know a heck of a lot about objects at runtime, allowing you to query information about what "object" you are currently working with and what it can do. It also enables you load assemblies and execute code at runtime. Some incredibly powerful coding techniques are enabled using reflection. If you come from a C++ background, it's kinda like RTTI, just far better with less of an overall performance hit.
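JavaScript's runtime introspection gives a rough feel for what reflection offers, though C#'s System.Reflection is far richer (assemblies, attributes, and so on). The `Player` class below is purely illustrative:

```javascript
class Player {
  constructor(name) { this.name = name; }
  jump() { return this.name + " jumps"; }
}

const p = new Player("Bob");

// Query type information at runtime...
const typeName = p.constructor.name; // "Player"
const methods = Object.getOwnPropertyNames(Player.prototype)
  .filter((m) => m !== "constructor"); // ["jump"]

// ...and invoke a method discovered at runtime by its string name.
const result = p[methods[0]]();
```

This "ask an object what it is and what it can do, then act on the answer" pattern is the core of what reflection enables.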
Finally they added operator overloading. Some people absolutely love this feature… I am not one of those people. I understand the appeal, I just think it's abused more often than used well. This is an old argument and I generally am in the minority on this one.
Now let's take a look at creating the iconic Hello World example.
First is the loom.config file, it was created for us:
{
"sdk_version": "1.0.782",
"executable": "Main.loom",
"display": {
"width": 480,
"height": 320,
"title": "Hello Loom",
"stats": true,
"orientation": "landscape"
},
"app_id": "com.gamefromscratch.HelloLoom",
"app_name": "HelloWorld"
}
This file basically holds the run characteristics of your application: this is where you set the application dimensions, the title, the application name, etc. Initially you don't really even have to touch this file, but it's good to know where it is and to understand where the app details are set.
Pretty much every application has a main function of some sort - the entry point of your application - and Loom is no exception. Here is ours, in main.ls:
package
{
    import cocos2d.Cocos2DApplication;

    static class Main extends Cocos2DApplication
    {
        protected static var game:HelloWorld = new HelloWorld();

        public static function main()
        {
            initialize();
            onStart += game.run;
        }
    }
}
Here we are creating a class Main derived from Cocos2DApplication, with one member: our (soon to be created) Cocos2DGame-derived class, HelloWorld.
We have one function, main(), which is our app entry point, and is called when the application is started. Here you can see the first use of a delegate in LoomScript, where you assign the function game.run to the delegate onStart, which is a property of Cocos2DApplication. In a nutshell, this is the function that is going to be called when our app is run. We will look at HelloWorld's run() function now.
Speaking of HelloWorld, lets take a look at HelloWorld.ls
import cocos2d.Cocos2DGame;
import cocos2d.Cocos2D;
import UI.Label;

public class HelloWorld extends Cocos2DGame
{
    override public function run():void
    {
        super.run();

        var label = new Label("assets/Curse-hd.fnt");
        label.text = "Hello World";
        label.x = Cocos2D.getDisplayWidth()/2;
        label.y = Cocos2D.getDisplayHeight()/2;

        System.Console.print("Hello World! printed to console");

        //Gratuitous delegate example!
        layer.onTouchEnded += function(){
            label.text = "Touched";
        };

        layer.addChild(label);
    }
}
We start off with a series of imports… these tell Loom what libraries/namespaces we need to access. We added cocos2d.Cocos2D to get access to Cocos2D.getDisplayWidth() and Cocos2D.getDisplayHeight(); without this import, those calls would fail. We similarly import UI.Label to get access to the label control.
Remember about 20 seconds ago ( if not, btw… you may wish to get that looked into… ) when we assigned game.run to Cocos2DApplication's onStart delegate? Well, this is where we define the run method.
The very first thing it does is call the parent's run() method to perform the default behaviour. Next we create a Label widget using the font file Curse-hd.fnt (which was automatically added to our project when it was created). We set the text to "Hello World" and (mostly) centre the label on the screen by setting its x and y properties. You may notice something odd here, depending on your background… the coordinate system. When working with Cocos2D, there are a couple of things to keep in mind. First, things are positioned relative to the bottom left corner of the screen/window/layer by default, not the top left. Second, nodes within the world are by default positioned relative to their centre. It takes a bit of getting used to, and can be overridden if needed.
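If you are coming from a toolkit that measures from the top-left, the conversion is a one-liner. A sketch (the helper name and display size below are made up for illustration):

```javascript
// Convert a point measured from the top-left (the convention most UI
// toolkits use) into a bottom-left-origin coordinate system like Cocos2D's.
function toCocosCoords(x, yFromTop, displayHeight) {
  return { x: x, y: displayHeight - yFromTop };
}

// A point 40px from the top of a 320px-tall display.
const point = toCocosCoords(240, 40, 320);
```

So something 40 pixels from the top of a 320-pixel-tall screen sits at y = 280 in Cocos2D terms.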
Next we print "Hello World! printed to console" to demonstrate how to print to the console. Then we follow with another bit of demonstrative code: wiring a delegate to the layer.onTouchEnded property. This function will be called when a touch on the screen ends; as you can see, it is an anonymous function, unlike the named run function we used earlier. When the touch ends, we simply change the label's text to "Touched". Finally, we add the label to our layer, which is inherited from Cocos2DGame.
Run the code and you will see "Hello World" displayed on screen. If you check your terminal window, you will see that "Hello World! printed to console" is also displayed there.
Now let's take a look at one of the cool features of Loom. Simply edit an .ls file in your text editor of choice while your project is running; flip back to the terminal window and you will see Loom pick up the change.
Loom automatically updates the code live as you make changes. This is a very cool feature. Ironically, in this particular case it's a useless one, as all of our code runs only when the application is first started. However, in more complicated apps this will be a massive time saver.
On top of that, this is also how you can easily detect errors… let's go about creating one right now. Instead of label.text, we are going to make an error: label.txt. Save your code and see what happens in the terminal window:
As you can see the error and line number are reported live in the Terminal without you having to stop and run your application again.
Pretty cool overall. In the next part, we will look at more real-world code examples.
You can read the next part dealing with graphics right here.
Android, iOS | http://www.gamefromscratch.com/post/2013/03/12/A-closer-look-at-the-Loom-game-engine-Part-Three-Hello-World%E2%80%A6-and-a-bit-more.aspx | CC-MAIN-2017-13 | refinedweb | 1,352 | 66.84 |
Value
A single value that updates a function when a new number is passed to set.
value(current <Number>, onUpdate <Function>)
value is a good way of managing multiple actions all trying to act on the same property. By passing an instance of value to another action's output method (ie tween or physics), that action will register itself as the sole permitted updater of that value.
This means if a second action tries to update the value, the first action will be stopped first and there'll be no conflicts.
Methods
set <Number>: Updates current and schedules an update. Returns the value passed to it, useful for functional composition.
Playground
import { value, css, tween } from 'popmotion';

const ball = document.querySelector('.ball');
const ballRenderer = css(ball);

const ballX = value(
  0,
  (x) => ballRenderer.set('x', x)
);

ballX.set(150);

tween({ from: ballX.get(), to: 300 })
  .output(ballX)
  .start();
It's coming:
I've been looking at JavaFX at the JavaFX website, at the demos, code examples, etc. And I just finished downloading it with the sdk and Netbeans integration.
So far, JavaFX is looking pretty killer. I think it's a game changer, and in some ways has leap-frogged ahead of Flex/Flash and Silverlight. At the very least, JavaFX is a huge shot in the arm for client side Java.
It's going to be a boon for Sun's Java business as well, and Sun's Java business is already pretty healthy, profitable, and growing.
I've been following JavaFX for quite a while now. It's been a long time coming, but I'm very happy it's here, for many reasons:
-I'm a Java desktop developer (as unlikely as that may sound), so I'm excited about all the power I'm getting. Sure, stuff like this was possible with plain Swing, but way harder.
-Java SE media support is finally getting some love (again). It's waaay too late, but at least it's arriving.
-Flash and Silverlight are finally getting some competition. This means I can design/develop Rich Internet Applications for free (no Adobe/MS tax), and on my platform of choice (Mac). Plus I get to use millions of open-source Java libraries. W00t!
-JavaFX should mean that apps on mobile devices have better cross-device compatibility, since they will all share a single runtime environment. (and there was much rejoicing.)
-Theoretically (so rumors have it) JavaFX has a good chance of coming to Blu-ray. This would make Blu-ray developers' lives easier and potentially result in more rich content on Blu-ray discs.
All this said, it will be a while till everything is mature enough for real market-beating capacity. Mac support is lagging as always (no JRE update 10 for us means no drag-to-install feature), and Linux/Solaris support is still missing. Performance could also be a lot better, and I expect it will get a lot better with time.
But the important thing is that it has arrived!
Edited 2008-12-04 23:46 UTC
Starting with the current version of the Java Runtime Environment (Update 10), the parts of the JRE that are needed for a given applet to run are downloaded on-demand. According to Sun, this results in an initial JRE download size of 4-5MB for the average applet. (in comparison to 14+MB to download the full JRE, which was previously the case)... This compares pretty well to Flash's 5.4MB download....
Also, browser integration in Update 10 is much better. If I understand the release notes correctly, the Java plug-in can now be installed directly in-browser without a visit to Sun's site, just like it is for Flash....
As for comparison to Flex... Java has some pluses and minuses. On the definite plus side is power -- there are way more libraries out there written for Java than for Flex, and the Java Standard Edition API is incredibly rich. Java also has the nice ability to display widgets in the native operating system's look and feel if you like, which Flex can't do. On the downside, Flex has *way* better (more mature) UI-building tools. And of course Flash has *way* better design and animation tools. It will take quite a while for Sun to catch up on those (actually they may never catch up to Flash on the ease of use for designers).
However, when the design tools for JavaFX are released, they will be free. (as opposed to FlexBuilder)
Edited 2008-12-05 00:30 UTC
How does this compare?
One area where they don't compare is that Flash is closed, in that Adobe doesn't release the source forcing vendors to distribute binaries. An example is the quality of the binary Flash plugin for Windows versus Linux. The Linux version is usually behind and not up to speed with the Windows version. Java, with OpenJDK, can be distributed with source and compiled for specific platforms. I see this as a significant benefit for Java as a plugin. As others have pointed out, the graphic design and animation aspects of Java are not as WYSIWYG as Flash is, yet.
Edited 2008-12-05 00:47 UTC
The specifications for Flash are open.
"Adobe has this morning announced the Open Screen Project. The Open Screen Project is actually open and is designed to push consistent rich Internet experiences across a plethora of devices and varying screens."
The latest release of 32-bit binaries for flash 10 from Adobe had same-day releases for Windows and Linux.
Linux is the ONLY platform that has a 64-bit version of Flash Player 10!!
"Since acquiring Macromedia, Adobe has improved its Flash Player for Linux quite a bit in recent times with the Linux version of Flash being updated in sync with the Windows and Mac OS X versions. For instance, development of Flash Player 10 led to several public alpha and beta releases that brought a number of new features to this platform. The most voiced complaint though about Adobe Flash for Linux is that it's been limited to 32-bit Linux and Adobe has ignored all 64-bit Linux users, but today that has changed. Adobe has started bringing Flash to 64-bit Linux! "
Sorry, but you got that the wrong way around.
Edited 2008-12-05 04:37 UTC
Last time I checked, Java sound was horribly broken on Linux - requiring exclusive access to the sound card.
This just won't fly for browser-based apps - imagine having a tab with a some flash content on it, and finding that this prevented JavaFX apps from playing sound.
I'm not sure if this is actually going to be the case with JavaFX, but if it is, its a loser from day 1.
This is not entirely Sun's fault, but rather the general inability to put a decent sound API into Linux (and no, more layers, e.g. PulseAudio, are not the answer)...
PulseAudio integration for javax.sound
PulseAudio integrations provides all the benefits of PulseAudio to any java application using the javax.sound package.
Edited 2008-12-05 04:50 UTC
Maybe so ;-) ... but at least now it doesn't tie up the sound card.
"PulseAudio is a sound server for POSIX and Win32 systems. A sound server ... allows you to ... mixing several sounds into one ... easily achieved using a sound server. "
Support for multiple audio sources and sinks
It is being distributed now on most major Linux distributions ... recent releases of Mandriva, Ubuntu, Fedora (and as far as I know OpenSuse) all use it now as the default sound server.
Why not? It is the answer they used.
Per-application volume controls
An extensible plugin architecture with support for loadable modules
Compatibility with many popular audio applications
Low-latency operation
BTW ... java on Linux does not necessarily mean "Sun".
Edited 2008-12-05 05:20 UTC
There are examples on the JavaFX web page (). I tried some, but most of them are really, really slow on my MacBook.
And I thought the flash-plugin for mac sucked!
They load really slow ... and work really slow ... and I have artifacts around the browser window's buttons (Mac OSX)..
"the Java and .net platforms have more features in one namespace than all of flex's platform."
Agreed. Flash is great for video, animation, and games. But for real world, fully functional, desktop applications, even with the niceness that is Flex, it falls way short of both Java and .Net. Really, the Java and .Net runtimes/platforms offer huge APIs that cover pretty much everything any application can ever need.
And Java is fully cross platform, and .Net is not (Moonlight will always be a step behind Silverlight).
So JavaFX is really compelling - bringing a full Java Runtime environment with it's rich APIs, being fully cross platform, being fully open, being fully capable of running across all different types of phones, set-top boxes, desktops, laptops, browsers, blu-ray (all to be supported eventually), and now having very rich video/media/animation support that is comparable to Flash or Silverlight.
It did take around 10 seconds on my machine to start up. And this is the effects playground demo. And for some reason it loaded up IN the browser and also separately from the browser, complete in its own window. That was rather odd, I thought..
I don't think so; Flash apps often take quite a while till they start. But I have not given the current version of JavaFX a test run - last time I tried, it was seriously slower than Flash :-(.
Well I would not call the OSX VM the best on earth.
I use a Mac myself for java development and I can see that it is slow on OSX, but believe me running the stuff while not being the fastest on earth is fast enough on Windows.
The OSX VM is seriously lacking. First of all, OSX still is on JDK5 as public VM, and that one has bugs, secondly the JDK6 VM is 64 bit only which means problems in supporting 32 bit based browsers and no SWT for now since Apple decided not to move over Carbon towards 64 bits!
The Windows JDK6 update 10 VM however is a huge step in the right direction. First of all they have decoupled the VM from the browser so that it runs in its own process space. That speeds up things in the browser significantly and applets finally are as fast as desktop apps. Secondly, the core VM is reduced down to a few megabytes and additional parts are loaded on demand. Thirdly with the new VM you can drag and drop applets now to the desktop and restart them again from there, which makes software distribution way easier.
All I can say is Apple has a load of work to do.
Don't get me wrong, Sun is my favourite tech company, but they
1. Spend millions developing, testing, etc the language
2. Spend more optimising the runtime
3. Spend more doing documentation and examples
4. Spend yet more on creating free tools
And how exactly do they get anything back for it? It seems like their plans for generating revenue are always far removed from the source of the cost.
I was curious and compared their share prices last night with IBM, HP, Apple, Google (not all are direct competitors, I know). And the picture is that they don't make much money. They actually lost most of their share value during the last 2 years, and they lost much more than others in the recent crisis. That sucks, but maybe they will finally learn how to do things right and sell some more servers rather than wasting money on half-opened OpenSolaris or half-working JavaFX or half-open Java. They never finish anything they started opening - that's why their "open source" products don't gain much attention - and I believe bringing attention to Sun's servers is the reason for all the software products' existence.
"And how exactly do they get anything back for it? Seems like their plans for generating revenue always far removed from the source of the cost."
That's been my usual perception.
But Jonathan Schwartz has posted their actual recent numbers, and breaking down revenue by category.
It shows that Java, including all the money pours into Java R&D, is actually profitable and growing.
Trouble is for Sun is that their biggest revenue is from hardware, mostly in big iron servers, particularly in the Financial services market (which is currently in the toilet). So they've lost a bunch of revenue from that, and thus the need for big layoffs.
But Java, for Sun, is healthy, profitable, and growing.
Well, from the user's point of view... I don't think I will use apps developed in JavaFX if they use the same Java VM plugin we have right now... the plugin takes some seconds loading, which actually freezes the browser during that time. It's a really bad user experience IMHO; you cannot rely on something like that for common media use, the way Flash is used right now.
Of course, I hope they just fix it... But I'm a bit sceptical, as in all these years the Java plugin didn't get much better besides some native widgets and fewer crashes (again, from the user's point of view)...
Here is a very good JavaFX language tutorial.
They've created an entirely new language, and struck out in their own direction by creating an extremely idiosyncratic view layout description. Flex and Silverlight get it right. Define layouts in XML, code in your platform's preferred language.
So instead of coming up with a nice platform of light weight "webified" widgets, with an XML based view description, and Java based code - Sun comes up with some horrible mishmash of code and layout description that uses a language that's not quite java, and not quite javascript. Wonder-friggin-ful.
The Android platform is a much better example of how to properly go about creating a Java based platform that generates light weight Java apps. It probably wouldn't take much to port it to the desktop.
"Flex and Silverlight get it right. Define layouts in XML, code in your platform's preferred language."
Are you serious? I absolutely detest XML, and it's horribly overused. Thank God the Java Enterprise world is getting away from XML for configuration (being replaced by annotations and "configuration by convention").
Really, XML is just noise, and it's tedious, and ugly. XML's only (good) reason for existence is Web Services.
Using it to describe a UI - that's about as much fun as having teeth pulled.
Sun got it right by *not* following the pack, and not going the XML route with JavaFX, preferring a declarative scripting syntax, which is much, much cleaner, easier, less error prone, less tedious, and much more closely models the actual design/layout of the actual UI.
But if you like XML, more power to you.
Edited 2008-12-05 17:19 UTC
"Using it to describe a UI - that's about as much fun as having teeth pulled."
What exactly do you think HTML is? It's XML. So I guess the entire HTML universe has been wrong this last decade to use XML to describe layout.
So what exactly do you find wanting in this picture:
<VBox width="100%" height="120" borderStyle="solid" borderThickness="1" cornerRadius="5" backgroundColor="#AEAEAE" borderColor="#3b3b3b" >
<TextInput width="100" id="username" />
<TextInput width="100" id="password" type="password" />
<Button label="Login" id="submitButton" click="{clickHandlerFunction}" width="50" />
</VBox>
I cannot imagine a more concise and readable way to describe the layout of an application. In Flex, if you really want to pull your hair out, you can manually instantiate each object, set their properties, insert the children into the parent in the right order, and hook the event listeners - in this example, probably about 30-50 lines of code would result. Or you could write a few lines of XML.
"What exactly do you think HTML is? It's XML. So I guess the entire HTML universe has been wrong this last decade to use XML to describe layout."
I've done lots of DHTML/Ajax/CSS stuff. For me, the weakest part of that equation is the HTML. HTML was never meant to describe UI. CSS is much better for that - cleaner, easier, more efficient. HTML is markup, and is good as an anchor for content, but not great for describing UI.
"So what exactly you do find wanting in this picture:"
I just don't like it. To me, it's noisy, and I find thing like CSS, or declaritive scripting, or even plain old API calls (like Win32, QT, GTK, Swing, Swt, etc) in language of choice, all much better, and less cumbersome.
For me markup is for, well, markup - an anchor for content. For describing UI and presentation, there are much better tools.
But if you like the XML based Flex type stuff, more power to you! Different strokes for different folks.
Well, Flex also does style sheets, so you can move many of the property settings to a global style sheet if you so want - I don't tend to find that helps much, and it can actually make the code a little bit harder to read.
In Flex I typically do a mix of declarative view definition and XML layout. XML is excellent for describing the high level layout of components - and this is not just personal opinion. Compare my simple login box (properly indented) to the declarative equivalent - you understand the XML at a glance - the declarative equivalent, not so much. I don't know how you could argue otherwise. I can throw a brand new programmer (new to the language) at a complex layout, and they just "get" it immediately - visually it's very easy to follow the flow of a layout done in XML.
Sure, from a purist perspective, xml is only "supposed" to describe content - but at a higher level xml describes objects, their attributes, and their relationships - which is a perfect fit for describing a visual layout.
I used to think the way you do - and as a result I wasted a lot of time.
What exactly do you think HTML is? It's XML.
No, XHTML is XML; HTML is not. Both are variants of SGML.
I agree with the earlier poster about XML being bad, and much prefer to define my UIs with code for the following reasons:
1) You stick to a single language, no "half my app is in English and the other half in Swahili" mismatch
2) The compiler will help you catch errors during construction time
3) You can deploy your app as a single executable JAR, no need to have text files lying about the place. This simplifies deployment considerably ('xcopy deployability', on all platforms).
4) Your application can't be broken by someone inadvertently deleting the UI definition from the file system.
7) A properly written Java API can be just as 'declarative' as an XML API (which is why people like XML).
8) You can do calculations in Java for your UI that you can't do in XML.
I too am glad that the XML fad is starting to fade. It really annoys me with Java Persistence (JPA) you can do everything with annotations in code but you still need a small XML file to define the storage manager stuff. This is just bad design.
If you find that you require XML in your application (and it is not a webservice) then I would argue that you should reconsider re-designing it. XML is a pain for others to use.
Good on the JavaFX team for getting the product out.
"No, XHTML is XML, HTML is not. Both a variants of SGML."
Thanks, Mr. Pedantic. The point is that, functionally speaking, HTML is XML - there are some semantic differences between HTML and strictly compliant XHTML, but the concepts involved are identical.
Defining my layout in Flex using XML is almost identical to writing a page in HTML - and I don't think anyone would call HTML a failed model.
"1) You stick to a single language, no "half my app is in English and the other half in Swahili" mismatch"
In flex MXML is a strict XML subset. There is no "language" - I define tags, which have a one to one mapping to visual components in the Flex framework, and I set properties, which have a one to one mapping to properties of those objects.
"2) The compiler will help you catch errors during construction time"
The flex compiler fully validates the xml layout at design time.
"3) You can deploy your app as a single executable JAR, no need to have text files lying about the place. This simplifies deployment considerably ('xcopy deployability', on all platforms). "
Flex compiles to a single .swf - this is really implementation dependent and has nothing whatsoever to do with how the layout is specified.
"4) Your application can't be broken by someone inadvertently deleting the UI definition from the file system"
Flex uses the XML document as a source format and compiles it to ActionScript byte code - the same way it would compile a layout defined in the ActionScript language. Define it in code, or define it in XML, the end result is the same.
Every code generating visual layout tool I've ever used sucked hard. Sometimes I use the Flex visual design tool, but mostly I code by hand, which is pretty easy when the layout is defined in XML.
As mentioned previously the tag names are the UI component names. For example the Flex class TextInput has a tag <TextInput />.
"8) You can do calculations in Java for your UI that you can't do in XML."
Flex allows me to embed code in a CDATA block. I can also embed code or function calls in the tag properties - for example <TextInput width="{parent.width - 30}" />. Though not conceptual very clean, it's damned convenient if you use it sparingly.
"If you find that you require XML in your application (and it is not a webservice) then I would argue that you should reconsider re-designing it. XML is a pain for others to use."
Flex is designed around XML layouts. The Android platform is designed around XML layouts. I use XML layouts because these platforms have first class support for them, and in most cases it makes coding and supporting my applications MUCH easier. Sometimes I define layouts in code, but I usually have very specific reasons for doing it this way.
Hibernate XML config files are a mess? They make perfect sense to me.
They make sense to me as well, but this doesn't mean they are good from a design point of view. Compare Zune to iPod.
I think the problem is you are so familiar with the technology that you can't see the design alternatives. It is possible to do persistence without requiring developers to learn Hibernate's XML mapping dialect, which is my entire point. Why get humans to do work that machines can do themselves? (e.g. BeanKeeper). In the example I chose, Hibernate makes you add entries for trivial mappings that a machine could figure out from your Java domain model. This is just poor design, in my point of view.
So, what I was trying to say is that XML is so familiar to people they never seem to stop and think if there might be a better way to do things.
I keep coming back to Einstein's
"As simple as possible but no simpler"
Is using XML really the simplest way to do things in many cases? Not always.
Edit: typo
Edited 2008-12-06 02:19 UTC
I wasn't aware that I was originally talking about hibernate XML configuration files. I was talking about coding visual layouts in XML. XML may or may not be an appropriate means of specifying configuration for hibernate - I am not exactly sure what this has to do with specifying user interface in XML.
Ok, for the slow learners out there I was trying to respond to an earlier poster who indicated that XML was the best way to configure GUIs. I disagree with this, and with the general malaise of using XML all over the place.
Since some can't think how else it could be done I used Hibernate as an example (contrasting with the simpler pure-Java interface of BeanKeeper). This analogy was intended to show that XML is not the only way things (including GUIs, get it?) can be done, and not even the best way. Do I need to labour the point any longer? I was just trying to point out the blinkered vision of those who believe XML is the "one true way" of doing things (it's not, and is quite a poor design model, and it is one of the reasons why people find Ruby+Rails etc so much more efficient to work with).
"Hibernate XML config files are a mess? They make perfect sense to me."
They make perfect sense to me too, but I like annotations much, much better.
And for GUI, I prefer a declarative scripting syntax, to describe the GUI, over XML.
In both cases (annotations, JavaFX), I find the alternatives to XML simpler, cleaner, and more intuitive.
And another poster made the point that if you describe the GUI in XML, or you config your Hibernate mappings in XML, then the compiler can't catch any of your errors in the XML. If you put that stuff back in the regular code (where it belongs, IMHO), then the compiler can help you. To me, that's huge.
In Flex, if you really want to pull your hair out, you can manually instantiate each object, set their properties, insert the children into the parent in the right order, and hook the event listeners - in this example, probably about 30-50 lines of code would result.
This is exactly the kind of thing that JavaFX saves you as compared to Swing....
Anyone else notice that the argument against browsers was that they're all owned by a big company who competes in the mobile market? While this is true for IE, Safari and Chrome it's not true for Opera. But, Opera is closed source, so OK. What about Firefox? "Firefox is owned by Google," says the article.
Did somebody forget to tell the Mozilla foundation?
I guess it depends on how close one considers 85% of their revenue to be.
Edit: That was for 2006. In 2007 it was up to 88% according to their 2007 financial FAQ. That's getting pretty close to "all", I'd say.
Edited 2008-12-07 21:03 UTC
The article assumes that the Mozilla people lie when they say that they don't allow Google's money to have undue influence on what they do. I believe them, the article says "Well you can't believe them and must assume that they dance to any tune google chooses to play." This is inaccurate and unfair. While I'm sure they take their relationship with Google in to account in some decisions, they certainly don't let Google dictate the direction of their projects!
So no, Google does not own Firefox in any sense!
They do. It might not have been tested yet, because the relationship between Google Inc and Mozilla Corporation has been mutually beneficial.
But any company which gets 88% of its revenue from one source is going to be heavily influenced by that source. If the execs claim they are not, they are lying or fooling themselves. The community could stand to update its views on exactly what Mozilla Corp is. Which is not to say that it is evil. But it is not exactly the same thing as it was when the Mozilla Foundation started out.
You say they are influenced, they say they are not. Who do I believe? They are in a position to know exactly how much influence Google has, you are not.
I don't claim there's no consideration given to Google's wishes, but I don't see any undue influence. They direct people to Google's sites by default; big deal! If Google comes back and demands they change something in the browser as a result of this I am sure they'll tell Google to take a hike. I'm sure Microsoft, Yahoo, or any number of companies would be interested in having the same relationship Google now enjoys.
You're saying, and the article was saying, that no matter what they /say/, they will do exactly as instructed by Google. I just don't believe it, I see no evidence for it, I see no reason why they would feel they need to.
The article should not just assert this, nor should you assume it, unless you can *prove* it! I don't take "it's obvious" as proof. Is there a shred of evidence that Mozilla the corporation has ever altered Firefox at Google's direction any more than at the request of anyone else? I don't know of any such evidence; if you do, supply it. If you don't, cease your libelous comments.
Java will allow seamless and easy integration with JavaFX. It will be very interesting to see how JavaFX turns out. Declarative languages have some really nice properties. It is good that JavaFX sticks out by being declarative. And Java is installed on more than one billion devices. I think this can be really big, if it is as good as it seems.
This kinda irks me... OK, it's better for those coming from C++, for example, but a strongly justified design decision from Sun (going for interfaces instead of allowing programmers to extend more than one base class) seems oddly reversed here, and I am not clear about the benefits it allows or what exactly the problem with interfaces is.
It is kind of an unneeded discontinuity from Java that JavaFX brings... maybe minor, but still weird...
Edited 2008-12-06 13:48 UTC
I totally get the idea of JavaFX and where this is going...
One thing: I grabbed a few "demos" like the famous "Clock" and found that CPU usage went through the roof...
I just downloaded NetBeans 6.5 and ran the displayShelf test app. When it was sitting idle it was using 0%. As soon as I started flicking between the images (just like coverflow) the CPU went up to over 90% at one point.
I'm running this on a iMac 2.16GHz with 2GB ram and Leopard 10.5.5...
Is anyone else getting this kind of performance?
I'm seriously considering doing a Boost.Python extension for the FLTK cross-platform GUI (fltk.org). This is a big job but I'm able to devote the time.

My qualifications are a very good working knowledge of Python, a hand-coded Python extension of the SQLite database-engine library (sqlite.org), and a crude but working Boost.Python implementation of SQLite as well. FLTK is implemented in C++ and has a well-documented C++ API. I got a basic FLTK window up and running in Python with Boost.Python in only a day's work (95% of that time learning Boost.Python).

I'm doing this work on the FLTK V2.0 CVS tree. Just a few days ago the FLTK folks restructured the FLTK 2.0 source code to use C++ namespaces -- they are really taking C++ seriously. So it seems that a Boost.Python extension of FLTK is a "natural" fit. (I say "seems" because any project like this will have lots of forests with many hidden trees.)

I know I'm going to need some help with this. So I would appreciate feedback: are my inevitable Boost / Python questions appropriate for this mailing list? [To the FLTK gurus: I may have a question or 2 for you as well; is that appropriate on the FLTK development newsgroup?]

Regards,
Bill Trenker
Kelowna, BC, Canada

(This message has been CC'd to the FLTK development newsgroup @)
How To Guarantee Malware Detection 410
itwbennett writes "Dr. Markus Jakobsson, Principal Scientist at PARC, explains how it is possible to guarantee the detection of malware, including zero-day attacks and rootkits and even malware that infected a device before the detection program was installed. The solution comes down to this, says Jakobsson: 'Any program — good or bad — that wants to be active in RAM has no choice but to take up some space in RAM. At least one byte.'"
At least one byte (Score:3, Insightful)
While it might be true that any application will take up at least a byte of memory, there is no reason malware couldn't masquerade as another binary down to the exact number of bytes.
Hell, Windows is a whole slew of malware that masquerades as the whole OS.
Re:At least one byte (Score:5, Funny)
Re:At least one byte (Score:5, Funny)
While it might be true that any application will take up at least a byte of memory, there is no reason malware couldn't masquerade as another binary down to the exact number of bytes.
Oh see he didn't finish explaining.
Any program that wants to be resident has to occupy at least one byte of RAM. And that byte should include the Evil Bit, which all malware should set. Then your anti-virus program just checks the Evil Bit and problem solved!
Re: (Score:3, Informative)
You've been modded funny, but this is exactly what this "article" is all about.
If there is no malware in RAM, the results will be the expected result. [...] Or there could be malware in RAM, and the checksum will be wrong. [...] Or malware could divert the read requests [...] . That would result in the right checksum... but a delay.
Or, there could be malware in RAM, not diverting read requests, and the checksum will be the expected result, and without a delay.
The only problem with Markus Jakobsson grand theory is that all malware are of that last kind.
Well, all malware since the memory protection era. I suppose his "product" could work for DOSes (but there is no multitasking there) Windows 3, MacOS9, AmigaOS and some embedded OSes.
And if the malware does vi
Re: (Score:3, Insightful)
I think you missed the following parts:
Instead of looking for known patterns -- whether of instructions and data, or of actions -- wouldn't it be great if we could look for anything that is malicious? That may sound like a pipe dream.
Not to me.
[...]
This tells us a few interesting things. We can guarantee detection of malware. And that includes zero-day attacks and rootkits.
Even with your interpretation:
We can even guarantee that we will detect malware that infected a device before we installed our detection program.
You can't detect known malware that way if it virtualizes the computer, because you will only scan for the memory the malware is willing to show you.
By the way, the following assumption is unworkable:
Assume now that we have a detection algorithm that runs in kernel mode, and that swaps out everything in RAM. Everything except itself.
You can't swap out many parts of the kernel.
And I'm pretty sure kernel space parts of a rootkit won't let themselves swap out. Which does not mean uncooperative kernel modules are malware. If you're swapping out the disk driver, how will you
Re: (Score:3, Interesting)
You can't detect known malware that way if it virtualizes the computer, because you will only scan for the memory the malware is willing to show you.
Ah, there's where the bit about "knowing how much RAM you have" matters.
The virus has three choices:
1) Be overwritten, thus being eliminated (and showing you all of the RAM in your system.)
2) Swap part of what you're writing to disk.
3) Present less RAM than you actually have.
If you know how much RAM you have, you can detect choice 3.
If you can detect latency between secondary storage and RAM, you can detect choice 2.
If the virus doesn't mind disappearing for the rest of the computer's runtime, you can mitig
Re: (Score:2) isol
Re: (Score:3, Informative) isolation between the running system and the verifier.
I'd rather just secure my systems, thanks.
Re: (Score:3, Interesting)
I'd rather just secure my systems, thanks.
How would you do that, then? There are so many clever rootkits these days, and no OS is "immune" (although SE Linux may be, if correctly configured; configuration is hard). Hardware-based rootkits (malware in the firmware of peripherals that attack an already kernel-mode driver process) are very difficult to defend against; you can only really hope they get enough bad press before they get you.
The only way to "just secure your system" against increasingly clever attackers is to have a safe partition, one
Re: (Score:3, Insightful)
There are several weaknesses. Note how he says something about an external verifier, which is delay-sensitive. Note how for the first 2 steps he keeps repeating "malware may of course interfere but this doesn't matter because", and then he stops considering malware interference. That's because at those points, malware interference would be fatal.
Of course, malware could simply take over the entire procedure, computing the keyed hash itself (a process which can run in a lot less memory : it doesn't actually
Re: (Score:2)
Oh, and let's consider that a virus writer would know the basic fundamentals of memory management and could report back fewer RAM pages than expected. After all, we're depending on the OS memory allocator to provide the correct number of pages, paged/nonpaged, etc.
In other words, we could alter the VM page table and not report all physical pages to the OS / "Guaranteed Virus Detection"
Another problem to consider is that often programs are compiled with buffer overflow detection, which writes a pattern of bytes before
Re:At least one byte (Score:5, Informative)
The external verifier would notice and conclude that the device must be infected. Or malware could divert the read requests directed at the place it is stored to the place in secondary storage where it stored the random bits meant for the space it occupies. That would result in the right checksum... but a delay. This delay would be detected by the external verifier, which would conclude that the device is infected.
Why a delay, you ask? Because secondary storage is slower than RAM. Especially if the order of the reads and writes are done in a manner that intentionally causes huge delays if diverted to flash, hard drives, etc.
There's more details in RTFA.
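For readers who want to see the shape of the protocol being described, here is a toy sketch. Everything in it is made up for illustration (RAM is modeled as a byte buffer, all function names and the timing budget are invented); a real scheme would also have to account for caches, DMA, and clock precision.

```python
# Toy model of swap-fill-hash attestation: the verifier fills all free RAM
# with pseudorandom bits, then demands a keyed hash over every byte and
# checks both the digest and how long the device took to produce it.
import hashlib
import hmac
import os
import time

RAM_SIZE = 1 << 20  # pretend the device has 1 MiB of RAM


def fill_with_noise(seed: bytes, size: int) -> bytes:
    """Verifier-chosen pseudorandom bits that overwrite all free RAM."""
    out = bytearray()
    counter = 0
    while len(out) < size:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:size])


def device_respond(ram: bytes, key: bytes) -> bytes:
    """Honest device: keyed hash over every byte of RAM."""
    return hmac.new(key, ram, hashlib.sha256).digest()


def verifier_check(key: bytes, seed: bytes, response: bytes,
                   elapsed: float, budget: float) -> bool:
    expected = hmac.new(key, fill_with_noise(seed, RAM_SIZE),
                        hashlib.sha256).digest()
    # Wrong digest -> some bytes refused to be overwritten.
    # Too slow -> reads were diverted to (slower) secondary storage.
    return hmac.compare_digest(expected, response) and elapsed <= budget


key, seed = os.urandom(16), os.urandom(16)
ram = fill_with_noise(seed, RAM_SIZE)  # device lets its RAM be overwritten
start = time.monotonic()
resp = device_respond(ram, key)
elapsed = time.monotonic() - start
print(verifier_check(key, seed, resp, elapsed, budget=5.0))  # → True
```

The point of the timing check is the second branch of `verifier_check`: an attacker who diverts reads to flash or disk pays a latency penalty on every access, which is what pushes `elapsed` past the budget.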
Re: (Score:3, Insightful)
I did, and he doesn't say anything about this point.
Regarding making a keyed hash of the entire memory content, how would that even work? Every program modifies its memory all the time. Then there are programs like copy protections and Skype etc. that modify their own code in real time too.
Refuting the imaginary article in your head (Score:5, Informative)
Re: (Score:2, Interesting)
I'm glad you guys have managed to work out what the article says.
I have one glaring problem with this system, and all other systems designed to detect running malware: no focus on prevention. I'm glad we have a new tool to detect malware executing on a machine that's already compromised, but that's what all of the new tools I read about intend to do. I don't see much progress being made in terms of the design decisions and best practices that prevent (Windows) machines from getting compromised in the f
Re:Refuting the imaginary article in your head (Score:5, Insightful)
Protection from malware should function like the immune system, with many lines of defense and many avenues of detection and counter attack. Prevention will never be perfect by itself.
Re:Refuting the imaginary article in your head (Score:5, Funny)
Wrong! Abstinence is the one and only preventative answer!
Re:Refuting the imaginary article in your head (Score:5, Insightful)
After reading TFA I'm still not seeing how this is supposed to detect unknown malware.
As far as I can see it would decide that a new install of any kind was a virus.
Sure, if you know every program which is supposed to be installed and none of them do weird things in memory (a big if), then you might be able to spot when some kind of change has been made. But if you can do that, then you have a situation where you might as well just re-image the machine from ROM every now and then.
I don't see any amazing new ideas in TFA
Re: (Score:3, Interesting)
If you have that much control over what's installed on the system then you might as well just take a snapshot of the system running in a safe state, place any data which needs to change on some kind of hardened remote server(like a seperate database) and just periodically re-image the whole machine from ROM.
Probably be more effective than this proposed system as well.
Re: (Score:3, Insightful)
"As good as this technique may or may not be, it's not security."
Ehhh, you could have been a sailor. Back in the day, we had "security alert" drills, to deal with "intruders" all the time. But, that's all we ever had, were drills. The REAL security was focused OUTSIDE! If some boat, ship, helicopter, or even a frogman came close to our ship, we just blew them out of the water or sky. There was never a need to look for an intruder INSIDE the ship!
The most likely time and place any intruder might get abo
Re:Interesting point (Score:4, Insightful)
I have a hard time believing a technique that starts with "swap out everything in RAM" could ever be used for real-time detection.
Re:Refuting the imaginary article in your head (Score:5, Insightful)
I'm not an expert and not sure if I'm missing something obvious here but what is confusing me is the part about "swap everything out except the scanner". Wouldn't you then just be moving the malware too? Into a protected space that you then have to scan and know what to look for?
If it's a zero day infection then you don't know what to look for and you swapped it out of memory for nothing really. I do get that if it tries to protect itself it will look suspicious but what if it looks like a normal program? A service or scheduled task that could be normal. What if it takes on the guise of an adobe update program in size/hash and function until it is time do act? Say a slight change to your systems dns entries. Then goes dormant again.
This may not be possible but I haven't seen why not and it leaves a pretty big hole for zero day infections that this method claims to be able to catch 100%.
Re:Refuting the imaginary article in your head (Score:5, Insightful)
If Dr. Markus Jakobsson has solved that, it is strange he hasn't also announced that he has solved the halting problem.
Figuring out whether a bunch of bits is or isn't malware is going to be as hard as (or harder than[1]) figuring out whether a program given a particular input will halt or not.
[1] Especially when that bunch of bits might later on download a new bunch of bits. Yes I know in theory it is impossible to solve the halting problem (no general solution), but in practice some special cases can be solved.
In contrast it's harder to solve the malware problem when you assume they can fetch new instructions and data, and from an active hostile party.
Sandboxing is the way to go. Sandboxing is analogous to avoiding the halting problem by having a time limit on the program.
No need to figure out whether the program is malware or not. Just make sure it can't do what you don't want it to do. Let the OS sandbox it.
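The time-limit analogy can be made concrete. This is an illustrative sketch only: a real sandbox also restricts syscalls, files, and the network, not just CPU time, and the function name here is invented.

```python
# Sidestep the halting problem the practical way: don't decide whether the
# program halts, just run it in a child process and kill it after a budget.
import subprocess
import sys


def run_sandboxed(code: str, seconds: float) -> str:
    try:
        result = subprocess.run([sys.executable, "-c", code],
                                capture_output=True, text=True,
                                timeout=seconds)
        return result.stdout.strip()
    except subprocess.TimeoutExpired:
        return "KILLED: exceeded time budget"


print(run_sandboxed("print(2 + 2)", 5))      # → 4
print(run_sandboxed("while True: pass", 1))  # → KILLED: exceeded time budget
```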
Re: (Score:3, Insightful)
From TFA:
His “solution”:
Create a sandbox consisting of all available RAM which definitely contains no “good” programs exc
Re:Refuting the imaginary article in your head (Score:4, Insightful)
Suffice it to say, you haven't understood it yet.
I think these people have all understood the article quite well, and are pointing out real flaws with this scanning method.
The #1 flaw is the assumption that an exact byte count of what is running can be known if malware is also running on the system.
If malware is actively running, then if scanning code calls any outside functions, the results must be considered tainted. Since there is no way for code that does not query the OS to even guess about what else is loaded in RAM, sufficiently intelligent malware will be able to hide itself from any scan. Hell, you can't even determine how much RAM is in a computer running in x86 protected mode without calling some OS or BIOS function, either of which can be hooked by the malware.
One of the other assumptions is that the time it takes to compute a hash of RAM on a particular machine can be known with enough precision to detect that it is being "delayed" by malware.
Last, there is one piece of code that can never be swapped out to disk, and will likely show up as malware as it refuses to be overwritten: the code that swaps and restores pages to/from disk.
Re: (Score:3, Insightful)
No, YOU are the one who doesn’t understand the concept of what is happening.
Looking for “space” that is occupied by malware in this manner will only detect malware that attempts to hide itself. There are multiple ways of defeating this scan, not the least of which is simply to not attempt to hide, allowing itself to be swapped out of memory like any other application.
In which case, you would not be “guaranteed” to detect all malware, because you’d be scanning it in the sw
Re: (Score:3, Informative)
The article never mentioned scanning for signatures
Read the damn article. I already quoted the part in question:
I.e., scan for its signature. That is how conventional antivirus works, and it is how his idea falls back to working if it doesn’t detect anything that’s trying to hide. So if it doesn’t detect anything trying to hide, it has to scan for it conventionally.
This is just an elaborate scheme to ensure
Re: (Score:3, Insightful)
Ok, so you want serious criticisms?
1. They assume that the read back from external storage doesn't overlap with computation. This is not true of any DMA-based transfer that is asynchronous. This breaks their argument as I can hide the time to do a memory / storage swap by using DMA during calculation of the hash of a different region.
2. It is brittle. It depends on a very narrow margin between computation time and retrieval time. There are many things that could cause this margin to change but their complet
Re: (Score:3, Insightful)
See people? If you read the article, you can offer cogent criticisms. If you don't you can offer irrelevant criticisms you will then have to spend the next several hours massaging and defending.
I think number 4 is the most cogent. The author claims his system can detect 0-day malware that was on the system before the scanner was installed. Maybe, if the malware tries to interfere. But if it doesn't, you have no signatures or checksums to fall back on, how could this system work?
Re: (Score:3, Interesting)
It's getting kind of boring explaining the article over and over again.
You mean the part they say they don't even need to start with a clean machine?
Re: (Score:2)
So the malware will just let itself be swapped out too? This is especially true if it's running inside another process, because, you know, if the parent process is getting swapped it can't just continue running there. It will get swapped too.
Even if it did run as its own process and were actively defending its own memory, wouldn't it be quite trivial to detect that everything is suddenly getting swapped out and go into "sleep" mode?
Re: (Score:3, Insightful)
Unless, of course, the infected system is lying to you about the memory allocations.
Re: (Score:3, Insightful)
If we start from a known clean machine
By doing this, there's really no reason to run any high-tech malware detector, as this assumes you have two machines that are identical in every way except one might have malware while the other is known to be clean. If that's the case, just clone the one that is known to be clean and you now have two known clean machines.
In other words, there is no way to use an external machine to assist you in determining things like memory used by a process because you can't have both a clean machine and one infected
Re: (Score:3, Insightful)
"Still haven't read the article, eh?"
I did. The relevant parts:
"Any program -- good or bad -- that wants to be active in RAM has no choice but to take up some space in RAM. At least one byte, right?"
More news from Captain Obvious Dept.: any program -- good or bad -- that wants to be resilient between cold bootups has no choice but to take up some space in persistent storage. At least one byte, right?
"All we need is the help of an external verifier that knows how much RAM a device we want to protect has,
Re: (Score:2)
That is a different point. And also not valid. If the malware lets itself get swapped out, it can not interfere with a scan.
Re: (Score:2)
If the malware gets swapped out it won't be detected in the scan. Which was sopssa's original point at the top of the thread.
Re: (Score:3, Interesting)
Yes, and what happens when everything gets swapped back? There's the malware again.
If you assume it would be easier to find and delete the malware when it's not resident in memory anymore, then you could do that just fine on another computer and working directly with the hard drive. You would need to do that anyway, since your RAM is swapped out. AV's are already quite integrated in to the system, never can delete anything (so you would need to do it by hand) and in worst case scenario could do the deleting
Re: (Score:3, Informative)
If the malware lets itself get swapped out, then it can't hide its memory footprint. Assuming we have started from a known clean machine, it is then trivial to figure out what the memory footprint should be. If it is larger than it should be, there is swapped-out malware.
The point is, the malware will be detected whatever it chooses to do.
Re: (Score:2)
If the malware lets itself get swapped out, it can not interfere with a scan.
No. It can not. But you still have no way of guaranteeing that, even if the scan is not interfered with, it will find the malware.
In fact, I will guarantee that no scan will detect 100% of malware, even if the scan isn’t interfered with (by masking the malware in RAM or on the secondary memory).
Re:Refuting the imaginary article in your head (Score:5, Informative)
But it does seem to be foolproof if you start from a known clean machine.
That’s the whole point of his system: to scan from a machine that isn’t known to be clean.
The standard way to scan from a “known clean machine” is one of the following:
1) Use a known-clean boot CD. This ensures that the system RAM is clean. Once booted into the clean system, scan the possibly-infected secondary storage devices.
2) Use a separate known-clean computer. Its system RAM is clean. Attach the possibly-infected secondary storage devices from the first computer to the known-clean computer and scan them.
The 3rd way, which is not foolproof, is to attempt to avoid a boot CD or second computer by scanning the system RAM. Once the system RAM is known to be clean, you can proceed from there as in either 1 or 2 above. However, there is no foolproof way to ensure that the RAM of a running system is clean.
He proposed a “foolproof” way to ensure that the RAM of a system is clean (without shutting it down and booting it from a clean boot device). That’s the only thing that makes his suggestion useful.
However, it isn’t foolproof, and even you see that.
FWIW, RootkitRevealer [microsoft.com] uses exactly the same method, so it’s not like this guy came up with anything novel. In fact, RootkitRevealer even has the disclaimer:
It only finds things that are trying to hide. If they aren’t trying to hide, you have to scan them conventionally.
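The cross-view idea behind RootkitRevealer can be sketched in a few lines. This toy version just diffs two sets standing in for a high-level API scan and a raw on-disk scan; the file names are hypothetical.

```python
# Cross-view detection in miniature: anything visible through the raw
# channel but missing from the high-level API view is being hidden.
def cross_view_diff(api_view: set, raw_view: set) -> set:
    """Entries visible raw but absent from the API view are suspicious."""
    return raw_view - api_view


api = {"notepad.exe", "explorer.exe"}                   # what the OS reports
raw = {"notepad.exe", "explorer.exe", "rootkit.sys"}    # what's really there
print(sorted(cross_view_diff(api, raw)))  # → ['rootkit.sys']
```

As the disclaimer above says, this only catches things that hide: malware that appears identically in both views produces an empty diff.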
Re: (Score:2)
I would also wonder about false positives on shareware, poorly written apps, custom corporate apps, etc.
Re: (Score:3, Insightful)
As near as I can tell, the article makes a HUGE assumption: the malware is actively trying to hide itself. This is not an unreasonable assumption, but it makes everything more clear. Let's go through his steps:
1) The algorithm swaps out memory.
2) The malware decides to stay active in RAM.
3) Random bits are written to memory.
4) Optional: The malware masks its presence by falsifying the memory reads.
5) A bad hash or delay reveals the presence of something trying to stay hidden.
For the special case that malw
Re: (Score:2)
Virus scanners figured this out years ago, this is why they scan the operating memory!
The difficult part is finding out which "bytes" are bad. The problem is many elements of spy tools are often used for good too. Like VNC and all of those legitimate screen capture and key logger programs for IT.
Re: (Score:2)
It doesn't need to do even that.
They forgot that malware code can reside inside another process and its memory space, in which case comparing and writing random bytes to free RAM is a moot point.
So you mean his idea can only deal with "Computer Germs" but not "Computer Virus?"
Theory and Reality (Score:5, Insightful)
Seriously, how could this possibly work for ALL (including undocumented, and hereto unknown) threats? And if it does it by reading straight from RAM (through the kernel), wouldn't a rootkit be able to trivially defeat that?
Re: (Score:2)
Re: (Score:3, Insightful)
His whole point was not "this is how you should do it", it was "you could do this, and because you could do this it shows that it's theoretically possible". This is a variant of what is known as a gedanken experiment -- an argument that proves or disproves some fact while not actually being something you would want to carry out. For example, you could suppose that you could measure the force a field is under by running a pole from the earth to the moon and pushing slightly on it. Not that you want
Re: (Score:2)
Yeah, coming up with a reliable virus detection scheme for unknown viruses is pretty much in the same area as the halting problem.
Even detecting polymorphic viruses has been proven to be NP-complete.
Re: (Score:2)
Re: (Score:2)
A rootkit that is AWARE of this detection mechanism ought to be able to defeat it easily by just overwriting the computed and expected keys in the detector's memory space with a random number.
LOL, good point.
Re: (Score:2)
I read the article and I wasn't convinced. I don't think one can guarantee malware detection. Any detection approach has false positives and/or false negatives. Typically we err on the side of false negatives, while some other approaches (host-based IDS-type approaches) err on the side of false positives.
The method addressed here does not deal with all possible attacks, but only the problem of malware interfering with the scan. Hence even with such a mechanism, all you can use it for is guaranteeing the
Wrong from the getgo! (Score:2)
Not only that, but his initial premise is already wrong! Most people conceptualize a program like an application - it's launched, loads into memory, and then does stuff. And while that's typical, it's a grave mistake to think that's the ONLY way to go!
Off the top of my head, I can think of registering malware as a callback handler for a system event. In this case, you have an infected computer without any code running at all, in a context and namespace different from running applications!
Windows just wasn't
Re: (Score:2)
what if your external verifier was hardware based? build a little device with hardened rom and bios, give it a usb interface, or maybe even something proprietary - let the detection take place off-board.
Re: (Score:2)
So it has to be in RAM (Score:2)
Does it have to be in RAM? (Score:3, Informative)
It could be in ROM.
It could be in processor cache.
It could be in the video card's memory.
Could it be in pipelined instructions waiting to be executed?
Re: (Score:2)
Processor cache would eventually result in execution, which takes place in RAM. Video card memory would just do funky things to the display. A pipelined instruction would also eventually end up in RAM once it was executed.
Perhaps the key here is "actively executing" malware. I suppose it can lay d
Re: (Score:3, Funny)
The hard part is actually finding it.
That reminds me of a signature I've seen around here (Sorry, I don't remember who was using it)
cat /dev/ram | strings | grep llama
OMG, my RAM is full of llamas!
In case anybody was wondering... (Score:5, Informative)
Theory and hand-waving (Score:4, Interesting)
<sarcasm>Punting the problem to an "external verifier" is pretty neat. I wish I could do that with my next hard problem.</sarcasm>
That whole bit about swapping, though.... If I write malware and hide it somewhere in execution space, do I really care if it gets swapped out? So the code that steals keystrokes or sniffs for credit card numbers doesn't get executed for a short while. Big deal. At some point it will get loaded again (if written properly, that is).
Or am I missing something obvious?
Re: (Score:2)
If you want it to work, with a software update, on today's general purpose x86 office boxes, your "external verifier" might as well be a magic pony that sneezes rainbows and poops out the factors of any arbitrarily large primes that you feed it. Not Happening.
On the other hand, if your target is "Paranoid embedded architectures, 2-5 years from now" you can posit pretty substantial hardware changes at onl
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
This proposal isn't to detect what malware is present, or to remove it. It is only to detect that there is some malware present, which can then lead to more thorough scanning to detect and remove. Knowing that something is there is half the battle.
Re: (Score:2)
No, he claimed in the article that:
> This tells us a few interesting things. We can guarantee detection of malware.
> And that includes zero-day attacks and rootkits. We can even guarantee that we will
> detect malware that infected a device before we installed our detection program.
To me, _guaranteeing_ detection of malware (especially zero-day) is similar to solving the halting problem (without having the source code and knowing all the p
Re: (Score:2)
Detecting the malware depends on the malware trying to stay in memory. My point was that "properly written" malware wouldn't necessarily care if it is was swapped. Allow the swap, get a clean bill of health from the "external verifier," then get reloaded and continue Bad Activities. Downtime for the malware is negligible.
Re: (Score:2)
Finally (Score:2)
A valid criticism. And if the malware is actively resisting the scan, by moving the random bits back in from secondary storage before the hash, the external verifier knows about it because it takes even longer. By design. So, unless you are running a load balanced cluster and can afford to take a server offline for a few minutes when you want to scan, yes, this is a problem with this approach.
Re: (Score:2)
Punting the problem to an "external verifier" is pretty neat. I wish I could do that with my next hard problem.
It may be worth doing right. Look for malware from a hypervisor (memory, disk, network, etc.). Running this all inside the insecure machine is just asking for trouble, though, but is the best currently available. But even today there are CPUs shipping without virt support, so this can't be done for every machine yet or for a while. Still, I think many would spend the extra $50 if it worked wel
Still a needle (Score:5, Insightful)
A needle in a haystack takes up roughly the same amount of space as a straw - that doesn't make it any easier to find (indeed, that's part of the reason it's so hard to find).
Even if this technique has merits, it does nothing to correct the primary reason for computer infection - stupid users.
Re: (Score:2)
Even if this technique has merits, it does nothing to correct the primary reason for computer infection - stupid users.
As with most things in life, stupidity is the leading cause of problems.
Except death. I think god has a monopoly on causing death in that department. (Take that last sentence however you will. Just remember: however you take it is how I meant it.)
Re:Still a needle (Score:5, Insightful)
I think god has a monopoly on causing death in that department. (Take that last sentence however you will. Just remember: however you take it is how I meant it.)
You're sleeping with your mother? Gross.
Which one is the detector? (Score:5, Insightful)
How about a malware that masquerades as this detector and reports the RAM checksum is OK?
Re:Which one is the detector? (Score:4, Insightful)
Detecting a malware detector is just as hard as detecting malware. In general, detecting software of a specific type is halting-equivalent. In practice, the goal is to take shortcuts so that your adversary has a halting-equivalent problem and you don't. At present, the malware authors are winning. If we could force them to detect the malware detectors, that would be a huge advance.
My skepticism about this is the obvious one: what if the malware just lets itself get swapped out, and relies on stealth to survive the process?
"Guarantee" (Score:5, Insightful)
Re:"Guarantee" (Score:5, Funny)
Re: (Score:2)
I guarantee there's at least one thing that can be guaranteed.
Re: (Score:2)
I guarantee there's at least one thing that can be guaranteed.
You would be wrong about that.
Re: (Score:2)
I guarantee you're going to die someday.
-- Not a death threat.
Re: (Score:2)
Can you guarantee that?
Okay (Score:2)
And what if the malware lets itself be swapped out of RAM the same as all of the other apps?
I'd love to have an approach to malware that could always detect unwanted processes, I'm just trying to find holes here.
False positives? (Score:2)
Yeah you can detect that SOMETHING is there, but how do you determine whether that something is supposed to be there or not?
If you assume all "somethings" are not supposed to be there, you'll have a worse situation than UAC with users being prompted all the time and getting conditioned to click "yes".
After reading the article, it seems no different from doing an offline scan using ClamAV from a LiveCD except maybe slightly more convenient. You boot a "secure" detection mechanism in place of whatever is nor
Since I actually read the article (Score:2)
I note that he seems to have missed a rather obvious possibility: there's malware in RAM, but it allows itself to be swapped out with all the other processes. Why wouldn't it? If it got loaded into RAM once, it'll get loaded again by the same vector. In fact, it has to rely on that happening, since at some point the RAM is going to be physically powered down. There's no point in trying to dig in like a tick.
So as far as I can see, his magic technique will only catch malware that attempts to protect it
So what I'm getting... (Score:2)
That and some test for checking behavior.
The problem which he doesn't seem to resolve is, "How do we know everything that isn't malware?". I mean, I see that he goes on about running this using only in kernel mode, so there should only be kernel memory in RAM, but what if the malware exists hooked in (as many seem to be that I've found) in o
Ludicrous (no, not the rapper). (Score:2)
Ah ha! All you need is a kernel-mode algorithm that knows exactly what should be in RAM, at all times! In other words, it has to emulate all of the legitimate software you’ll use, because otherwise how would it tell the difference between legitimate software’s use of RAM and malicious software’s use of RAM?
What an idiot.
won't work (Score:2)
either
* the malware is in the kernel, in which case it can provide a false checksum of the memory to the external verifier
or
* the malware is in userspace in which case it gets swapped out, the verifier determines there is no malware in the system, then it gets swapped back in and carries on performing its malicious activities with its user privileges
Ok, so you have verified there's no malware in RAM (Score:2)
If the malware is
/sbin/halt, you've still got a problem.
Common mistake... (Score:2)
Everyone who claims to solve a long-standing problem "guaranteed", does not know all the possibilities that could thwart their solution. Guaranteed.
Easy (Score:2, Flamebait)
What I don't get. (Score:2)
Is why Microsoft didn't make the OS files read-only way back when?
Make the user give explicit permission to overwrite system files?
It wouldn't make it impossible to get malware, but it sure as shooting could make getting rid of it easier.
Re: (Score:2)
Time to market.
Malware detection is Bogus. (Score:5, Informative)
How about we change things in Windows so it actually prevents infection in the first place?
1. Educate users. Microsoft does a piss-poor job of this.
2. STOP DEPENDING ON 3 MAGIC LETTERS TO DETERMINE IF SOMETHING IS CODE OR DATA. COME ON, SERIOUSLY. THIS SHOULD HAVE DIED WITH CP/M.
3. Kill ActiveX - I know of no legitimate website besides Microsoft.com that requires ActiveX.
4. If a file comes in from the outside world - STRIP ITS PERMISSION TO EXECUTE. MAKE THE USER UNPACK IT FROM AN ARCHIVE OR SET ITS PERMISSION.
Really. Seriously.
No, the above won't cover every situation, but it's a pretty good start.
--
BMO
Re: (Score:2)
No, the above won't cover every situation, but it's a pretty good start.
You say those as if Microsoft isn't aware of the problems with their design decisions. They by-and-large don't bear the costs of their poor security but would bear additional costs if people had to learn new ways of interacting with files (support costs, engineering, etc.).
So far they're right - their market position hasn't been adversely affected by the malware crapfest they've foisted onto their users. A cookie for he who figures ou
register (Score:5, Interesting)
Some amazingly bad assumptions (Score:5, Insightful)
Sure, malware has to occupy memory. That doesn't mean it has to be its own memory. Buffer overflows are all about corrupting another application's memory space.
His basic argument is that if you want to scan RAM, the kernel can halt all processing except its RAM scanner, and have a go at the RAM safely. If it's particularly insidious malware, it'll try to hide itself in various ways, one of which would be to masquerade the portion of RAM it was using with something legitimate looking (maybe erase that portion of memory). But you know it did this because you can see that memory which was supposed to be free is no longer free. Except the hardware has no concept of free or occupied memory. It just has memory, and the OS keeps track of what's free and not. The OS - the same space where malware is running.
OR, the malware could simply not do this, then its behavior is no different from any legitimate program. So how do you detect it now? You still need definitions that say, "When running in memory, this virus looks like X," then look through memory for that pattern.
Besides, who's to say that the kernel space is guaranteed free of malware itself? Even if you would have successfully identified the threat in RAM, you have no guarantee that the malware hasn't corrupted the identification routine.
It's like someone came along and said, "Hey, you guys are looking for malware wrong. You have to look for it! And I mean really look for it!"
Wow (Score:2)
Someone has discovered the white-list.
Please take a number and stand behind the perpetual motion people. When I'm done with them, I will explain the few finite cases where this method DOES work, and you can assume that in the infinite number of OTHER cases, this method does NOT work.
Its easy. (Score:2)
2) report that there is malware installed
Redeeculous idea. (Score:5, Interesting)
I tried reading TFA a few times. First time, utter confusion. Second, third times, no better. I can't make any sense out of these points:
>1) There are absolutely only three things malware can do when you scan for it. One: be active in RAM, maybe trying to interfere with the detection algorithm. Two: not be active in RAM, but store itself in secondary storage. It cannot interfere with the detection algorithm then, quite obviously. And option number three: erase itself.
Absolutely, not. There are many other things malware could be doing. Inactive in RAM, compressed and inactive in RAM, encoded as plausible-looking entries in the File Name Table or the Virtual Memory map.
>2) Any program -- good or bad -- that wants to be active in RAM has no choice but to take up some space in RAM. At least one byte, right?
No, it could be sleeping, existing only as an entry in the swapped-out process table. Or in unused space below a thread stack.
>Assume now that we have a detection algorithm that runs in kernel mode, and that swaps out everything in RAM. Everything except itself.
Whoah there fella. Everything? Are you going to turn off all timers and interrupt enables so their service routines don't get called?
Hard to do without mucking up all the device drivers. Are you going to swap out the kernel too? Malware is quite capable of infesting kernel space. And what about device drivers? They're constantly mucking with their internal tables and I/O buffers.
And if you turn off all device drivers, you lose, as there's nothing stopping malware from masquerading as a device driver. Many do.
>>But if we know how big RAM is, we know how much space should be free.
Whoa there again, big guy. There are plenty of machines with RAM at places not generally known to the OS, such as video RAM, graphics polygon RAM, network card RAM buffers, and kernel stacks.
>> Assume we write pseudo-random bits over all this supposedly free space. Again, a malware agent could refuse to be overwritten.
You don't need a checksum test to do this-- each page of virtual memory has R/W control bits.
And you're foiled here again, as there are plenty of system areas that are write-protected, such as pre code areas and the VM tables themselves.
>.
Nooo, that just tells you that either you overwrote the malware, so you'll never find it, or the malware during your two sweeps did not change any RAM contents. Quite possible as most malware just sits around most of the time.
>> Or there could be malware in RAM, and the checksum will be wrong.
Well, no, unless you disabled all interrupts and stopped all kernel tasks, there will still be system timers and interrupts and device drivers changing their state in RAM.
>> The external verifier would notice and conclude that the device must be infected.
Or some part of the system or some device driver is still running. Huge chance of false positives.
This essay seems to have been written by someone with only a glancing familiarity with hardware and system software.
Snake Oil, part 2... (Score:3, Insightful)
From the Our Solutions [fatskunk.com] page:
A technique known as software-based attestation can provide an alternative defense against malware by performing infection scans periodically and detect the presence of any program that refuses to be inactivated – as well as any inactivated program that is known to be malicious.
So, it can detect malware that refuses to be inactivated which is a tiny (vanishingly-tiny?) percentage of malware, as well as inactivated software that is known to be malicious (eg, because of a known virus signature.)
So what's the advantage over signature-based virus-scanners? Well, you get to detect completely hypothetical software that (somehow) refuses to allow the kernel to swap it out (and how that is possible is never explained) at the cost of hugely-expensive computations.
Great.
Re: (Score:2)
Yes; In fact, I’m going to walk out on shaky ice and claim that every program that wants to be active in RAM uses more than one byte! | https://it.slashdot.org/story/10/03/15/1540234/How-To-Guarantee-Malware-Detection | CC-MAIN-2016-36 | refinedweb | 7,209 | 70.84 |
” and “getNumber” and you would like to check whether the resulting object
has the correct values.
In my opinion the best way to verify this is to create two matchers and
combine them. Before we can combine these two matchers let’s see how to
create them.
For our examples we will test an instance of the following class:
public class Foo {
    public String getName() {
        return "Foo";
    }

    public int getNumber() {
        return 41;
    }
}
If you have a look at the Matcher interface you will see some hints
which tell you not to implement the Matcher itself. So have a look at
BaseMatcher which is referred to in the Matcher interface.
Let’s check if the number is 42 and the name is “Bar”. The samples show possible solutions without creating a separate class file every time; I just write a small factory method instead of a matcher class. This will be the first step while coding your test/matcher. If you see that the matcher can be used somewhere else, it should be very easy to extract it into its own class. It would also be helpful to create a factory class that centralizes your matchers.
Using BaseMatcher
If we use BaseMatcher we will recognize that we have to implement the
following two methods:
- public boolean matches(Object item) : here we will do the check. The item will be our object under test (should be an instance of Foo).
- public void describeTo(Description description) : here we describe what the matcher expects. This will simplify our life if there are failures.
Now let’s see how the implementation of a BaseMatcher could look for checking the value of getNumber():

private Matcher<Foo> hasNumber(final int i) {
    return new BaseMatcher<Foo>() {
        @Override
        public boolean matches(final Object item) {
            return ((Foo) item).getNumber() == i;
        }

        @Override
        public void describeTo(final Description description) {
            description.appendText("getNumber should return ").appendValue(i);
        }
    };
}
And the usage would look like this:
@Test
public void numberIs42() {
    final Foo testee = new Foo();
    assertThat(testee, hasNumber(42));
}
This results in a failing test. That's fine: we want the test to fail, because the features of our new matcher only become apparent while the test is failing 😉
So the test will give us this output:
java.lang.AssertionError:
Expected: (getNumber should return <42>)
     got: <ch.adrianelsener.MatcherSampleTest$Foo@136070f0>
You will agree with me that this is not very useful. But it is a good demonstration of why to write a failing test first: you see exactly what happens when the test fails, and otherwise you might never have noticed that there is room for improvement.
There are two possible solutions to get a more powerful message:
- Implementing toString on Foo would be one of them but not what we want.
- Implement the method “public void describeMismatch(final
Object item, final Description description)“ in our
matcher
If we add “describeMismatch”, our matcher now looks like this:

private Matcher<Foo> hasNumber(final int i) {
    return new BaseMatcher<Foo>() {
        @Override
        public boolean matches(final Object item) {
            return ((Foo) item).getNumber() == i;
        }

        @Override
        public void describeTo(final Description description) {
            description.appendText("getNumber should return ").appendValue(i);
        }

        @Override
        public void describeMismatch(final Object item, final Description description) {
            description.appendText("was").appendValue(((Foo) item).getNumber());
        }
    };
}
Now the result of the failing test tells us:
Expected: getNumber should return <42>
     but: was<41>
This works great, but you have to cast all the time even though you define the type of the object under test in the signature. To solve this you can use the TypeSafeMatcher.
TypeSafeMatcher
This matcher is almost identical to BaseMatcher. Almost 😉 The
differences are:
- protected boolean matchesSafely(final T item)
- protected void describeMismatchSafely(final T item, final
Description mismatchDescription)
Now the item is already of the known type, and TypeSafeMatcher verifies that it is what it should be. Using the assertThat method changes nothing, since generics define what you can match and what not, but the casts are no longer necessary.
Our sample matcher will change into something like this:
private Matcher<Foo> hasNumber(final int i) {
    return new TypeSafeMatcher<Foo>() {
        @Override
        public void describeTo(final Description description) {
            description.appendText("getNumber should return ").appendValue(i);
        }

        @Override
        protected void describeMismatchSafely(final Foo item, final Description mismatchDescription) {
            mismatchDescription.appendText(" was ").appendValue(item.getNumber());
        }

        @Override
        protected boolean matchesSafely(final Foo item) {
            return i == item.getNumber();
        }
    };
}
If you run the test you will see the same result as with the matcher extended from BaseMatcher, but we no longer have to cast the object. Now you'll ask what more the TypeSafeDiagnosingMatcher brings. Let's have a look.
TypeSafeDiagnosingMatcher
With the TypesafeDiagnosingMatcher you only have to implement two
methods. These are:
- public void describeTo(final Description description) : this still has the same function: describe what the matcher is doing
- protected boolean matchesSafely(final T item, final Description
mismatchDescription) : Here we do the check AND the
error description.
So we change the creation of our matcher to the following:
private Matcher<Foo> hasNumber(final int i) {
    return new TypeSafeDiagnosingMatcher<Foo>() {
        @Override
        public void describeTo(final Description description) {
            description.appendText("getNumber should return ").appendValue(i);
        }

        @Override
        protected boolean matchesSafely(final Foo item, final Description mismatchDescription) {
            mismatchDescription.appendText(" was ").appendValue(item.getNumber());
            return i == item.getNumber();
        }
    };
}
There is one point you have to keep in mind. If the match fails, the method
matchesSafely will be called at least twice. This is because the
method will be called to do the match AND to write
the failure message.
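To make this concrete, here is a plain-JDK stand-in (hypothetical names, not the real Hamcrest internals) that mimics the calling pattern: on a failure, the combined check-and-describe method runs once for the verdict and once more to build the mismatch message, so any side effects inside it happen twice.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class DoubleCallDemo {
    static final AtomicInteger calls = new AtomicInteger();

    // Stand-in for matchesSafely(item, mismatchDescription): it does the check
    // AND appends the failure text, just like in TypeSafeDiagnosingMatcher.
    static boolean matchesSafely(final int item, final StringBuilder mismatch) {
        calls.incrementAndGet();
        mismatch.append(" was ").append(item);
        return item == 42;
    }

    public static void main(String[] args) {
        final StringBuilder description = new StringBuilder();
        // First call: the framework asks for the verdict.
        final boolean ok = matchesSafely(41, description);
        if (!ok) {
            // Second call: the framework builds the mismatch message.
            description.setLength(0);
            matchesSafely(41, description);
        }
        System.out.println("calls=" + calls.get() + ", message:" + description);
    }
}
```

So if your matcher mutates state or does expensive work inside matchesSafely, keep this double invocation in mind.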
FeatureMatcher
There is one abstract matcher that is very easy to use for our problem. When implementing FeatureMatcher we just have to implement “protected abstract U featureValueOf(T actual);” (T will be our item and U the type that has to be checked) and all the rest will be done by FeatureMatcher. No more failure text, no more description. Nice, isn’t it?
Here is how our matcher looks after implementing it with FeatureMatcher:
private Matcher<Foo> hasNumberFeatureMatcher(final Integer i) {
    return new FeatureMatcher<Foo, Integer>(equalTo(i), "number", "number") {
        @Override
        protected Integer featureValueOf(final Foo actual) {
            return actual.getNumber();
        }
    };
}
After executing we get this stack trace:
java.lang.AssertionError:
Expected: number <42>
     but: number was <41>
As you see, we had to change something. The FeatureMatcher is designed to return a value (a feature) of an object. Because of this there is something special in the constructor. Have you seen it? Yes, the equalTo matcher. The FeatureMatcher is designed to take another matcher, so it would also be possible to compare with lessThan/greaterThan etc. Nice, isn't it?
Combining two matchers
Now let’s do the rest. Our goal was to compare two getters. To do this the CombinableMatcher is helpful. Just have a look at Matchers and you will find the following way to combine two matchers: “Matchers.both(MatcherA).and(MatcherB)”. In my opinion this is a very readable way to combine two matchers. But there is one problem with it: the message will not be as good as it should be. If we run this
assertThat(testee, both(hasName("Foo")).and(hasNumber(42)));
we will see something like this:
java.lang.AssertionError:
Expected: (getName should return "Foo" and getNumber should return <42>)
     but: was <ch.adrianelsener.Foo@2484e723>
If we do not implement toString on our object under test, we can't really get enough information to conclude what happened. Here I would do a little manual work.
class MyCombinableMatcher<T> extends BaseMatcher<T> {
    private final List<Matcher<? super T>> matchers = new ArrayList<>();
    private final List<Matcher<? super T>> failed = new ArrayList<>();

    private MyCombinableMatcher(final Matcher<? super T> matcher) {
        matchers.add(matcher);
    }

    public MyCombinableMatcher<T> and(final Matcher<? super T> matcher) {
        matchers.add(matcher);
        return this;
    }

    @Override
    public boolean matches(final Object item) {
        for (final Matcher<? super T> matcher : matchers) {
            if (!matcher.matches(item)) {
                failed.add(matcher);
                return false;
            }
        }
        return true;
    }

    @Override
    public void describeTo(final Description description) {
        description.appendList("(", " and ", ")", matchers);
    }

    @Override
    public void describeMismatch(final Object item, final Description description) {
        for (final Matcher<? super T> matcher : failed) {
            description.appendDescriptionOf(matcher).appendText(" but ");
            matcher.describeMismatch(item, description);
        }
    }

    public static <LHS> MyCombinableMatcher<LHS> all(final Matcher<? super LHS> matcher) {
        return new MyCombinableMatcher<LHS>(matcher);
    }
}
If we now execute this line:
assertThat(testee, all(hasNameTypesafeMatcher("Bar")).and(hasNumber(42)));
we get:

java.lang.AssertionError:
Expected: (getName should return "Bar" and getNumber should return <42>)
     but: getName should return "Bar" but was "Foo" getNumber should return <42> but was <41>
assertThat or assertThat ?
If you see just something like this
java.lang.AssertionError:
Expected: (getName should return "Foo" and getNumber should return <42>)
     got: <ch.adrianelsener.Foo@21a79b48>
Then it might be because you have used the assertThat from JUnit and not the one from Hamcrest. JUnit delivers its own assertThat to give basic Hamcrest support, but it does not use all of the "new" features of Hamcrest; it delivers an old implementation. All you have to do is use MatcherAssert.assertThat from the Hamcrest library. It would be even better to get the JUnit jar without Hamcrest and replace it completely with the Hamcrest jar itself.
[…] about this topic I’d recommend the following excellent article from Adrian Elsener posted on planetgeek.ch. […]
In the MyCombinableMatcher class, the matches method should be implemented this way. Otherwise it stops at the first mismatch and only reports that one:

@Override
public boolean matches(final Object item) {
    for (final Matcher<? super T> matcher : matchers) {
        if (!matcher.matches(item)) {
            failed.add(matcher);
        }
    }
    if (failed.size() > 0)
        return false;
    return true;
}
Great article! Exactly what I needed.
Thank you very much for the example!
Thanks for the post. It was great for a fast understanding of how to create customer matchers.
[…] own matchers by implementing the Matcherinterface or extending any of its implementing classes. Here you can find a few good tutorials. Also you can refer to this […] | http://www.planetgeek.ch/2012/03/07/create-your-own-matcher/ | CC-MAIN-2016-44 | refinedweb | 1,585 | 57.16 |