My project is soooooo close to being ready for implementation, but there is one thing stopping me: getting the Pi to shut down immediately after the script is closed.
In a nutshell, my project is a sports timer for a dog sport called flyball. After months of work I have successfully run my script with a prototype model: the laser tripwires are working thanks to the use of an op-amp and some IR phototransistors.
I've used the code from the gpiozero documentation to create a poweroff procedure:
If I run the script either from the lxterminal or from Geany, it works fine: the Pi will switch off when I close my script.
Code: Select all
from subprocess import check_call

# -------- lots of code --------

class InputEnd():
    def finish():
        global enabled
        enabled = False
        leds.close()
        gobutton.close()
        frontpole.close()
        crosspole.close()
        tk.destroy()
        check_call(['sudo', 'poweroff'])
I have the script automatically running after boot up now (that was a mission and a half to get working), but when I close the script it goes back to the PIXEL desktop and doesn't shut down.
Any advice I can get to resolve this would be much appreciated.
Some additional information:-
Because my script uses a GUI written with Tkinter, I needed the script to load after the Pi had booted in Desktop mode.
I got the autorun to work first by creating a launch.sh script which contains this:-
Code: Select all
#!/bin/bash
cd FlyballTimer
sudo python3 FlyballTimerPi.py
Permissions were changed to make the launch.sh executable, then I modified the lxsession autostart file:-
sudo nano /etc/xdg/lxsession/LXDE-pi/autostart
Code: Select all
@lxpanel --profile LXDE-pi
@pcmanfm --desktop --profile LXDE-pi
/home/pi/launch.sh
@xscreensaver -no-splash
point-rpi
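When a script is launched this way, a failure of check_call(['sudo', 'poweroff']) is invisible — stderr from the autostart context goes nowhere. A small logging wrapper makes that environment debuggable; this is only a sketch (the log destination and the stand-in command are my own, not from the original post), and in finish() you would call it with ['sudo', 'poweroff']:

```python
import logging
import subprocess
import sys

logging.basicConfig(level=logging.ERROR)

def run_and_log(cmd, timeout=10):
    """Run cmd; return True on success. On failure, log the return code and
    stderr so problems in the autostart environment (sudo rights, PATH,
    working directory) become visible instead of vanishing silently."""
    try:
        subprocess.run(cmd, check=True, stderr=subprocess.PIPE, timeout=timeout)
        return True
    except subprocess.CalledProcessError as e:
        logging.error('%r failed with code %s: %s', cmd, e.returncode, e.stderr)
        return False
    except Exception:
        logging.exception('%r did not run', cmd)
        return False

# In finish(), instead of check_call(['sudo', 'poweroff']):
#     run_and_log(['sudo', 'poweroff'])
# A harmless stand-in so the wrapper can be exercised anywhere:
print(run_and_log([sys.executable, '-c', 'pass']))
```

If the log then shows a non-zero return code or a sudo prompt on stderr, the autostart environment (rather than the script logic) is the thing to fix.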
I’ve recently been doing a lot of work with backbone.js, a JavaScript MVC framework that’s excellent for building clean applications that are backed by RESTful JSON web services. It offers a nice separation of concerns, supports your favourite JavaScript templating engine and is well-documented. What else could you need?
Well, sometimes the answer is a little bit less than what’s on offer. What about cases where you want to use just part of the framework? That’s also possible – in this case, I’m going to demonstrate how you can use backbone models to interact with your server-side code, but without the associated views and controllers.
Backbone models extend the framework’s Model class, and so you can get RESTful behaviour for free. On the server side (which, unsurprisingly, I’m using the Play! framework for), we have a couple of models:
@Entity
public class User extends Model {
    public String userName;
    public String displayName;
    public String fullName;

    @OneToMany(cascade = CascadeType.ALL)
    public List<Friend> friends;

    public static User findByUserName(String userName) {
        return find("byUserName", userName).first();
    }

    // builder methods not shown
}
@Entity
public class Friend extends Model {
    @OneToOne(optional = false)
    public User user;

    public Friend(User user) {
        this.user = user;
    }
}
The User class is only going to be used on the server side, but Friends will be retrieved using a RESTful service that returns JSON.
window.Friend = Backbone.Model.extend({
    urlRoot: "",
    defaults: {
        "id": null,
        "user": null
    }
});

window.FriendCollection = Backbone.Collection.extend({
    model: Friend,
    url: ""
});
Once client-side instances of Friend are created, calling methods such as fetch() will result in a RESTful call based on the URL given in the model declaration. To integrate this with the server side, we add entries to the routes file. In this case, we’re only going to GET all the friends of the current user, but the same principle applies to other HTTP methods such as PUT and DELETE.
GET /api/friends Friends.all
The server-side implementation is about as simple as it gets:
public class Friends extends Controller {
    public static void all() {
        User user = User.findByUserName(session.get("userName"));
        renderJSON(user.friends);
    }
}
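renderJSON(user.friends) serializes the friends list straight to JSON. The exact field layout depends on Play's serializer, but a payload along the following lines — field names assumed from the two model classes above, values invented — is what the Backbone collection ends up parsing, and any JSON-aware client can consume it:

```python
import json

# Hypothetical response body for GET /api/friends; the real shape depends
# on Play's JSON serializer -- these names mirror the User/Friend models.
payload = """[
  {"id": 7, "user": {"id": 2, "userName": "bob",
                     "displayName": "Bob", "fullName": "Bob Jones"}}
]"""

friends = json.loads(payload)
print([f["user"]["userName"] for f in friends])
```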
At this point, we now have everything in place to make RESTful calls and react to the asynchronous results.
<script type="text/javascript">
    function showFriends() {
        var friends = new FriendCollection();
        // Retrieve the data from the server, do something with it on a successful return
        friends.fetch({
            success: function() {
                // do something with the result. All friends in the collection can be
                // accessed by iterating over (or directly accessing) friends.models
            }
        });
    }
</script>
Who are my <a href="javascript:showFriends()">friends</a>?
In just a few lines of code, you have smooth, clean access to your web services and a declarative model that can easily be kept in sync with your server-side models.
A complete working example can be found here. Unzip it, cd into the bbm directory, type “play run” and point your browser at.
EDIT: Please note this is written using Play 1.x. I guess I need to get in the habit of clarifying which version of Play I’m using now that Play 2.0 is nearly ready!
EDIT 2: Just to keep things even, here’s a simplified implementation in Play 2.
1 thought on “backbone.js – M without the VC”
Great example Steve. I have been trying to come up with a good way to update an existing Play Framework/jQuery website using backbone/underscore, and replacing my jQuery AJAX/JSON requests this way will be a good start.
Cinemachine Package
Let’s start by downloading the Cinemachine package – it’s an excellent extension for Unity that provides outstanding AAA-like camera controls.
In the menu bar, go to Window > Package Manager. Wait a few moments for the list to update, find Cinemachine, and click the Install button.
After the installation, we have a new menu with predefined Cinemachine objects.
- Select the Main Camera object, and add the CinemachineBrain component to it.
With Cinemachine, we only use one Camera, and instead of modifying its state directly, we use multiple virtual cameras and transition between them.
Virtual Camera
- Go to the Cinemachine menu, and select Create Virtual Camera.
- Name the new object vcam Main.
- Drag the Character from the Hierarchy Window to the Follow field – this will set the camera to follow our character.
- Set Body to Framing Transposer.
- Set Aim to Do Nothing.
- Set distance to 13.
- Make sure the virtual camera’s rotation is set to 45 degrees on the X-axis and 0 for the rest.
We can now play the game and have the camera follow the character. But we can leverage Cinemachine for smoother-feeling movement:
- When the camera zooms in and out, we want it to have a bit of more damping, so increase the Y and Z damping to 2.
- Since we generally want the character in the center of the screen, we will set the soft zone to 0.4.
- We can also set the dead zone to 0.2, so the camera won’t move if the character only moves a little.
Let’s run the game to see the changes we made in action.
There is one thing we need to do before we start coding the script. Go to the Character object and set its tag to Player – we will use it in the script soon.
CamZone Script
Next, let’s create our camera control zones. These are zones that, while the character is inside them, transition to another virtual camera that better suits the zone’s environment or gameplay purpose.
Create a new MonoBehaviour script and call it CamZone.
using Cinemachine;
using UnityEngine;

[RequireComponent(typeof(Collider))]
public class CamZone : MonoBehaviour
{
    #region Inspector

    [SerializeField] private CinemachineVirtualCamera virtualCamera = null;

    #endregion

    #region MonoBehaviour

    private void Start ()
    {
        virtualCamera.enabled = false;
    }

    private void OnTriggerEnter (Collider other)
    {
        if ( other.CompareTag("Player") )
            virtualCamera.enabled = true;
    }

    private void OnTriggerExit (Collider other)
    {
        if ( other.CompareTag("Player") )
            virtualCamera.enabled = false;
    }

    private void OnValidate ()
    {
        GetComponent<Collider>().isTrigger = true;
    }

    #endregion
}
We want to disable the virtual camera until it’s needed.
We enable the camera only when the colliding object has the Player tag, and disable it again when the player exits the zone.
We validate that the script has access to a collider and ensure it is set as a trigger.
The script is ready; we can now create Camera Control Zones with it.
Control Zones
- Create an object and name it CamZone Gate.
- Add a collider of your choice, mark it as a trigger.
- Add the CamZone component to it.
- Duplicate the vcam Main object, and name it vcam Gate.
The gate camera will activate as the character goes near the gate; we still want it to track the character, but it should zoom in.
- Change the virtual camera distance from 13 to 8.
- Select the Gate Zone object, and drag vcam Gate into its camera field.
We can now run the game and see how the camera transitions between the virtual cameras as we approach the gate and move away from it.
Notice that we still have a single camera, it’s just the active virtual camera that changes – we can see in the scene view how it works behind the scenes.
One thing we can change before we move on is the transition duration between the virtual cameras. Select the Main Camera GameObject, and in the Inspector, go to the CinemachineBrain component. Change the default blend to 1.2 seconds.
Let’s add a zoom out zone:
- Duplicate the objects CamZone Gate and vcam Gate.
- Name the zone object CamZone Final and the virtual camera object vcam Final.
- Move the zone to its proper position and change its camera property to vcam Final.
- Set the zoom out, change the virtual camera’s distance from 8 to 16.
We can run the game and see that when we reach the end, the camera zooms out.
Now, let’s do something else.
Instead of changing the camera’s distance, let’s set a zone where the camera is static and doesn’t follow the character.
- Once again, duplicate the CamZone Gate and vcam Gate objects.
- Name the zone object CamZone Start and the virtual camera object vcam Start.
- In the virtual camera, change the Body from Framing Transposer into Do Nothing.
Position the camera as you wish and run the game. Now, when we start the game, we will have a static camera that only starts following the character once we exit the CamZone we created.
Dolly
To create a dolly cam, first duplicate one of the cam zones and name it CamZone Dolly. Adjust the trigger to contain the whole dolly zone.
- Go to the Cinemachine menu, and select Create Dolly Camera with Track.
- Name the camera object vcam Dolly; the track should be called track Dolly.
- Drag the new virtual camera to the CamZone we have created.
Now let’s configure the camera:
- Set Follow to follow our character.
- Make sure the Body is set to Tracked Dolly.
- Add some damping.
Open the Auto Dolly settings:
- Tick the Enabled checkbox
- Set Position Offset to 5
- Set Search Resolution to 10
Last for the camera, make sure Aim is set to Do Nothing.
Go to the track object and set the path you desire; you can add as many waypoints as you need. Run the game: we now have a dolly following our character. The advantage of a dolly is that we can control some near-camera objects for a parallax effect that works great with this style.
Multiplayer
Tracking multiple players instead of one is not much different than what we have done up to this point. It requires a target group to follow, and a distance range instead of a static distance. To add a target group, go to the Cinemachine menu and select Create Target Group Camera. In the targets, drag all the characters and set an appropriate radius depending on the size of the characters. The virtual camera is already set with the target group as the Follow target. A difference to note is that we no longer have a static distance; we instead define a distance range for the camera.
Greetings,
I have connected the clock module DS3231 to the UC3A3-EK via TWI on header J1, and these are the signals that I got:
This is my code:
#include "board.h"
#include "sysclk.h"
#include "twi_master.h"
#include "led.h"
#include "conf_board.h"
#include "gpio.h"
#include "conf_twim.h"

#ifndef EEPROM_BUS_ADDR
#define EEPROM_BUS_ADDR 0x68        //!< TWI slave bus address
#endif
#define EEPROM_MEM_WRITE_ADDR 0b11010000
#define EEPROM_MEM_READ_ADDR  0b11010001
#define TWI_SPEED 400000            //!< TWI data transfer rate

int main(void)
{
    board_init();

    static const gpio_map_t TWI_GPIO_MAP = {
        {AVR32_TWIMS0_TWCK_0_0_PIN, AVR32_TWIMS0_TWCK_0_0_FUNCTION},
        {AVR32_TWIMS0_TWD_0_0_PIN,  AVR32_TWIMS0_TWD_0_0_FUNCTION},
    };
    gpio_enable_module(TWI_GPIO_MAP,
                       sizeof(TWI_GPIO_MAP) / sizeof(TWI_GPIO_MAP[0]));

    // TWI master initialization options.
    twi_master_options_t opt = {
        .pba_hz = FOSC0,
        .speed  = TWI_SPEED,
        .chip   = EEPROM_BUS_ADDR
    };
    twim_master_init(TWI_EXAMPLE, &opt);
    twim_master_enable(TWI_EXAMPLE);

    twi_package_t packet_write = {
        .addr        = EEPROM_MEM_WRITE_ADDR,  // TWI slave memory address data MSB
        .addr_length = sizeof(uint8_t),        // TWI slave memory address data size
        .chip        = EEPROM_BUS_ADDR,        // TWI slave bus address
        .buffer      = (void *)test_pattern,   // transfer data source buffer
        .length      = sizeof(test_pattern)    // transfer data size (bytes)
    };
    while (twi_master_write(TWI_EXAMPLE, &packet_write) != TWI_SUCCESS);

    twi_package_t packet_read = {
        .addr        = EEPROM_MEM_READ_ADDR,   // TWI slave memory address data MSB
        .addr_length = sizeof(uint8_t),        // TWI slave memory address data size
        .chip        = EEPROM_BUS_ADDR,        // TWI slave bus address
        .buffer      = data_received,          // transfer data destination buffer
        .length      = 10                      // transfer data size (bytes)
    };
    while (twi_master_read(TWI_EXAMPLE, &packet_read) == TWI_SUCCESS);
}
This code is from ASF and slightly modified for my needs. I think that my problem is somewhere in my code. As you can see, the SDA signal carries the slave device addresses/commands for read and write, and there is nothing in between where the written and read data should be. I am trying to get this:
Any help is more than welcome!
Cheers!
1)
Why do you mix the ASF twim_...() routines with twi_...() routines?
The UC3A3/4xxx, UC3Cxxx and UC3Lxxx have a TWIM module; the UC3A0/1xxx and UC3Bxxx have a TWI module. Those modules are NOT the same.
2)
It is a good idea to check the return values from all ASF routines.
eg. twim_master_init(,) should return STATUS_OK, etc.
3)
The addresses of the internal registers in the DS3231 are 0x00 to 0x12
...
Thank you for the help. It's working now, but as a result I am getting
3h:15m:45s
5/4/1906
and counting
I found this code online to convert the data from DS3231
struct clock
{
    uint8_t seconds;
    uint8_t minute;
    uint8_t hours;
    uint8_t day;
    uint8_t datum;
    uint8_t month;
    uint16_t year;
};

struct clock time;

time.seconds = (tbuffer[0] & 0x0f) + ((tbuffer[0] & 0xf0) >> 4) * 10;
time.minute  = (tbuffer[1] & 0x0f) + ((tbuffer[1] & 0xf0) >> 4) * 10;
time.hours   = (tbuffer[2] & 0x0f) + (((tbuffer[2] & 0x10) + (tbuffer[2] & 0x20)) >> 4) * 10; // TODO (original comment: "noch", German for "still to do")
time.day     = (tbuffer[3] & 0x07);
time.datum   = (tbuffer[4] & 0x0f) + ((tbuffer[4] & 0xf0) >> 4) * 10;
time.month   = (tbuffer[5] & 0x0f) + ((tbuffer[5] & 0x10) >> 4) * 10;
time.year    = (tbuffer[6] & 0x0f) + ((tbuffer[6] & 0xf0) >> 4) * 10 + ((tbuffer[5] & 0x80) >> 7) * 100 + 1900;
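The masking and shifting above is ordinary packed-BCD decoding: each DS3231 time register stores two decimal digits, one per nibble. A quick way to check the arithmetic (Python used here purely as a calculator; 0x59 is just an example register value):

```python
def bcd_to_int(b):
    """Decode one packed-BCD byte: low nibble = ones digit, high nibble = tens."""
    return (b & 0x0F) + ((b >> 4) & 0x0F) * 10

def int_to_bcd(n):
    """Encode 0..99 into packed BCD -- the direction needed when setting the clock."""
    return ((n // 10) << 4) | (n % 10)

print(bcd_to_int(0x59))     # e.g. a seconds register reading of 0x59 means 59 s
print(hex(int_to_bcd(45)))  # 45 encodes back to 0x45
```

Going the other way — int_to_bcd — is the encoding needed when writing registers 0x00–0x06 to set the initial time, since (as noted above) the DS3231 must be initialised at some point.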
1) I have no idea what you did to make it 'work'.
2) That 'online' code should be OK if you can read at least 7 bytes from the DS3231, starting at register 0x00, into tbuffer[].
3) The DS3231 must be initialised with a time and date at some point; it does not have non-volatile storage to remember the time/date when the main power (VCC) is removed.
Magic :D
I had to change the whole code a little bit (read: a lot). I will post the whole project later when I finish. Till now I have successfully connected the UC3-A3 Xplained with: an inclinometer SDA8301 via SPI and a DS3231 clock via TWI, and I am currently trying to connect an LCD display 2004A 4x20, also via TWI, but so far without success.
I am not quite sure how to initialise the DS3231. Can you give me some example?
Thank you very much for the help and patience!
Peter,
Looks fine, these are only new methods and some accessor modifier
changes, so it shouldn't break any existing code.
I'm reading the PDF (nicely done!) now, but this looks good and
harmless, so +1.
Yummy.
Otis
--- Peter Carlson <carlson@bookandhammer.com> wrote:
> Hi All,
> I didn't get any feedback + or - on making these changes.
> I'll make the changes if people think it's a good idea.
> The changes essentially provide public methods to get the query or
> the term
> from the query.
>
> Thanks
>
> --Peter
>
> On 2/7/02 12:41 PM, "carlson@bookandhammer.com"
> <carlson@bookandhammer.com>
> wrote:
>
> > Hi,
> >
> > I just added the Highlight terms functionality by Maik Schreiber to
> the
> > contributions section.
> > This code essentially gets all the terms for a given query, and
> then
> > provides an interface to highlight the text.
> >
> > In order to provide this functionality he had to make some changes
> to
> > the core Lucene code.
> >
> > I am suggesting providing the a query.getTerms() method. After
> review
> > and Maik approval, we might even want to use his code.
> >
> > I would suggest making these changes after 1.2 is released.
> >
> > To check out his code and PDF describing the changes, go to
> >
> >
> > Please provide feedback.
> >
> > Thanks
> >
> > --Peter
> >
> > ----LIST OF SUGGESTED CHANGES----
> > He documented it very well. Here are the list of changes.
> >
> > org.apache.lucene.search.BooleanQuery - add the following method:
> > public BooleanClause[] getClauses()
> > {
> > return (BooleanClause[]) clauses.toArray(new BooleanClause[0]);
> > }
> >
> > org.apache.lucene.search.MultiTermQuery - mark getQuery() public
> >
> > org.apache.lucene.search.PhraseQuery - add the following method:
> > public Term[] getTerms()
> > {
> > return (Term[]) terms.toArray(new Term[0]);
> > }
> >
> > org.apache.lucene.search.PrefixQuery - mark getQuery() public
> >
> > org.apache.lucene.search.RangeQuery - mark getQuery() public
> >
> > org.apache.lucene.search.TermQuery - add the following method:
> > public Term getTerm()
> > {
> > return term;
> > }
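The pattern these accessors enable — collect a query's terms via getTerms()/getTerm(), then mark them up in the retrieved text — is simple to sketch outside of Lucene. This is an illustration of the idea only, not Maik's implementation; the tag strings are arbitrary:

```python
import re

def highlight(text, terms, pre="<b>", post="</b>"):
    """Wrap each whole-word occurrence of any term, case-insensitively --
    roughly what a highlighter built on top of query.getTerms() does."""
    pattern = r"\b(" + "|".join(map(re.escape, terms)) + r")\b"
    return re.sub(pattern, lambda m: pre + m.group(1) + post, text,
                  flags=re.IGNORECASE)

print(highlight("Lucene in Action", ["lucene"]))
```

The hard part in the real thing is the term collection, which is exactly why the query classes need public accessors like those listed above.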
To unsubscribe, e-mail: <mailto:lucene-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:lucene-dev-help@jakarta.apache.org> | http://mail-archives.apache.org/mod_mbox/lucene-dev/200202.mbox/%3C20020215203354.8410.qmail@web12706.mail.yahoo.com%3E | CC-MAIN-2016-30 | refinedweb | 334 | 62.24 |
A look at Dataset, an easy and Pythonic way to create a database.
Other candidates in this category worth mentioning are pyDAL and Peewee.
In this post I do some testing of Dataset to see how it holds up.
The example is a common task: some web-scraping of food recipes (just a task I helped someone with).
A task you normally may not use a database for (too much work).
So let's see how Dataset works for this task.
Requirements: Requests and BeautifulSoup.
First, clean code without Dataset.
import requests
from bs4 import BeautifulSoup

start_page = 1
end_page = 3
for page in range(start_page, end_page+1):
    url = '[]=13&%3Btag[]=28&%3B=&sort=title&order=asc&page={}'.format(page)
    url_page = requests.get(url)
    soup = BeautifulSoup(url_page.text)
    tag_div = soup.find_all('div', {'class': "content-item tab-content current"})[0]\
        .find_all('div', {'class': 'story-block'})
    print('--- Page {} ---'.format(page))
    for content in tag_div:
        print(url_page.status_code, content.find('a')['href'])
In the next code snippet I bring in Dataset and pull some data out.
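For contrast, here is the same "store the scraped links, query them back" step done with only the standard library's sqlite3. The explicit DDL, placeholders, and commit below are exactly the boilerplate Dataset hides (the table name, column names, and sample rows are my own):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE recipes (page INTEGER, link TEXT)")

rows = [(1, "/recipe/pancakes"), (1, "/recipe/waffles"), (2, "/recipe/soup")]
conn.executemany("INSERT INTO recipes VALUES (?, ?)", rows)
conn.commit()

page_1 = [link for (link,) in
          conn.execute("SELECT link FROM recipes WHERE page = ?", (1,))]
print(page_1)
```

With Dataset the same insert is roughly db['recipes'].insert(dict(page=..., link=...)) — no schema definition, no placeholders, no commit.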
Abbas Taher suggested an idea · May 16, 2017 at 03:30 PM · scala · Spark · spark-sql · hortonwork · tutorial-100
In this post, I would like to share a few code snippets that can help understand Spark 2.0 API. I am using the Spark Shell on Hortonworks 2.5 to execute the code, but you can also compile the code on Scala IDE for Eclipse and submit it as a Spark job on Hortonworks 2.5 Sandbox as described in a previous article.
For illustration purposes, I am using a text file that contains the 4 lines of the Humpty Dumpty rhyme.
Humpty Dumpty sat on a wall,
Humpty Dumpty had a great fall.
All the king's horses and all the king's men
Couldn't put Humpty together again.
All examples start by reading the file, then separating the words in each line, filtering out all other words except for the two words Humpty & Dumpty, and last performing the count. In each snippet the result is printed on the console rather than saving it into an hdfs file. The result of the 7 examples is always Dumpty occurring 2 times and Humpty 3 times:
[Dumpty,2] [Humpty,3]
Each of the snippets illustrates a specific Spark construct or API functionality related to either RDDs, DataFrames, Datasets or Spark SQL.
So let's start ...
Example 1: Classic Word Count using filter & reduceByKey on RDD
val dfsFilename = "/input/humpty.txt"
val readFileRDD = spark.sparkContext.textFile(dfsFilename)

val wcounts1 = readFileRDD.flatMap(line => line.split(" "))
                          .filter(w => (w == "Humpty") || (w == "Dumpty"))
                          .map(word => (word, 1))
                          .reduceByKey(_ + _)

wcounts1.collect.foreach(println)
In this example each line in the file is read as an entire string into an RDD. Then each line is split into words. The split command generates an array of words for each line. The flatMap command flattens the array and groups them together to produce a long array that has all the words in the file. Then the array is filtered and only the two words are selected. Then each of the two words is mapped into a key/value pair. Last the reduceByKey operation is applied over the key/value pair to count the words’ occurrence in the text.
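The same pipeline shape — flatMap, filter, map to pairs, reduce by key — can be traced in plain Python, which is a handy way to sanity-check the expected [Dumpty,2] [Humpty,3] result without a cluster (this is an analogy, not Spark code):

```python
from collections import Counter

lines = [
    "Humpty Dumpty sat on a wall,",
    "Humpty Dumpty had a great fall.",
    "All the king's horses and all the king's men",
    "Couldn't put Humpty together again.",
]

words = (w for line in lines for w in line.split(" "))    # flatMap
kept = (w for w in words if w in ("Humpty", "Dumpty"))    # filter
counts = Counter(kept)                                    # map + reduceByKey
print(sorted(counts.items()))
```

Counter plays the role of reduceByKey(_ + _): it folds each key's occurrences into a single count.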
Example 2: Word Count Using groupBy on RDD
val dfsFilename = "/input/humpty.txt"
val readFileRDD = spark.sparkContext.textFile(dfsFilename)

val wcounts2 = readFileRDD.flatMap(line => line.split(" "))
                          .filter(w => (w == "Humpty") || (w == "Dumpty"))
                          .groupBy(_.toString)
                          .map(ws => (ws._1, ws._2.size))

wcounts2.collect.foreach(println)
This example is similar to the first example. The two only differ in the usage of groupBy operation which generates a key/value pair that contains the word as a key and a sequence of the same word repeated as a value. Then a new key/value pair is produced that uses the sequence size as a count of the occurrence of the word. It is important to note that the filter function (predicate) is applied on each word and only the words that satisfy the condition are passed to the groupBy operation.
Example 3: Word Count Using Dataframes, Rows and groupBy
import org.apache.spark.sql.Row

val dfsFilename = "/input/humpty.txt"
val readFileDF = spark.sparkContext.textFile(dfsFilename)
val wordsDF = readFileDF.flatMap(_.split(" ")).toDF

val wcounts3 = wordsDF.filter(r => (r(0) == "Humpty") || (r(0) == "Dumpty"))
                      .groupBy("Value")
                      .count()

wcounts3.collect.foreach(println)
This example is totally different from the first two. Here we use DataFrames instead of RDDs to work with the text, as indicated by the "toDF" command. The returned DataFrame is made of a sequence of Rows, for in Spark 2.0 DataFrames are just Datasets of Rows. Because of the split operation, each row is made of one element. The columns of a row in the result can be accessed by field index = 0 because we only have one column. Also, similar to the 2nd example, we are using the groupBy operation, which is followed by count to perform the word count. The count command gives DataFrames their edge over RDDs.
If you are wondering how we can use the column name "Value" in the groupBy operation, the reason is simple: when you define a Dataset/DataFrame with one column, the Spark framework at run-time generates a column named "Value" by default if the programmer does not define one. The filter operation above can also be written in another way, where the first element in the array within the row is accessed via "r.getString(0)".
val wcounts3 = wordsDF.filter(r => (r.getString(0) == "Humpty") || (r.getString(0) == "Dumpty"))
                      .groupBy("Value")
                      .count()
To read the full article and examine the other 4 ways to write the wordcount program, please check:
Concatenate PDF documents
Concatenate PDF files using file paths
PdfFileEditor is the class in the Aspose.Pdf.Facades namespace which allows you to concatenate multiple PDF files. You can concatenate files not only using FileStreams but also using MemoryStreams. In this article, the process of concatenating files using MemoryStreams is explained and then shown in a code snippet.
The Concatenate method of the PdfFileEditor class can be used to concatenate two PDF files. The Concatenate method takes three parameters: the first input PDF, the second input PDF, and the output PDF. The final output PDF contains both input PDF files.
The following code snippet shows you how to concatenate PDF files using file paths.
In some cases, when there are a lot of outlines, users may disable them by setting CopyOutlines to false to improve the performance of concatenation.
Concatenate multiple PDF files using MemoryStreams
The Concatenate method of the PdfFileEditor class takes the source PDF files and the destination PDF file as parameters. These parameters can be either paths to PDF files on disk or MemoryStreams. For this example, we'll first create two file streams to read the PDF files from disk. Then we'll convert these files into byte arrays, and the byte arrays into MemoryStreams. Once we have the MemoryStreams, we can pass them to the Concatenate method and merge the files into a single output file.
The following code snippet shows you how to concatenate multiple PDF files using MemoryStreams:
Concatenate Array of PDF Files Using File Paths
If you want to concatenate multiple PDF files, you can use the overload of the Concatenate method which allows you to pass an array of PDF files. The final output is saved as a merged file created from all the files in the array. The following code snippet shows you how to concatenate an array of PDF files using file paths.
Concatenate Array of PDF Files Using Streams
Concatenating an array of PDF files is not limited to files residing on disk. You can also concatenate an array of PDF files from streams. If you want to concatenate multiple PDF files, you can use the appropriate overload of the Concatenate method. First, you need to create an array of input streams and one stream for the output PDF, and then call the Concatenate method. The output will be saved in the output stream. The following code snippet shows you how to concatenate an array of PDF files using streams.
Concatenating all PDF files in a particular folder.
Concatenate PDF Forms and keep fields names unique
The PdfFileEditor class in the Aspose.Pdf.Facades namespace offers the capability to concatenate PDF files. Now, if the PDF files to be concatenated have form fields with similar field names, Aspose.PDF provides a feature to keep the field names in the resultant PDF file unique, and you can also specify the suffix used to make the field names unique. Setting the KeepFieldsUnique property of PdfFileEditor to true will make field names unique when PDF forms are concatenated. Also, the UniqueSuffix property of PdfFileEditor can be used to specify a user-defined format for the suffix which is added to a field name to make it unique when forms are concatenated. This string must contain the %NUM% substring, which will be replaced with numbers in the resultant file.
Please see the following simple code snippet to achieve this functionality.
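Since the snippet itself is not reproduced here, the %NUM% renaming rule is easy to picture with a rough sketch: each colliding field name gets the suffix with %NUM% replaced by a counter. This is an illustration only, not Aspose code — the exact numbering Aspose uses may differ:

```python
def uniquify(names, suffix="_%NUM%"):
    """Rename duplicates by appending the suffix with %NUM% -> counter."""
    seen, out = {}, []
    for name in names:
        n = seen.get(name, 0)
        seen[name] = n + 1
        out.append(name if n == 0 else name + suffix.replace("%NUM%", str(n)))
    return out

print(uniquify(["total", "name", "total"]))
```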
Concatenate PDF files and create Table Of Contents
Concatenate PDF files
Please take a look at the following code snippet for information on how to merge the PDF files.
Insert blank page
Once the PDF files have been merged, we can insert a blank page at the beginning of the document on which we can create the Table of Contents. In order to accomplish this requirement, we can load the merged file into a Document object, and we need to call the Page.Insert(…) method to insert a blank page.
Add Text Stamps
In order to create a Table of Contents, we need to add text stamps on the first page using PdfFileStamp and Stamp objects. The Stamp class provides the BindLogo(...) method to add FormattedText, and we can also specify the location of these text stamps using the SetOrigin(..) method. In this article, we are concatenating two PDF files, so we need to create two text stamp objects pointing to these individual documents.
Create local links
Now we need to add links to the pages inside the concatenated file. In order to accomplish this requirement, we can use the CreateLocalLink(..) method of the PdfContentEditor class. In the following code snippet, we have passed Transparent as the 4th argument so that the rectangle around the link is not visible.
Complete code
Concatenate PDF files in folder
The PdfFileEditor class in the Aspose.Pdf.Facades namespace offers you the capability to concatenate PDF files. The following code snippet from Aspose.PDF concatenates all the PDF files in a particular folder:
// The path to the documents directory.
string dataDir = RunExamples.GetDataDir_AsposePdfFacades_TechnicalArticles();

// Retrieve names of all the Pdf files in a particular Directory
string[] fileEntries = Directory.GetFiles(dataDir, "*.pdf");

// Get the current System date and set its format
string date = DateTime.Now.ToString("MM-dd-yyyy");

// Get the current System time and set its format
string hoursSeconds = DateTime.Now.ToString("hh-mm");

// Set the value for the final Resultant Pdf document
string masterFileName = date + "_" + hoursSeconds + "_out.pdf";

// Instantiate PdfFileEditor object
Aspose.Pdf.Facades.PdfFileEditor pdfEditor = new PdfFileEditor();

// Call Concatenate method of PdfFileEditor object to concatenate all input files
// into a single output file
pdfEditor.Concatenate(fileEntries, dataDir + masterFileName);
Also see Part 1 and Part 3.
When tools like the bounds checking GCC, Purify, Valgrind, etc. first showed up, it was interesting to run a random UNIX utility under them. The output of the checker showed that these utility programs, despite working perfectly well, executed a ton of memory safety errors such as use of uninitialized data, accesses beyond the ends of arrays, etc. Just running grep or whatever would cause tens or hundreds of these errors to happen.
What was going on? Basically, incidental properties of the C/UNIX execution environment caused these errors to (often) be benign. For example, blocks returned by malloc() generally contain some padding before and/or after; the padding can soak up out-of-bounds stores, as long as they aren’t too far outside the allocated area. Was it worth eliminating these bugs? Sure. First, an execution environment with different properties, such as a malloc() for an embedded system that happens to provide less padding, could turn benign near-miss array writes into nasty heap corruption bugs. Second, the same benign bugs could probably, under different circumstances, cause a crash or corruption error even in the same execution environment. Developers generally find these kinds of arguments to be compelling and these days most UNIX programs are relatively Valgrind-clean.
Tools for finding integer undefined behaviors are less well-developed than are memory-unsafety checkers. Bad integer behaviors in C and C++ include signed overflow, divide by zero, shift-past-bitwidth, etc. These have become a more serious problem in recent years because:
- Integer flaws are a source of serious security problems
- C compilers have become considerably more aggressive in their exploitation of integer undefined behaviors to generate efficient code
Recently my student Peng Li implemented a checking tool for integer undefined behaviors. Using it, we have found that many programs contain these bugs. For example, more than half of the SPECINT2006 benchmarks execute integer undefined behaviors of one kind or another. In many ways the situation for integer bugs today seems similar to the situation for memory bugs around 1995. Just to be clear, integer checking tools do exist, but they do not seem to be in very widespread use and also a number of them operate on binaries, which is too late. You have to look at the source code before the compiler has had a chance to exploit — and thus eliminate — operations with undefined behavior.
The rest of this post explores a few integer undefined behaviors that we found in LLVM: a medium-sized (~800 KLOC) open source C++ code base. Of course I’m not picking on LLVM here: it’s very high-quality code. The idea is that by looking at some problems that were lurking undetected in this well-tested code, we can hopefully learn how to avoid writing these bugs in the future.
As a random note, if we consider the LLVM code to be C++0x rather than C++98, then a large number of additional shift-related undefined behaviors appear. I’ll talk about the new shift restrictions (which are identical to those in C99) in a subsequent post here.
I’ve cleaned up the tool output slightly to improve readability.
Integer Overflow #1
Error message:
UNDEFINED at <BitcodeWriter.cpp, (740:29)> : Operator: - Reason: Signed Subtraction Overflow left (int64): 0 right (int64): -9223372036854775808
Code:
int64_t V = IV->getSExtValue(); if (V >= 0) Record.push_back(V << 1); else Record.push_back((-V << 1) | 1); <<----- bad line
In all modern C/C++ variants running on two’s complement machines, negating an int whose value is INT_MIN (or in this case, INT64_MIN) is undefined behavior. The fix is to add an explicit check for this case.
Do compilers take advantage of this undefined behavior? They do:
[regehr@gamow ~]$ cat negate.c int foo (int x) __attribute__ ((noinline)); int foo (int x) { if (x < 0) x = -x; return x >= 0; } #include <limits.h> #include <stdio.h> int main (void) { printf ("%d\n", -INT_MIN); printf ("%d\n", foo(INT_MIN)); return 0; } [regehr@gamow ~]$ gcc -O2 negate.c -o negate negate.c: In function `main': negate.c:13:19: warning: integer overflow in expression [-Woverflow] [regehr@gamow ~]$ ./negate -2147483648 1
In C compiler doublethink, -INT_MIN is both negative and non-negative. If the first true AI is coded in C or C++, I expect it to immediately deduce that freedom is slavery, love is hate, and peace is war.
Integer Overflow #2
Error message:
UNDEFINED at <InitPreprocessor.cpp, (173:39)> : Operator: - Reason: Signed Subtraction Overflow left (int64): -9223372036854775808 right (int64): 1
Code:
MaxVal = (1LL << (TypeWidth – 1)) – 1;
In C/C++ it is illegal to compute the maximum signed integer value like this. There are better ways, such as creating a vector of all 1s and then clearing the high order bit.
Integer Overflow #3
Error message:
UNDEFINED at <TargetData.cpp, (629:28)> : Operator: * Reason: Signed Multiplication Overflow left (int64): 142998016075267841 right (int64): 129
Code:
Result += arrayIdx * (int64_t)getTypeAllocSize(Ty);
Here the allocated size is plausible but the array index is way out of bounds for any conceivable array.
Shift Past Bitwidth #1
Error message:
UNDEFINED at <InstCombineCalls.cpp, (105:23)> : Operator: << Reason: Unsigned Left Shift Error: Right operand is negative or is greater than or equal to the width of the promoted left operand left (uint32): 1 right (uint32): 63
Code:
unsigned Align = 1u << std::min(BitWidth – 1, TrailZ);
This is just an outright bug: BitWidth is set to 64 but should have been 32.
Shift Past Bitwidth #2
Error message:
UNDEFINED at <Instructions.h, (233:15)> : Operator: << Reason: Signed Left Shift Error: Right operand is negative or is greater than or equal to the width of the promoted left operand left (int32): 1 right (int32): 32
Code:
return (1 << (getSubclassDataFromInstruction() >> 1)) >> 1;
When getSubclassDataFromInstruction() returns a value in the range 128-131, the right argument to the left shift evaluates to 32. Shifting (in either direction) by the bitwidth or higher is an error, and so this function requires that getSubclassDataFromInstruction() returns a value not larger than 127.
Summary
It is basically evil to make certain program actions wrong, but to not give developers any way to tell whether or not their code performs these actions and, if so, where. One of C’s design points was “trust the programmer.” This is fine, but there’s trust and then there’s trust. I mean, I trust my 5 year old but I still don’t let him cross a busy street by himself. Creating a large piece of safety-critical or security-critical code in C or C++ is the programming equivalent of crossing an 8-lane freeway blindfolded.
The analysis tool for undefined integer operations sounds very interesting — are there any plans to release it publicly?
Hi Neil– Yes, definitely! Not sure when, but hopefully in the next month or two.
I’m pretty sure that war is peace, not peace is war. I’m pretty sure that in doublespeak, “is” is double-plus unsymmetric.
Old Russian proverb: “trust by verify”.
Ugh, those pesky shifts. I recently altered the Virgil specification to mandate that shifts larger than the bitwidth of the type produce zero, as if all the bits were shifted out. Java chose differently: only the lower 5 bits of the shift value are used for int shift (6 for a long shift). This typically means generating a branch for most targets. But most shift amounts are constants and the check can be eliminated. I’m crossing my fingers that this won’t be a big deal in the future.
Ben– I think your approach is definitely better. I didn’t know Java masked off the higher bits, that’s not semantically clean.
“In C/C++ it is illegal to compute the maximum signed integer value like this. There are better ways, such as creating a vector of all 1s and then clearing the high order bit.”
According to C99 6.2.6.2 it’s undefined whether this gives you a valid integer value. An implementation might choose to represent signed integers using padding bits in such a way that this approach creates a trap value.
Hi Sebastian- The value of padding bits is unspecified (not undefined) and the presence or absence of padding bits is implementation defined. Of course I could have been more precise and said “There are better ways on C implementations whose implementation-defined behavior for integer representations is conventional…” But this didn’t seem to need to be said…. | https://blog.regehr.org/archives/226/comment-page-1 | CC-MAIN-2020-34 | refinedweb | 1,419 | 53.81 |
From: Jens Maurer (Jens.Maurer_at_[hidden])
Date: 2002-01-01 15:32:13
dwalker07 wrote:
>
> It's at:
>
>
>
> It is just a bunch of state-saving classes for I/O streams. The
> latest version moves the classes to the boost::io namespace.
This appears to be reasonable, since we may have a bunch of other
I/O stuff soon.
I think the classes are useful; I've often had the need to
temporarily switch the precision (or the fmtflags) and restore
the old value afterwards. Often, I don't do it in an
exception-safe way out of lazyness. These classes help me
here.
I've had a quick look at the implementation. I'd really
like to see these macros go away. I believe they can be
replaced by templates that take (among other things)
member function pointers as value parameters.
I'd like to see this investigated.
Jens Maurer
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2002/01/22116.php | CC-MAIN-2021-49 | refinedweb | 173 | 69.79 |
Dear all,
I'm dead new in OSM. I gave a try with the GPX import(from my garmin 800). My trace is properly imported in blue (nice mountain bike tracks that do not exist so far in OSM). Is there any trick to avoid creating node for my track (I'm using potlatch2 in firefox) ? It is really time consuming and there is no added value. I would prefer to concentrate on the proper tags to give to the tracks (which is already a though job since within few kilometres the category is changing from montain bike, to pedestrian or all terrain vehicule ...).
asked
06 Jul '13, 11:09
SdW74
11●1●1●2
accept rate:
0%
In Potlatch 2, you can load a track, then press Alt and click on the track to convert it to a way (Windows/Mac OS) or Control+Shift click on Linux. That way can then be edited as usual.
Most GPS tracks have an excessive number of points, especially if you are recording at one point per second, so it is helpful to simplify this. In Potlatch 2, you can press Y to automatically simplify a way.
You should also make sure the new way is connected to any existing ways as required. And you can check to see if there any clearly inaccurate points from the GPS, eg obvious 'spikes', or clusters of points if you stopped somewhere, then delete or move those.
But still, it is often helpful to trace GPS tracks manually. This lets you place nodes where they are needed, ie an appropriate number to make curves realistic. And it lets you spot places where the GPS appears inaccurate, and avoid drawing those. You can load several GPS traces for the same route, and use those along with aerial imagery, to draw a more accurate way. So this does 'add value', though it can be time consuming for a long twisty path.
answered
06 Jul '13, 12:30
Vclaw
9.1k●8●91●140
accept rate:
22%
thanks, it works fine (on my Mac, it is "z" instead of "y" to simplify). You are right, a GPS trace is not accurate enough, it still requires some manual post-processing.
Once you sign in you will be able to subscribe for any updates here
Answers
Answers and Comments
Markdown Basics
learn more about Markdown
This is the support site for OpenStreetMap.
Question tags:
gpx ×223
import ×166
node ×72
question asked: 06 Jul '13, 11:09
question was seen: 3,212 times
last updated: 06 Jul '13, 16:25
[closed] How to import a gpx without time tags?
Don't manage loading a gpx and then editing it to a path in OSM
Why does a GPX import report "Issue while inserting job into database"?
Wie kann ich .itm-Daten in GPX-Dateien umwandeln?
"Found no good GPX points in the input data"
Errorneous GPX zip upload
Uploading a GPX from TTGpsLogger doesn't work
how to import a twl file to osm
How should I go about importing a city?
Is there a way to undelete a deleted node (POI) ?
First time here? Check out the FAQ! | https://help.openstreetmap.org/questions/24029/gpx-import-how-to-avoid-manual-creation-of-node | CC-MAIN-2019-39 | refinedweb | 529 | 79.09 |
Hi Guys, I'm wondering if someone can help me out with a problem I've been having with VC++. I'm obviously just beginning to learn C++ and have been working with CodeBlocks and VC++ 2010 Express. My problem is that the following code compiles in CodeBlocks but gives me numerous errors and dies with VC++ (for the sake of space, I've simplified it as much as I can down to the problem area)...
In Main.cpp:
#include "Head.h" int main() { cout << strFunction(12) << endl; return 0; }
In Main2.cpp:
#include "Head.h" string strFunction(int x) { return "blah blah blah."; }
In Head.h:
#pragma once #include <iostream> using namespace std; string strFunction(int x);
Since I'm only dealing with two source files and a miniscule program, obviously a header here isnt really necessary, but I really want to learn how to work with them for later use. The maddening thing about this is:
1) The header works for some things. I use it to #include <iostream> without any trouble, I can declare global variables just fine, etc. Its really just something with any function prototypes I put in there.
2) Function prototypes work fine if I just manually copy the prototype into the other source file. I only get the errors when I reference it in the header.
Here is (a small part) of the errors I receive when I try and run the above project:
Main.cpp 1>c:\users\michael\documents\visual studio 2010\projects\project7\project7\main.cpp(5): error C2679: binary '<<' : no operator found which takes a right-hand operand of type 'std::string' (or there is no acceptable conversion) 1> c:\program files\microsoft visual studio 10.0\vc\include\ostream(679): could be 'std::basic_ostream<_Elem,_Traits> &std::operator <<<char,std::char_traits<char>>(std::basic_ostream<_Elem,_Traits> &,const char *)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> c:\program files\microsoft visual studio 10.0\vc\include\ostream(726): or 'std::basic_ostream<_Elem,_Traits> &std::operator <<<char,std::char_traits<char>>(std::basic_ostream<_Elem,_Traits> &,char)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> c:\program files\microsoft visual studio 10.0\vc\include\ostream(764): or 'std::basic_ostream<_Elem,_Traits> &std::operator <<<std::char_traits<char>>(std::basic_ostream<_Elem,_Traits> &,const char *)'
----
I'd be tremendously appreciative of any help you guys could offer me. I've tried everything I could think of and experimented with everything I can think of but nothing seems to work. FWIW, I've tried this code in a completely empty VC++ project as well as inputting it into their custom made console app. Same errors.
I'm at my wits end. Please help.
(p.s. first post, great to be on board here). | https://www.daniweb.com/programming/software-development/threads/308325/vc-headers | CC-MAIN-2017-17 | refinedweb | 469 | 63.39 |
Opened 10 years ago
Closed 9 years ago
#5066 closed (fixed)
Calling "manage.py reset <app>" on an app that has no models causes crash
Description
python manage.py --noinput reset <app>, where the specified app does not have any classes that extend models.Model, results in this error:
Traceback (most recent call last): File "manage.py", line 12, in <module> execute_manager(settings) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/core/management.py", line 1744, in execute_manager execute_from_command_line(action_mapping, argv) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/core/management.py", line 1701, in execute_from_command_line output = action_mapping[action](mod, options.interactive) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/core/management.py", line 699, in reset sql_list = get_sql_reset(app) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/core/management.py", line 384, in get_sql_reset return get_sql_delete(app) + get_sql_all(app) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/core/management.py", line 370, in get_sql_delete app_label = app_models[0]._meta.app_label IndexError: list index out of range
app_label on line 370 in django/core/management.py doesn't seem to be used anywhere, it's at the end of the method. Removing that line prevents the crash and appears not to have any side effects.
Even though it doesn't necessarily make sense to call reset on an app with no models, it is a hindrance to have this crash occur if you are, for instance, writing scripts to reset all your apps (since a reset_all isn't provided in django.core.management), or doing anything else where you might reset multiple apps at once without knowing their contents.
Attachments (2)
Change History (10)
comment:1 Changed 10 years ago by
Changed 10 years ago by
comment:2 Changed 10 years ago by
Patch now attached.
comment:3 Changed 10 years ago by
I agree that we should fix this, but what an odd patch...Removing that line seems like it would have strange side effects.
Let's explore the repercussions of this change before committing the patch.
comment:4 Changed 10 years ago by
I've run all the unit tests, it does not create any new errors or failures.
As far as side effects, what might not be obvious from the patch is that the variable is not used in that method. In fact, here's what the applicable portion of the function looks like (the ellipsis is just because the function is rather long before it gets here):
def get_sql_delete(app): ... app_label = app_models[0]._meta.app_label # Close database connection explicitly, in case this output is being piped # directly into a database client, to avoid locking issues. if cursor: cursor.close() connection.close() return output[::-1] # Reverse it, to deal with table dependencies. get_sql_delete.help_doc = "Prints the DROP TABLE SQL statements for the given app name(s)." get_sql_delete.args = APP_ARGS
That's it. That variable is never used anywhere.
As far as the question of maybe it initializes something in the model through a specialized getter, I don't see this being the case. Take a look at django/db/models/base.py, in ModelBase.new (lines 47-51):
if getattr(new_class._meta, 'app_label', None) is None: # Figure out the app_label by looking one level up. # For 'django.contrib.sites.models', this would be 'sites'. model_module = sys.modules[new_class.__module__] new_class._meta.app_label = model_module.__name__.split('.')[-2]
So _meta.app_label is always ensured to exist and be initialized. As far as the usage of _meta, there's nothing special about the _meta dict anywhere that would take advantage of accessing this variable to do more initialization than what happens in ModelBase and Model's init, and this SQL delete management call is far past that point.
So, with all that, I think this change looks pretty safe.
comment:5 Changed 10 years ago by
Taking ownership of this ticket for the sprint.
Changed 9 years ago by
comment:6 Changed 9 years ago by
Updated the patch, although it's still fundamentally the same, and still has the same crash error on an app with no models. I still haven't found any reason for this line to exist, does anyone else have an idea?
comment:7 Changed 9 years ago by
All I can suggest is raising it in django-dev again, gav.
comment:8 Changed 9 years ago by
This appears to have been fixed as a side-effect of some other change prior to 1.0. With 1.0 code, running manage.py reset on an app with no models doesn't produce an error.
Agreed. Care to attach the simple patch? | https://code.djangoproject.com/ticket/5066 | CC-MAIN-2017-22 | refinedweb | 796 | 57.57 |
Recently, when writing an ID generator, we need to compare the speed difference between UUID and the popular NanoID. Of course, we also need to test the ID generator created according to the rules.
Such code belongs to the most basic API. Even if the speed is reduced by a few nanoseconds, it can add up considerably. The key is, how can I evaluate the speed of ID generation?
1. How to count performance?
The common method is to write some statistical code. These codes are interspersed in our logic to perform some simple timing operations. For example, the following lines:
long start = System.currentTimeMillis();
// ... logic under test ...
long cost = System.currentTimeMillis() - start;
System.out.println("Logic cost : " + cost);
This way of measuring is not necessarily a problem in business code — even APM tools do something similar.
Unfortunately, the statistics this code produces are not necessarily accurate. For example, as the JVM runs it will JIT-compile and inline-optimize frequently executed code blocks. Before a stable test result can be obtained, the code needs tens of thousands of warm-up loops, and performance differs greatly before and after warm-up.
Moreover, there are many indicators for evaluating performance. Having to compute all of them by hand after every run would be tedious and inefficient.
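To make the warm-up effect concrete, here is a small plain-JDK sketch (no JMH; the class name and loop counts are illustrative) that times the same workload once cold and once after many calls have let the JIT kick in:

```java
public class NaiveTiming {

    // A tiny workload whose JIT-compiled form is much faster than the
    // interpreted first run.
    static long work() {
        long sum = 0;
        for (int i = 0; i < 10_000; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        long coldStart = System.nanoTime();
        work();
        long coldCost = System.nanoTime() - coldStart;

        // Tens of thousands of calls to let compilation settle.
        for (int i = 0; i < 20_000; i++) {
            work();
        }

        long warmStart = System.nanoTime();
        work();
        long warmCost = System.nanoTime() - warmStart;

        // warmCost is typically far smaller than coldCost, which is why a
        // single manual measurement is misleading.
        System.out.println("cold: " + coldCost + " ns, warm: " + warmCost + " ns");
    }
}
```

On a typical JVM the warm number comes out an order of magnitude or more below the cold one, although the exact figures vary from run to run.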
JMH (the Java Microbenchmark Harness) is exactly such a benchmarking tool. Once you have located the hot code with other tools, you can hand it to JMH to measure its performance and evaluate improvements. Its measurement precision is very high — down to the nanosecond.
JMH is bundled with JDK 12; for other versions, the Maven dependencies below need to be added. The coordinates are as follows.
<dependencies>
    <dependency>
        <groupId>org.openjdk.jmh</groupId>
        <artifactId>jmh-core</artifactId>
        <version>1.23</version>
    </dependency>
    <dependency>
        <groupId>org.openjdk.jmh</groupId>
        <artifactId>jmh-generator-annprocess</artifactId>
        <version>1.23</version>
        <scope>provided</scope>
    </dependency>
</dependencies>
Next, let's introduce the use of this tool.
2. Key annotations
JMH is a jar package, very similar to the unit-testing framework JUnit: basic configuration is done through annotations, and many of these settings can also be made through the OptionsBuilder in the main method.
The figure above shows a typical JMH run: multiple processes and threads are started, warm-up executes first, then the measured iterations, and finally all test data is aggregated for analysis. Pre- and post-processing hooks can also run before and after execution, at various levels of granularity.
A simple example looks like this:
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Thread)
@Warmup(iterations = 3, time = 1, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
@Fork(1)
@Threads(2)
public class BenchmarkTest {

    @Benchmark
    public long shift() {
        long t = 455565655225562L;
        long a = 0;
        for (int i = 0; i < 1000; i++) {
            a = t >> 30;
        }
        return a;
    }

    @Benchmark
    public long div() {
        long t = 455565655225562L;
        long a = 0;
        for (int i = 0; i < 1000; i++) {
            a = t / 1024 / 1024 / 1024;
        }
        return a;
    }

    public static void main(String[] args) throws Exception {
        Options opts = new OptionsBuilder()
                .include(BenchmarkTest.class.getSimpleName())
                .resultFormat(ResultFormatType.JSON)
                .build();
        new Runner(opts).run();
    }
}
Next, let's introduce the key annotations and parameters one by one.
@Warmup
An example:
@Warmup(
    iterations = 5,
    time = 1,
    timeUnit = TimeUnit.SECONDS)
We have mentioned warm-up more than once. The @Warmup annotation can be placed on classes or methods to configure warm-up. As you can see, it takes several parameters.
- timeUnit: the time unit; the default is seconds.
- iterations: the number of warm-up iterations.
- time: the duration of each warm-up iteration.
- batchSize: the batch size, i.e. how many times the method is invoked per operation.
The annotation above means the code is warmed up for five seconds in total (five iterations of one second each). Data measured during the warm-up phase is not recorded in the results.
We can see its effect in the output:
# Warmup: 3 iterations, 1 s each
# Warmup Iteration   1: 0.281 ops/ns
# Warmup Iteration   2: 0.376 ops/ns
# Warmup Iteration   3: 0.483 ops/ns
Generally speaking, benchmarks target relatively small, fast-executing code blocks. Such code is very likely to be compiled and inlined; keeping methods concise while coding also helps the JIT.
Speaking of warm-up, we should mention service warm-up in distributed environments. When a service node is published, there is usually a warm-up phase in which traffic is ramped up gradually until the node reaches its optimal state. As shown in the figure below, load balancing is responsible for this ramp-up, which is generally done by percentage.
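The ramp itself is simple arithmetic. Below is a hedged sketch of a percentage-based weight calculation (the method name is hypothetical; the linear formula mirrors what load balancers such as Dubbo's random balancer use during warm-up):

```java
public class WarmupWeight {

    /**
     * Linearly ramp a node's load-balancing weight while it warms up:
     * a node that has been up for 10% of its warm-up window gets 10%
     * of its full weight, and full weight once the window has passed.
     */
    static int warmupWeight(long uptimeMs, long warmupMs, int fullWeight) {
        if (uptimeMs >= warmupMs) {
            return fullWeight;                        // warm-up finished
        }
        int w = (int) (fullWeight * uptimeMs / warmupMs);
        return Math.max(w, 1);                        // always accept a little traffic
    }

    public static void main(String[] args) {
        // A freshly started node, 10-minute warm-up window, full weight 100:
        System.out.println(warmupWeight(60_000, 600_000, 100));   // 10 after one minute
        System.out.println(warmupWeight(600_000, 600_000, 100));  // 100 once warmed up
    }
}
```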
@Measurement
An example is as follows.
@Measurement(
    iterations = 5,
    time = 1,
    timeUnit = TimeUnit.SECONDS)
@Measurement takes the same parameters as @Warmup. Unlike warm-up, it controls the number of real, measured iterations.
We can see the execution process from the log:
# Measurement: 5 iterations, 1 s each
Iteration   1: 1646.000 ns/op
Iteration   2: 1243.000 ns/op
Iteration   3: 1273.000 ns/op
Iteration   4: 1395.000 ns/op
Iteration   5: 1423.000 ns/op
Although code shows its best performance after warm-up, that state may still differ from the real application scenario. If your test machine is extremely powerful, or its resource utilization is already at the limit, the test results will be affected. I usually give the machine ample resources during testing to keep the environment stable, and when analyzing results I pay more attention to the performance differences between implementations than to the raw numbers themselves.
@BenchmarkMode
This annotation specifies the benchmark type and corresponds to the Mode option; it can modify both classes and methods. Its value is an array, so multiple statistical dimensions can be configured at once. For example:
@BenchmarkMode({Mode.Throughput, Mode.AverageTime}) collects statistics for both throughput and average execution time.
The so-called modes can be divided into the following types in JMH:
- Throughput: overall throughput, such as QPS, call volume per unit time, etc.
- AverageTime: the average elapsed time per execution. If this value is too small to read, reduce the reporting time unit.
- SampleTime: random sampling.
- SingleShotTime: if you want to measure a single execution — say, how long the first initialization takes — use this mode. In effect it is no different from a traditional main-method timing.
- All: calculate all indicators. You can set this parameter to see the effect.
Let's take the average time to see a general implementation result:
Result "com.github.xjjdog.tuning.BenchmarkTest.shift": 2.068 ±(99.9%) 0.038 ns/op [Average] (min, avg, max) = (2.059, 2.068, 2.083), stdev = 0.010 CI (99.9%): [2.030, 2.106] (assumes normal distribution) Copy code Copy code
Since the declared time unit is nanoseconds, the average execution time of this shift method is 2.068 nanoseconds.
We can also look at the final timing summary.
Benchmark            Mode  Cnt  Score   Error  Units
BenchmarkTest.div    avgt    5  2.072 ± 0.053  ns/op
BenchmarkTest.shift  avgt    5  2.068 ± 0.038  ns/op
Since these are averages, the Error column here represents the fluctuation — i.e. the measurement error.
It can be seen that all of these metrics carry a time dimension, which is configured through the @OutputTimeUnit annotation.

@OutputTimeUnit

This one is simple: it specifies the time unit of the benchmark results and can be used on classes or methods. The usual choices are seconds, milliseconds, microseconds, and nanoseconds; for very fast methods, pick a finer unit so the numbers stay readable.
For example, @BenchmarkMode(Mode.Throughput) combined with @OutputTimeUnit(TimeUnit.MILLISECONDS) represents throughput per millisecond.
As shown below, the throughput is calculated in milliseconds.
Benchmark            Mode  Cnt       Score       Error   Units
BenchmarkTest.div    thrpt   5  482999.685 ±  6415.832  ops/ms
BenchmarkTest.shift  thrpt   5  480599.263 ± 20752.609  ops/ms
The OutputTimeUnit annotation can likewise modify classes or methods; by changing the time granularity, you can get more readable results.
@Fork
The fork value is generally set to 1, meaning a single process is used for the test. If the number is greater than 1, a new process is started for each fork. If it is set to 0, the program still runs, but inside the host JVM process — you will see the warning below, and it is not recommended.
# Fork: N/A, test runs in the host VM
# *** WARNING: Non-forked runs may silently omit JVM options, mess up profilers, disable compiler hints, etc. ***
# *** WARNING: Use non-forked runs only for debugging purposes, not for actual performance runs. ***
So does a fork run as a new process or a new thread? Tracing the JMH source shows that each fork runs in its own separate process, which fully isolates the environment and avoids cross-interference. Its input and output streams are relayed back to our terminal over a socket connection.
One tip worth sharing: the Fork annotation has a parameter called jvmArgsAppend, through which we can pass extra JVM arguments.
@Fork(value = 3, jvmArgsAppend = {"-Xmx2048m", "-server", "-XX:+AggressiveOpts"})
In ordinary tests, the number of forks can also be increased appropriately to reduce measurement error.
@Threads
Fork is process-oriented, while Threads is thread-oriented. When this annotation is specified, the test runs in parallel.
If threads is set to Threads.MAX, as many threads are used as the machine has processor cores.
@Group
The @Group annotation can only be added to methods and is used to classify test methods. If a single test file contains many methods, or you need to categorize them, use this annotation.
The associated @GroupThreads annotation then configures thread counts on top of this grouping.
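A sketch of the two annotations working together (hypothetical class; assumes the JMH dependency declared earlier): three reader threads and one writer thread run as a single group named "rw", sharing one counter through Scope.Group:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Group)
public class GroupedBenchmark {

    private final AtomicLong counter = new AtomicLong();

    @Benchmark
    @Group("rw")
    @GroupThreads(3)              // three threads run the read side
    public long read() {
        return counter.get();
    }

    @Benchmark
    @Group("rw")
    @GroupThreads(1)              // one thread runs the write side
    public long write() {
        return counter.incrementAndGet();
    }
}
```

JMH reports the group as a whole as well as each method's share, so you can see how the readers and the writer affect each other.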
@State
@State declares that a class is a "state", and its Scope parameter describes how that state is shared. The annotation must be placed on a class; otherwise JMH reports an error and refuses to run.
Scope has the following three values:
- Benchmark: the state instance is shared by all threads running the same benchmark.
- Thread: each thread gets its own copy. With the Threads annotation configured, every thread has its own variable, and they do not affect each other.
- Group: see the @Group annotation above — threads within the same group share the same instance.
The test file JMHSample_04_DefaultState demonstrates that the default scope of the variable x is Thread. The key code is as follows:
@State(Scope.Thread)
public class JMHSample_04_DefaultState {

    double x = Math.PI;

    @Benchmark
    public void measure() {
        x++;
    }
}
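To contrast the scopes directly, here is a hedged sketch (hypothetical class, JMH dependency assumed) that benchmarks the same increment once against a Scope.Benchmark state shared by all four threads, and once against a per-thread Scope.Thread state:

```java
import java.util.concurrent.atomic.AtomicLong;

import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.Throughput)
@Threads(4)
public class SharedStateBenchmark {

    @State(Scope.Benchmark)
    public static class Shared {
        final AtomicLong counter = new AtomicLong(); // one instance for all threads
    }

    @State(Scope.Thread)
    public static class Local {
        long counter;                                // every thread gets its own copy
    }

    @Benchmark
    public long contended(Shared s) {
        return s.counter.incrementAndGet();          // threads contend on one variable
    }

    @Benchmark
    public long uncontended(Local l) {
        return ++l.counter;                          // no sharing, no contention
    }
}
```

The throughput gap between the two methods is essentially the cost of the contention.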
@Setup and @TearDown
Much like the unit-testing framework JUnit, @Setup is used for initialization before the benchmark runs and @TearDown for actions afterwards, letting you do some global configuration.
Both annotations also take a Level value indicating when the method runs. It has three values.
- Trial: the default level. That is, the Benchmark level.
- Iteration: each iteration runs.
- Invocation: each method call will run, which is the most granular.
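Putting the levels together (hypothetical names, JMH dependency assumed): the input array below is rebuilt before every iteration, while a single cleanup runs after the whole trial — neither method's cost is counted in the measurement:

```java
import java.util.Random;

import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
public class SetupTearDownBenchmark {

    int[] data;

    @Setup(Level.Iteration)      // rebuild the input before every iteration
    public void prepare() {
        data = new Random(42).ints(10_000).toArray();
    }

    @TearDown(Level.Trial)       // run once, after the whole benchmark
    public void cleanup() {
        data = null;
    }

    @Benchmark
    public int sum() {
        int s = 0;
        for (int v : data) {
            s += v;
        }
        return s;
    }
}
```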
@Param
The @Param annotation can only modify fields and is used to test the impact of different parameter values on performance. Combined with the @State annotation, the execution range of these parameters can be set at the same time.
The code example is as follows:
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
@Fork(1)
@State(Scope.Benchmark)
public class JMHSample_27_Params {

    @Param({"1", "31", "65", "101", "103"})
    public int arg;

    @Param({"0", "1", "2", "4", "8", "16", "32"})
    public int certainty;

    @Benchmark
    public boolean bench() {
        return BigInteger.valueOf(arg).isProbablePrime(certainty);
    }

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(JMHSample_27_Params.class.getSimpleName())
                // .param("arg", "41", "42") // Use this to selectively constrain/override parameters
                .build();
        new Runner(opt).run();
    }
}
Note that if you configure many parameter values, they are all executed in combination, which usually takes a long time. For example, if one parameter has M values and another has N, the benchmark is executed M × N times in total.
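For the sample above that is easy to check with plain Java — five arg values times seven certainty values means 35 benchmark configurations, before warm-up and measurement iterations are even counted (the class name is illustrative):

```java
public class ParamMatrix {

    // Total benchmark configurations: the Cartesian product of all @Param values.
    static int combinations(int... valueCounts) {
        int total = 1;
        for (int n : valueCounts) {
            total *= n;
        }
        return total;
    }

    public static void main(String[] args) {
        // 5 values for "arg" x 7 values for "certainty"
        System.out.println(combinations(5, 7)); // 35
    }
}
```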
The following is a screenshot of the execution result.
@CompilerControl
This can be said to be a very useful function.
The overhead of method calls in Java is relatively large, especially when the number of calls is very large. Take the simple getter/setter method, which exists in a large number in Java code. When accessing, we need to create the corresponding stack frame. After accessing the required fields, we pop up the stack frame to restore the execution of the original program.
If the access and operation of these objects can be included in the call scope of the target method, there will be less method call and the speed will be improved. This is the concept of method inlining. As shown in the figure, the efficiency will be greatly improved after the code is JIT compiled.
This annotation can be used on classes or methods to control the compilation behavior of methods. There are three common modes.
Force the use of INLINE, prohibit the use of INLINE, and even prohibit method compilation (EXCLUDE).
2. Graphical results
The results of JMH test can be processed twice and displayed graphically. Combined with chart data, it is more intuitive. By specifying the output format file at runtime, you can obtain the performance test results in the corresponding format.
For example, the following line of code specifies to output data in JSON format.
Options opt = new OptionsBuilder() .resultFormat(ResultFormatType.JSON) .build(); Copy code Copy code
JMH supports results in the following five formats:
- TEXT exports a TEXT file.
- csv export csv format file.
- scsv exports files in formats such as scsv.
- json is exported as a json file.
- latex export to latex, a method based on ΤΕΧ Typesetting system.
Generally speaking, we can export to CSV file, operate directly in Excel and generate corresponding graphics.
In addition, several tools for drawing are introduced:
JMH Visualizer has an open source project here( jmh.morethan.io/) , by exporting json files, you can get simple statistical results after uploading. Personally, I don't think its display is very good.
jmh-visual-chart
In comparison, the following tool( deepoove.com/jmh-visual-... , it is relatively intuitive.
meta-chart
A general online chart generator. ( /), after exporting the CSV file
Some continuous integration tools such as Jenkins also provide corresponding plug-ins to directly display these test results.
END
This tool is very easy to use. It uses exact test data to support our analysis results. In general, if you locate hot code, you need to use benchmarking tools for special optimization until the performance has been significantly improved.
In our scenario, we found that using NanoID is indeed much faster than UUID.
Author: little sister taste
Link: juejin.cn/post/703100... | https://programmer.ink/think/top-java-benchmark-jmh.html | CC-MAIN-2022-05 | refinedweb | 2,570 | 58.89 |
*
A friendly place for programming greenhorns!
Big Moose Saloon
Search
|
Java FAQ
|
Recent Topics
|
Flagged Topics
|
Hot Topics
|
Zero Replies
Register / Login
JavaRanch
»
Java Forums
»
Java
»
Java in General
Author
cannot resolve a symbol with a method.
Brian Walsh
Greenhorn
Joined: Feb 06, 2002
Posts: 17
posted
Apr 14, 2003 15:13:00
0
Hi I am having problems with a program in that I am supposed to be sorting some objects with a general objects sort that I was given, and I am now recieving this compile error:
E:\Spring Bench\Bench\Chapter 6 part 2\6.12\Tunes.java:30: cannot resolve symbol symbol : method insertionSort (CDCollection) location: class Sorts Sorts.insertionSort(music); ^ 1 error Tool completed with exit code 1
Here is the Program that I having proplems with.
public class Tunes { //----------------------------------------------------------------- // Creates a CDCollection object and adds some CDs to it. Prints // reports on the status of the collection. //----------------------------------------------------------------- public static void main (String[] args) { CDCollection music = new CDCollection (); music.addCD ("Storm Front", "Billy Joel", 14.95, 10); music.addCD ("Come On Over", "Shania Twain", 14.95, 16); music.addCD ("Soundtrack", "Les Miserables", 17.95, 33); music.addCD ("Graceland", "Paul Simon", 13.90, 11); System.out.println (music); music.addCD ("Double Live", "Garth Brooks", 19.99, 26); music.addCD ("Greatest Hits", "Jimmy Buffet", 15.95, 13); Sorts.insertionSort(music);//LINE INPUTED WITH PROBLEM System.out.println (music); } }
Here is the Sorts program I am working with.
public class Sorts { //----------------------------------------------------------------- // Sorts the specified array of integers using the selection // sort algorithm. //----------------------------------------------------------------- public static void selectionSort (int[] numbers) { int min, temp; for (int index = 0; index < numbers.length-1; index++) { min = index; for (int scan = index+1; scan < numbers.length; scan++) if (numbers[scan] < numbers[min]) min = scan; // Swap the values temp = numbers[min]; numbers[min] = numbers[index]; numbers[index] = temp; } } //----------------------------------------------------------------- // Sorts the specified array of integers using the insertion // sort algorithm. //----------------------------------------------------------------- public static void insertionSort (int[] numbers) { for (int index = 1; index < numbers.length; index++) { int key = numbers[index]; int position = index; // shift larger values to the right while (position > 0 && numbers[position-1] > key) { numbers[position] = numbers[position-1]; position--; } numbers[position] = key; } } //----------------------------------------------------------------- // Sorts the specified array of objects using the insertion // sort algorithm. //----------------------------------------------------------------- public static void insertionSort (Comparable[] objects) { for (int index = 1; index < objects.length; index++) { Comparable key = objects[index]; int position = index; // shift larger values to the right while (position > 0 && objects[position-1].compareTo(key) > 0) { objects[position] = objects[position-1]; position--; } objects[position] = key; } } }
Here is the program that I pulled that line of code from to create an insertion sort...
public class SortPhoneList { //----------------------------------------------------------------- // Creates an array of Contact objects, sorts them, then prints // them. //----------------------------------------------------------------- public static void main (String[] args) { Contact[] friends = new Contact[7]; friends[0] = new Contact ("John", "Smith", "610-555-7384"); friends[1] = new Contact ("Sarah", "Barnes", "215-555-3827"); friends[2] = new Contact ("Mark", "Riley", "733-555-2969"); friends[3] = new Contact ("Laura", "Getz", "663-555-3984"); friends[4] = new Contact ("Larry", "Smith", "464-555-3489"); friends[5] = new Contact ("Frank", "Phelps", "322-555-2284"); friends[6] = new Contact ("Marsha", "Grant", "243-555-2837"); Sorts.insertionSort(friends); for (int index = 0; index < friends.length; index++) System.out.println (friends[index]); } }
I just added that last bit for people to look at so they could compare it. I figured too much info is better than too little. Thanks in advance for your help on the matter.
-Thanks in Advance
Greg Charles
Sheriff
Joined: Oct 01, 2001
Posts: 2853
11
I like...
posted
Apr 14, 2003 15:22:00
0
It seems to me that your Sorts class has two versions of insertionSort(). One takes an array of ints and the other takes an array of Comparable objects. You are trying to pass in a CDCollection. You either need to overload insertionSort() to handle a CDCollection, or get the data from CDCollection into an array of Comparables, and pass that to insertionSort().
Layne Lund
Ranch Hand
Joined: Dec 06, 2001
Posts: 3061
posted
Apr 14, 2003 16:21:00
0
There are some small, but major, differences between the example and your own program. First of all, the example declares the variable "friends" as an array of Contact objects, but your "music" variable is a single instance of CDCollection, not an array. I suspect that the Contact class extends the Comparabe interface, which allows it to use the coresponding version of insertionSort(). To do something similar, you should probably create a CD class which extends Comparable. Then you just have to create an array of CD objects which you can pass to the Sorts.insertionSort() method.
If CDCollection has such an array already, you may just want to have a CDCollection.sort() method which in turn callsl Sorts.insertionSort().
After that long-winded answer, the main point here is that insertionSort() takes an array as its argument, but you haven't created an array in your program. You can fix this problem with one of the suggestions provided by Greg or myself. Or if you can think of something else, please feel free to use it.
Keep coding!
Layne
Java API Documentation
The Java Tutorial
Brian Walsh
Greenhorn
Joined: Feb 06, 2002
Posts: 17
posted
Apr 14, 2003 17:20:00
0
thanks for the help.
I agree. Here's the link:
subject: cannot resolve a symbol with a method.
Similar Threads
help with sort code
More Array sorting trouble
im trying to sort my program with insertionSort and selectionSort
method overloading in sort program
does this sort an array of integers?
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/370966/java/java/resolve-symbol-method | CC-MAIN-2014-42 | refinedweb | 941 | 56.76 |
Opened 5 years ago
Closed 3 years ago
Last modified 2 years ago
#10917 closed New feature (fixed)
admin/base.html should contain messages block
Description
The admin/base.html template should surround messages with a block to allow customization in base_site.html.
{% block messages %}
{% if messages %}
<ul class="messagelist">{% for message in messages %}<li>{{ message }}</li>{% endfor %}</ul>
{% endif %}
{% endblock %}
Attachments (2)
Change History (12)
comment:1 Changed 5 years ago by thejaswi_puthraya
- Component changed from Uncategorized to django.contrib.admin
- Needs documentation unset
- Needs tests set
- Patch needs improvement unset
comment:2 Changed 4 years ago by lukeplant
- Needs tests unset
- Patch needs improvement set
Changed 4 years ago by Philomat
svn diff
comment:3 Changed 4 years ago by Philomat
- Has patch set
I second that. Really useful for integrating apps like django_notify
comment:4 Changed 4 years ago by anonymous
- Patch needs improvement unset
comment:5 Changed 4 years ago by russellm
- Triage Stage changed from Unreviewed to Accepted
comment:6 Changed 3 years ago by julien
- milestone set to 1.4
- Patch needs improvement set
The patch looks good, though it doesn't apply any more. This new feature comes too late for 1.3 but I'd like to see it in 1.4.
comment:7 Changed 3 years ago by SmileyChris
- Severity set to Normal
- Type set to New feature
- Version 1.1-beta-1 deleted
Agreed.
Changed 3 years ago by julien
comment:8 Changed 3 years ago by julien
- Triage Stage changed from Accepted to Ready for checkin
Updated the patch so it applies to current trunk. I think it's ready to go.
comment:9 Changed 3 years ago by lukeplant
- Resolution set to fixed
- Status changed from new to closed
comment:10 Changed 2 years ago by jacob
- milestone 1.4 deleted
Milestone 1.4 deleted
Patch needs to be "unified" style. Using svn diff from base django directory is the easiest. | https://code.djangoproject.com/ticket/10917 | CC-MAIN-2014-10 | refinedweb | 323 | 62.38 |
Are you having issues with ReSharper 10, particularly in terms of unit testing, performance in web applications and/or red code in UWP applications?
If you are, please download ReSharper 10.0.1, a bugfix update which addresses all the major issues that we acknowledged last week.
As usual, the ReSharper Ultimate installer contains compatible updates to ReSharper C++, dotTrace, dotMemory, dotCover, and dotPeek.
Please comment to let us know whether the update improves ReSharper experience compared to the initial v10 release, and if there are any outstanding hot issues so that we can address them in further releases.
Just installed 10.0.1 and DO NOT BOTHER; I had to go back to 9.2 again. I am getting the exact same behavior in unit testing: the test goes green "Pending" for about half a second, then turns orange with the message "Inconclusive, Test Not Run".
If you use NUnit and app config contains references to external configs – please follow.
Are you testing an async method? make sure it returns Task and not void.
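To illustrate the advice above: a test runner can only await an async test if the method returns Task; an async void test finishes (from the runner's point of view) before its awaits complete, which can surface as "Inconclusive" or prematurely passed tests. A minimal hedged sketch, with a hypothetical test using NUnit (the same principle applies to MSTest and xUnit):

```csharp
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class AsyncTests
{
    // An "async void" test cannot be awaited by the runner, so it may be
    // reported before its awaits ever complete:
    //   [Test] public async void Computes() { ... }   // avoid

    // Returning Task lets the runner await the test to completion:
    [Test]
    public async Task Computes()
    {
        int result = await Task.FromResult(42); // stand-in for real async work
        Assert.AreEqual(42, result);
    }
}
```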
Sorry to say, but ReSharper 10 is the worst release yet (in terms of bugs).
Any chance you can elaborate on that? As you can imagine, not knowing what’s wrong for you leave us no chance to make it work better. Thanks
Just to mention the most annoying ones:
– VS 2015 is freezing randomly (waiting for a bg operation forever to complete)
– IntelliSense quite often does not find existing objects, popup window is flickering
– Executing one test-case often starts executing all test-cases for the given test
Thanks Daniel.
As a wild guess, the freezes might have to do with RSRP-450181 although I certainly can’t be sure.
Could you please try the latest ReSharper 10.0.2 EAP build? I'm pretty certain this should fix problem 3 unless you're running NUnit 3.0 tests, and more changes might contribute to a better experience in other aspects.
“Any chance you can elaborate on that? As you can imagine, not knowing what’s wrong for you leave us no chance to make it work better. Thanks”
Here is the problem with that statement. While it is true that reporting bugs does help you to resolve them, no doubt there, your customers are not your QA staff. I agree that R# is as buggy as ever, especially the test runner. Every major release is unstable and just when things start to get stable, new major release and the instability starts all over again. With the subscription model we were told that it would allow you to deliver a better product. But what I’ve experienced is the product is still released in an unstable state, your customers QA it and you pocket the extra $$ you made with your new pricing model. So this isn’t about specific bugs, it’s about a trend.
Same issue here… Trying to run tests from the UI simply hangs VS badly… Had to kill it to be able to go back to work…
@Marcelo Are you able to reproduce the issue in a demo project? What test framework do you use – MSTest or NUnit?
Same issue. Seems inconsistent as to when it crashes. VS2013 framework 4.51, NUnit 2.6.4. I’ve not got “run all from solution” to run without crashing yet, individual tests are a crapshoot.
@Ian If VS just hangs, please collect a dump and file a request here
Using xUnit tests, xUnit test runner and VS 2015 the unit testing works for me. The experience is so far the same as R# 9.2.
Thanks, Rory, that’s great to know.
I also haven’t had any major problems using nunit on a daily basis and mstest on occasion.
But i’m using mostly VS 2010.
Thanks for letting us know, Jeffrey
I am using nUnit 2.6.4, and it does not work at all. All tests hang in test harness, and give error “Inconclusive, Test Not Run”.
Charles, please follow Alexander’s instructions given above. Thanks
Thanks – my intellisense is back to normal speeds now!
Me too!
Cool, thanks for letting us know!
10.0.1 causes VS 2015 to crash a lot. I had to return to 10.0.0
Interesting. In what circumstances does VS2015 crash in your case? Do I understand correctly that while 10.0.1 does seem to cause crashes, 10.0 does not?
Also, there’a comment by JetBrains QA inviting you to get contact with them to learn more details. I’d appreciate if you do contact them so that we have a chance to figure out what’s going on. Thanks
I’ve had to disable Resharper altogether with the latest update. It continues to lock up VS2013.
Giorgio, can you clarify what do you mean lock up? How do we reproduce this problem? Can you possibly contact ReSharper support to investigate what’s going on? Thanks
@Ian Could you please mail me directly Kirill.Falk@jetbrains.com I need more details to replicate it on our side. Sorry for any inconveniences.
Code cleanup in VS 2013 still not working
Perfectly working here. How do you run Code Cleanup and what kind of projects are you trying to apply it to?
Same for me, since upgrading from 9.x. Any c# file in any solution (existing or new) I get
when attempting cleanup it tells me that Resharper_CleanupCode is not currently available.
Toby, do you have the ReSharper.StyleCop plugin installed? If you are, then this is what is known to block Code Cleanup. Please uninstall the StyleCop plugin to work around the problem. Let’s hope the plugin author is quick enough to update the plugin to address this problem.
I try to run it using shortcut (Ctrl+F) and nothing happens. Then i try to find Cleanup code menu item and see nothing too
In addition settings window when i try to change something in VS 2013 and apply it don’t work too.
Maybe i need to reinstall R#
Please see my comment above: the problem is most probably caused by having ReSharper.StyleCop plugin installed.
You are right. The plugin installed. I’ll remove it and try again
ReSharper.StyleCop removing helped. Thx
10.0.1 is a little bit better than 10.0.0… but my Razor views continue to mark tags such as "Html", "Url", "ViewBag" and "ViewData" as red code. It's automatically corrected by ReSharper 20 seconds later, but it's very annoying and unproductive.
I'm very disappointed with v10+
I wonder if clearing caches can solve your problem, Xavi.
Same issue here. I’ve been banging my head against this one for a while. At least it is not just me, as I never could repro red squigles in Razor files.
Anyone who experiences red code in Razor view – any chance to replicate such behavior to a demo project? Please file a new request to then.
Hi,
A few days ago I installed ReSharper C++ 10.0.0 and have now updated to 10.0.1, but the very same problem persists: whenever I try to apply a suggestion which ReSharper presents (say, turning a function into a const function), everything freezes for 30-35 seconds. After that the changes are made, but it often takes a little while longer before I can use the VS editor again.
What I have tried:
– Disabled VS PowerTools and other extensions
– Googled for help
– Tried to make a performance trace via Resharper->Help->Submit bug… but the download of the performance profiler fails.
– Made sure there is no Entire Solution Analysis enabled
PC Setup:
4-core (8 thread Xeon)
12GB RAM
VS2010 SP1
Windows 7 x64
Please follow the steps mentioned here to collect a snapshot via dotTrace.
Alright, I have just now done this and submitted an inssue in the tracker:
Thanks a lot, Leo
At first look 10.0.1 fixes the bugs that annoyed me most.
For now this seems to be an improvement from 10.0.0
Thanks, Draupnir, that’s good to know.
NUnit 2.6.4 works for me just fine now in 10.0.1 (mostly for integration and visual tests, using NCrunch for all the rest), NUnit 3 support would be nice, but I guess I have to wait for 10.0.2.
Thanks, Benjamin, good to know that NUnit 2.x works well for you.
As to NUnit 3.0 final, yes, we’re planning to enable its support in 10.0.2. AFAIK RSRP-449816 currently represents the major problem with 3.0.
What has happened to automatic reference import? When clicking on a suggested project, R# does something – but the reference does NOT show up!?
Thomas, can you please provide more details on this behavior (as well as Visual Studio version and project types used in solution), here or via ReSharper support? Thanks
Sure: VS 2015 (german) with PowerCommands on Win 10 x64 (also german), all projects in solution C# .net 4.5, all class libraries except startup project which is a Winforms application.
Also, all projects compile to one output directory which is a subfolder of the solution. I tried adding references in startup project.
Thomas, please collect a R# log when calling “automatic reference import” (run VS with ‘devenv /ReSharper.LogFile C:\log\resharper_log.txt /ReSharper.LogLevel Verbose’) and then file a request to
10.0.1 is reporting my solution has 14 thousand errors when it doesn’t have any.
Ronnie, can you try clearing ReSharper caches to see if it helps? Thanks
Same here and no, clearing the cache did not help.
I downgraded to version 10.0.0 and everything works for me and my colleagues.
Test output window is still not working. Using xUnit.
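(Editorial aside, not confirmed as the cause of the report above: xUnit 2.x deliberately does not capture Console output; test output only reaches a runner's output pane when written through an injected ITestOutputHelper. A hedged sketch with a hypothetical test class:)

```csharp
using Xunit;
using Xunit.Abstractions;

public class OutputExample
{
    private readonly ITestOutputHelper _output;

    // xUnit injects the helper through the test class constructor.
    public OutputExample(ITestOutputHelper output)
    {
        _output = output;
    }

    [Fact]
    public void Writes_Captured_Output()
    {
        _output.WriteLine("captured by the runner's output pane");
        // Console.WriteLine("NOT captured by xUnit 2.x");
        Assert.True(true);
    }
}
```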
Hi Erik. Could you post a little more detail, please? What are you doing and what’s failing? The tests I’m running locally are updating the output pane as expected.
I use the "Stack Trace Explorer" window to show the errors on NUnit tests: the normal window doesn't get anything after the first output (successful or failed)
Could you provide more details, please? The output pane is updating properly for me. What are the exact steps that cause it to fail? Do you have any other VS/ReSharper extensions installed? Do you see a completely blank output pane, or do you see the old result – do you see the red “failed” or green “passed” banner?
Matt,
The output pane is updated only the first time a test is called: if I call it again, it retains the old result.
In case of an exception, I can analyze it through the Stack Trace Explorer for the updated version, but if the previous result was a success and it didn’t print anything, the pane remains blank.
The banners are updated correctly tho.
Do you have a solution we could use to reproduce this? Also, have you tried a) closing the unit test session (each tab, not just the window) and b) clearing the caches for the solution (the General pane in ReSharper -> Options)?
I did close all the tabs and cleared the cache.
You can use DeveelDB as test solution to find it out.
I’m afraid I can’t reproduce the error – the output updates as expected. Are you sure you’re on 10.0.1? If so, please can you log an issue so that we can explore further?
Maybe in the next century the ReSharper team will use TDD (also for plugin tests)?
I really appreciate the great and colorful installer, but the value of software comes from stability…
Am I right or what?
Sorry, Ech, but I fail to understand what you’re trying to say, specifically how using TDD directly relates to stability and what you’re referring to when you say “plugin tests”.
The Build & Run is still not working for me after this update.
I have an Azure Host project which has its ServiceConfiguration.cscfg files. The ReSharper Build & Run doesn't work for this since we don't have a default ServiceConfiguration.cscfg. This doesn't matter for a Visual Studio build, and I would therefore presume it shouldn't for ReSharper either; it is a very conscious choice not to have these.
To replicate this, simply have an Azure Host project for a cloud service (with its roles and profiles) and delete the default service configuration (by adding a new one and removing the generic one). This should run in VS but not in ReSharper Build & Run.
The sad thing is this is how most projects I work on are set up, so it basically renders this otherwise super useful feature useless.
Another little idea for this: make the output in the build & run popup copy-able so when browsing for errors like this one, you don’t have to manually type it 😉
Exact error:
Error WAT200: No default service configuration “ServiceConfiguration.cscfg” could be found in the project. (0, 0)
Jan-Pieter, we’re going to fix this problem (see RSRP-450390) in a future release, hopefully 10.0.2 that’s expected in December.
Issues seen so far with 10.0.0.1:
1. The numbers of tests passed/failed/ignored at the top do not match the numbers in the tree.
2. Generally very slow to run the tests.
3. While trying to expand and see all the xUnit InlineData theory tests, it won't show them all.
4. The right pane no longer tells where exactly the test failed.
Hi Ramesh. Can you give me any more details, please? And if you have a repro, that would be very helpful. I can recreate no. 3, but not the others.
Ramesh, can you tell if the numbers are different because it’s not counting theory tests? It looks like it’s deleting theories after a successful run, and that can cause the numbers to be wrong – the top numbers show the actual tests run (i.e. minus the theories) while the tree is still showing 1 test for each method that is also a theory (so instead of showing e.g. 3 tests because there were 3 rows of data, all of the theories have gone, and the tree is counting the theory node as 1 test). Does this make sense, and does it describe your scenario?
Its intermittent, sometimes they match, some times they don’t, that’s what baffles me.
first of all, I wish to thank you for granting me the Open-Source license for my project DeveelDB, but…
Since I updated to version 10 of R# everything screwed up: the amount of errors escalated without control and basically any function is not usable anymore.
The most annoying thing, anyway, is about the dotTrace integration: when trying to run the profiler on NUnit tests (no matter which kind of TestFixture), after the dotTrace popup appears and lets me chose the strategy, the NUnit tests are always marked as “Inconclusive: Test Not Run”.
I tried all the configurations, and even changing the build platform, setting explicitly the NUnit installation, etc. Nothing worked.
Antonello,
First of all, is your feedback up to date in its entirety upon installing 10.0.1?
As to the inconclusive problem with NUnit, is it possible that you have references to external configs as described in RSRP-450410?
If you use NUnit and app config contains references to external configs – please follow .
Jura,
Thanks for the reply.
I tested with both versions 10.0.0 10.0.1 without results. I also downgraded to version 9.2 but with the same result.
The NUnit test projects have no app.config whatsoever, nor do the referenced assemblies to be tested.
The assemblies to be tested are all libraries (a SQL database engine and some of its components): they use external references, restored from NuGet, but as far as I understand it doesn’t fall into the case described by the issue above.
Regards
Thanks for clarifying, Antonello.
Do I understand correctly that the problems that you outlined above can be reproduced with the DeveelDB solution?
Also, can you clarify whether you’re using Visual Studio 2015 or a different version of Visual Studio?
Thanks
Sorry for not giving out the full environment information:
– Visual Studio 2015 Professional
– R# 10.0.1 Ultimate
– dotTrace 10.0.1
– NUnit 2.6.4 (installed in the local machine)
– No extensions to R# (I removed them all)
Fails for all platforms configured (x86, x64, Any CPU).
Is there any local log file I can explore to find out the cause of the issue?
dotTrace actually displays the attempted runs of the NUnit tests from Visual Studio: it says they can be run only from Visual Studio, so I cannot launch them.
Also, using the R# NUnit Test Runner works (a bit slow, but works).
As a side note, when I launch the profiling of any NUnit test, the popup window of the dotTrace options automatically selects the usage of the “Profiler API”, that I haven’t referenced in the NUnit project: I tried the “Advanced” configuration and deselecting the option, but with no results.
Jura,
Yes, you can use DeveelDB solution to test: it’s the one causing this issue actually
Thanks Antonello, we’ll be looking at your solution shortly to see whether the problems described can be reproduced.
Antonello, thanks a lot for providing elaborate info about the issue.
We have raised a bug, RSRP-451078, based on your report. Hopefully we can fix it for the next bugfix update in December.
When I ran the downloaded installer, it would always fail to install (I don’t have the error unfortunately on hand). I had to select Check for Updates from Resharper > Help menu. That updated Resharper to 10.0.1.
Please file a request to with all files from C:\Users\{User Name}\AppData\Local\JetBrains\Shared\v04
Still shit. Don’t use it.
Can you release just simply FULL TESTED program??
and:
I’m not really sure what kind of reply you’re expecting, Mateusz.
I’m running VS2015, with resharper 10.0.1 and the “Find Code Dependent On Module” along with the two options next to it would not show up when right clicking on references. When I first loaded VS, they would appear only for the first right click I made on an assembly, meaning only the very first assembly I right clicked on would get the option, and only the first time right clicked on it. I tried clearing the Resharper Cache, rebooted, and it was still not working.
Going back down to 9.2 the options appear every time.
Interesting. There’s a known problem (RSRP-450647) whereby “Find Code Dependent on Module” doesn’t show for assemblies that are not strong-named. However, AFAIK this problem also affected the 9.x family.
Can you confirm that for those assemblies where “Find Code Dependent on Module” is not available, the “Strong Name” property is set to “False”? If this is not the case, then please let us know whether any other properties are different among referenced assemblies that do and do not show “Find Code Dependent on Module”.
Thanks
In my solution (all assemblies have strong names, the value is True) this option is missing too. R# v10.0.1
What's more, all R# options available under the context menu are missing in VS 2015. The same R# version works fine for VS 2013
I have the same problem with no R# context menu showing up for References (VS2015/R# Ultimate 10.0.2). However I have a mix of StrongNamed and NotStrongNamed. Context menu is present for Strong Name = True, but MISSING for Strong Name = False
Resharper doesn’t seem to be honouring namespace restrictions of [SetUpFixture]s in NUnit, in Resharper 10, unless you run tests individually.
Example is the unit tests from (they launch ChromeDriver when they shouldn’t).
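For context on the expected behavior: NUnit's [SetUpFixture] applies to all tests in its own namespace and nested namespaces, so a browser-launching fixture should only run for tests under that namespace. A hedged sketch (the namespace, class and method names are hypothetical; NUnit 3 attribute names shown, with the NUnit 2.x equivalents in comments):

```csharp
using NUnit.Framework;

namespace MyApp.BrowserTests // hypothetical namespace
{
    // Runs once before/after ALL tests under MyApp.BrowserTests.*;
    // tests in unrelated namespaces should not trigger it.
    [SetUpFixture]
    public class BrowserSession
    {
        [OneTimeSetUp]    // NUnit 2.x uses [SetUp] here
        public void Start() { /* e.g. launch ChromeDriver */ }

        [OneTimeTearDown] // NUnit 2.x uses [TearDown] here
        public void Stop() { /* e.g. dispose ChromeDriver */ }
    }
}
```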
Hi Chris,
I have inquired the team to see whether this is a known and reproducible problem, and if there’s a workaround available.
Thanks
Chris,
We have managed to reproduce the problem in the following scenario: when you have a session that includes tests from namespaces that are decorated with the ChromeDriver-launching [SetUpFixture] as well as tests from namespaces that aren’t supposed to launch ChromeDriver, then when you remove the former, ReSharper still launches ChromeDriver. This is however most likely a bug with the way ReSharper removes tests from a session rather than with not honoring namespace restrictions of [SetUpFixture]s.
If what I’m describing if similar to what happens on your machine, then a workaround would be to set up separate test sessions for running [SetUpFixture]-decorated tests and other tests.
We’ll be looking to investigate and fix the problem with removing tests as soon as possible.
Should the problem display without having to remove tests from any sessions, then please provide step-by-step guidelines to reproduce it.
Thanks
After updating to v10.0.1, I have a few thousand "Cannot resolve symbol xxx" errors for types not in the System namespace. Clearing the cache did not help.
I could only fix it by rolling back to V10.0.0.
Same here! Any workaround, JetBrains folks?
@Demian and @João Pedro Lopes Any chance to reproduce the issue in a sample demo solution?
Sorry, I couldn’t reproduce this behavior in another solution. However, after setting the target framework of all the projects in the solution to the same version this issue stopped!
Thank you, hope this info helps somehow.
Same here
also going back to 10.0.0.0.
After 10.0.1, I’m getting 700+ typescript errors for “Cannot find symbol” and “Cannot resolve symbol”.
Clearing the cache only works temporarily, after which the errors reappear.
@Rusty Do you find any specific actions which lead to “Cannot resolve symbol” errors reappearing? Or it just randomly happens.
If you got TypeScript 1.7 update, please specify TypeScript version 1.6 in ReSharper settings. ReSharper fallbacks to 1.3 now if it cannot detect version, known issue, will be fixed in 10.0.2.
10.0.1 is an improvement over 10, many thanks for that. However, I’m still experiencing some odd issues with the test runner.
Running:
* Visual Studio 2015 (Community Edition)
* NUnit 3.0.0
Issue 1:
Debug output is not appearing in the test runner output window. The VS output is displaying debug output as per normal.
Issue 2
When debugging a single test, all tests are debugged! To elaborate: If I place a break point in “TestA” and in “TestB”, when debugging “TestB” only the breakpoint in “TestA” is also hit!
This is not affected by creating a single test session. I’ve tried debugging tests from the runner, test explorer and the in-line editor context option. Same behaviour each time.
Many Thanks,
Thanks for the detailed report, Crispin.
We’ll be looking to reproduce this but I should note that ReSharper 10.0.1 doesn’t yet support NUnit 3.0 RTM (it only supports NUnit 3.0 Beta 5) because RTM had been released just a day before 10.0.1 release day. We’ll be fine-tuning NUnit 3.0 support to match RTM for 10.0.2 that is expected in December.
Intellisense is still incredibly laggy for me. Here’s my setup:
* ReSharper Ultimate 10.0.1
* VS 2015 Pro
* Windows 8.1 Enterprise running in VMWare Fusion 8.0.2 on OS X 10.11.1
** 7 GB RAM (with R# enabled, Task Manager shows 4.1 GB total memory in use). The physical machine has 16 GB total memory.
** 2 processor cores dedicated to VM (out of 4 total)
Any ideas on how I can speed things up to make R# usable again?
Thank you,
Jon
@Jon Please collect a performance snapshot ReSharper | Help | Profile a Visual Studio.
When I try to attach a Memory Snapshot, I get an error that says, “Failed to attach using self-profiling API.” Any ideas?
NM – I figured it out. Snapshot sent. Thanks.
I cannot use 10.0.1. My code is full of red-colored symbols (“missing references” and similar), while in fact it was completely OK in 10.0.0 and builds without problems. This is in a solution with many interrelated C# projects. Most of them are just code libraries, with no dependence on particular technology (Forms, Web etc.). Interestingly, the lowest level project (that does not depend on any other) behaves correctly. But all others have this problem. It looks like that ReSharper does not “see” the project dependencies/references.
I have the same problem here, but only for unit-test classes.
A clear of the resharper cache did not help, since everything is red again, after resharper processed the source files.
This means i have to disable resharper, if i want to write tests productively.
Same problem here with 10.0.1 (red-colored namespaces and attributes mainly in test classes). Productivity sink.
Any chance this will be fixed in the next update?
Please try installing R# 10.0.2 EAP build. Let us know about results.
I had the same issue, even with the 10.0.2 EAP release
Have R# 10.0.2 EAP thrown any exceptions like “Sequence contains more than one element” when you open the solution? Does the solution have modelproj or wixproj projects?
Yes, I was getting that same error. We do have a number of modelproj’s in our solution.
I installed the latest EAP build (12/15) and it resolved the issue for me.
R# 10.0.1, VS 2015 Update 1. There is no menu item “Find code dependent on module” in the context menu for a referenced assembly. The main Resharper menu (Resharper | Find) also doesn’t have this function at all.
Maxim, please see my earlier comment above. This is most likely a known issue with assemblies that are not strong-named. Not sure if this is going to be fixed in the scope of 10.0.2 to be honest.
I’ve updated to R# Ultimate 10.0.2 EAP 4 but I still got the “Inconclusive” Warning when running tests with NUnit 2.6.2.12296
The first test project runs fine but the next ones a skipped with an “Inconclusive” warning.
This does also happen when I try to use continuous testing.
The app.config file of the test projects doesn’t include other config files (as I’ve read this caused a similar issue).
Hello Kev,
Sorry for inconvenience! Could you please provide some more details:
1) what Visual Studio version you’re using?
2) what type of project is your project containing tests (usual class library, DNX…)?
3) are there any postbuild steps? Do you move your test dlls anywhere? Is there a chance that test runner doesn’t find it?
4) please launch Studio in R# Internal mode (with a key /ReSharper.Internal), go to ReSharper | Options | Unit Testing | Enable logging; run your tests; go to %temp%\JetLogs and send me (alexandra.rudenko@jetbrains.com) the log file – it’s called something like JetBrains_ReSharper_TaskRunner_CLR45_x64.2015-12-10T14-14-07#35272.log.
Thanks in advance.
Hi Asya,
I’ve sent you an email with the log file and further information.
Using VS 2015 with R# 10.0.1 – Win-7(64bit):
Behavior:
When we have been editing for a period of time, especially in XAML designer but not exclusively, and then move to C# code, every time we type the opening curly brace “{” and then press “Enter” to put them on separate lines,
Result:
Visual studio 2015 IDE freezes for between 5 and 30 seconds and sometimes longer and is completely unresponsive . The longer one has been editing, the worse the problem. Also them memory display in the bottom status line shows that memory usage continues to increase over time even when the code base is not expanding.
This all stops when we disable Resharper.
Same problem exactly.
I’m using R# 10.0.2 and I still have the following issues with test runner:
– Runs tests randomly, or maybe just ones marked Explicit. Says it isn’t running them but does, or I try to run explicitly and it doesn’t.
– Shows no console output until the test has finished.
Hi,
I’m using Resharper for years now. But i can not get familar with the new look of the unit test session results. It just confuses me and makes them useless to me, even if it has the same meaning. But it is so much line breaked that it is unreadable to me. Is there any way to turn it back to the old style, but changing back to resharper 9?
Thanks and Kind Regards
Chris, can you try playing with ReSharper > Options > Tools > Unit Testing > Wrap long lines in Unit Test Session output? | https://blog.jetbrains.com/dotnet/2015/11/16/resharper-ultimate-10-0-1/?replytocom=448052 | CC-MAIN-2019-51 | refinedweb | 4,919 | 74.79 |
GETLOGIN(3P) POSIX Programmer's Manual GETLOGIN(3P)
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
getlogin, getlogin_r — get login name
#include <unistd.h> char *getlogin(void); int getlogin_r(char *name, size_t namesize);. The getlogin() and getlogin_r() functions may make use of file descriptors 0, 1, and 2 to find the controlling terminal of the current process, examining each in turn until the terminal is found. If in this case none of these three file descriptors is open to the controlling terminal, these functions may fail. The method used to find the terminal associated with a file descriptor may depend on the file descriptor being open to the actual terminal device, not /dev/tty.
Upon successful completion, getlogin() shall return a pointer to the login name or a null pointer if the user's login name cannot be found. Otherwise, it shall return a null pointer and set errno to indicate the error. The application shall not modify the string returned. The returned pointer might be invalidated or the string content might be overwritten by a subsequent call to getlogin(). The returned pointer and the string content might also be invalidated if the calling thread is terminated. If successful, the getlogin_r() function shall return zero; otherwise, an error number shall be returned to indicate the error.
These functions may fail if: EMFILE All file descriptors available to the process are currently open. ENFILE The maximum allowable number of files is currently open in the system. ENOTTY None of the file descriptors 0, 1, or 2 is open to the controlling terminal of the current process. ENXIO The calling process has no controlling terminal. The getlogin_r() function may fail if: ERANGE The value of namesize is smaller than the length of the string to be returned including the terminating null character. The following sections are informative.
Getting the User Login Name S‐1988(3p), getpwuid(3p), geteuid(3p), getuid(3p) The Base Definitions volume of POSIX.1‐2017, limitsLOGIN(3P)
Pages that refer to this page: unistd.h(0p), logname(1p), endgrent(3p), endpwent(3p), getpwuid(3p) | https://man7.org/linux/man-pages/man3/getlogin_r.3p.html | CC-MAIN-2021-04 | refinedweb | 377 | 55.13 |
So I'm trying some stuff out with selenium and I really want it to be quick.
So my thought is that running it with headless chrome would make my script faster.
First question is that Is it the correct assumption, or it does not matter if I run my script with a headless driver or not?
Anyway I still want to get it to work to run headless, but I somehow can't, I tried different things.
But when I try that, I get weird console output and it still doesn't seem to work.
Any tips appreciated.
To run chrome-headless just add --headless via chrome_options.add_argument, i.e.:
from selenium import webdriverfrom selenium.webdriver.chrome.options import Optionschrome="....
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
chrome="....
Try using chrome options like --disable-extensions or --disable-gpu and benchmark it, but I wouldn't count with much improvement.
References: headless-chrome
Note: As of today, when running chrome headless, you should include the --disable-gpu flag if you're running on Windows. | https://intellipaat.com/community/30397/headless-chrome-selenium-running-selenium-with-headless-chrome-webdriver | CC-MAIN-2020-05 | refinedweb | 178 | 67.45 |
TELLDIR(3) Linux Programmer's Manual TELLDIR(3)
telldir - return current location in directory stream
#include <dirent.h> long telldir(DIR *dirp); Feature Test Macro Requirements for glibc (see feature_test_macros(7)): telldir(): _XOPEN_SOURCE || /* Glibc since 2.19: */ _DEFAULT_SOURCE || /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE
The telldir() function returns the current location associated with the directory stream dirp.
On success, the telldir() function returns the current location in the directory stream. On error, -1 is returned, and errno is set appropriately.
EBADF Invalid directory stream descriptor dir
POSIX.1-2001, POSIX.1-2008, 4.3BSD..
closedir(3), opendir(3), readdir(3), rewinddir(3), scandir(3), seekdir(3)
This page is part of release 5.01 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. 2017-09-15 TELLDIR(3)
Pages that refer to this page: closedir(3), dirfd(3), opendir(3), readdir(3), rewinddir(3), scandir(3), seekdir(3) | http://man7.org/linux/man-pages/man3/telldir.3.html | CC-MAIN-2019-22 | refinedweb | 165 | 59.09 |
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo
You can subscribe to this list here.
Showing
6
results of 6
Hi,
Using a HP Pavilion a805n running v2.6.12.3 + a few non-acpi related
patches (gentoo-sources-2.6.12-r6). Though I've defined \_TZ_.THRM as
"External" in the dsdt.dsl file (and it compiles w/o any
warnings/errors) I still get:
=20
ACPI-0352: *** Error: Looking up [\_TZ_.THRM] in namespace,
AE_NOT_FOUND
search_node c14731a0 start_node c14731a0 return_node 00000000
ACPI-1138: *** Error: Method execution failed [\_GPE._L1C] (Node
c14731a0), AE_NOT_FOUND
ACPI-0552: *** Error: AE_NOT_FOUND while evaluating method [_L1C]
for GPE[ 0]
I get this regardless of using a custom dsdt or not.
tia!
Hi!
> .
Okay.... well some kind of "patch applied" notification would be nice.
Pavel
--
if you have sharp zaurus hardware you don't need... you know my address
Hi!
I've finally found why I had so strong intuition that the solution is very
near :-).
The problem is that -in -rc4 there is not the famous ec_polling patch which
I need... After patching it again it started to work again.
I've found that the deferred function is really called and that it is
acpi_ce_gpe_query(). Without the ec_polling patch, a "burst variant" of the
query is called, which doesn't work here.
So I vote for inserting ec_polling patch to 2.6.13-final..
WIth regards, Pavel Troller
I'm going to Lindent the entire ACPI sub-system
at the start of 2.6.14 -- so best to hold style
and whitespace patches till after that hits the tree.
thanks,
-Len
=20
.
thanks,
-Len
> > It's exactly as the subject says - /proc/acpi/event is totally dummy
>
> It might be something particular to your configuration or hardware?
> I've just tried 2.6.13-rc4's /proc/acpi/event on my TP 600X and it's
> working okay. 'cat /proc/acpi/event' produces lid events, and acpid
> is reading from it into the /var/log/apcid
>
Hi!
I was trying to debug it a bit:
1) I put some printk's in drivers/acpi/bus.c, in acpi_bus_generate_event()
and acpi_bus_receive_event(). They proved that neither of these functions
is even called.
2) I modified /proc/acpi/debug_level. I've found that when I set the
ACPI_LV_EXEC bit, every event generates a line
osl-0698 [04] os_queue_for_execution: Scheduling function [c024bbdd(dbe5a900)] for deferred execution.
(it is always the same regardless of the event which caused it).
So the events are generated by hardware and at least osl knows about them.
It seems that the execution is deferred ad infinitum :-).
I didn't find any other bit which would generate anything more.
Now I'm stuck a bit, I don't know how to continue with debugging.
Any help is greatly appreciated.
With regards, Pavel Troller | http://sourceforge.net/p/acpi/mailman/acpi-devel/?viewmonth=200507&viewday=31 | CC-MAIN-2015-22 | refinedweb | 482 | 76.72 |
using Directive (C# Reference)
The using directive has two uses:
To allow the use of types in a namespace so that you do not have to qualify the use of a type in that namespace:
To create an alias for a namespace or a type. This is called a using alias directive.. The right side of a using alias directive must always be a fully-qualified type regardless of the using directives that come before it.
Create a using directive to use the types in a namespace without having to specify the namespace. A using directive does not give you access to any namespaces that are nested in the namespace you specify.
Namespaces come in two categories: user-defined and system-defined. User-defined namespaces are namespaces defined in your code. For a list of the system-defined namespaces, see .NET Framework Class Library.
For examples on referencing methods in other assemblies, see Creating and Using C# DLLs.
Description
The following example shows how to define and use a using alias for a namespace:
Code
A using alias directive cannot have an open generic type on the right hand side. For example, you cannot create a using alias for a List<T>, but you can create one for a List<int>.
Description
The following example shows how to define a using directive and a using alias for a class:
Code
using System; // Using alias for a class. using AliasToMyClass = NameSpace1.MyClass; namespace NameSpace1 { public class MyClass { public override string ToString() { return "You are in NameSpace1.MyClass"; } } } namespace NameSpace2 { class MyClass { } } namespace NameSpace3 { // Using directive: using NameSpace1; // Using directive: using NameSpace2; class MainClass { static void Main() { AliasToMyClass somevar = new AliasToMyClass(); Console.WriteLine(somevar); } } } // Output: You are in NameSpace1.MyClass
For more information, see the following sections in the C# Language Specification:
9.3 Using directives | https://msdn.microsoft.com/en-US/library/sf0df423(v=vs.90).aspx | CC-MAIN-2017-09 | refinedweb | 304 | 55.95 |
Muon Site¶
Positive muons implanted in metals tend to stop at interstitial sites that correspond to the maxima of the Coulomb potential energy for electrons in the material. In turns the Coulomb potential is approximated by the Hartree pseudo-potential obtained from the GPAW calculation. A good guess is therefore given by the maxima of this potential.
In this tutorial we obtain the guess in the case of MnSi. The results can be compared with A. Amato et al. [Amato14], who find a muon site at fractional cell coordinates (0.532,0.532,0.532) by DFT calculations and by the analysis of experiments.
MnSi calculation¶
Let’s perform the calculation in ASE, starting from the space group of MnSi, 198, and the known Mn and Si coordinates.
from gpaw import GPAW, PW, MethfesselPaxton from ase.spacegroup import crystal from ase.io import write a = 4.55643 mnsi = crystal(['Mn', 'Si'], [(0.1380, 0.1380, 0.1380), (0.84620, 0.84620, 0.84620)], spacegroup=198, cellpar=[a, a, a, 90, 90, 90]) for atom in mnsi: if atom.symbol == 'Mn': atom.magmom = 0.5 mnsi.calc = GPAW(xc='PBE', kpts=(2, 2, 2), mode=PW(800), occupations=MethfesselPaxton(width=0.005), txt='mnsi.txt') mnsi.get_potential_energy() mnsi.calc.write('mnsi.gpw') v = mnsi.calc.get_electrostatic_potential() write('mnsi.cube', mnsi, data=v)
The ASE code outputs a Gaussian cube file, mnsi.cube, with volumetric data of the potential (in eV) that can be visualized.
Getting the maximum¶
One way of identifying the maximum is by the use of an isosurface (or 3d contour surface) at a slightly lower value than the maximum. This can be done by means of an external visualization program, like eg. majavi (see also Plotting iso-surfaces with Mayavi):
$ python3 -m ase.visualize.mlab -c 11.1,13.3 mnsi.cube
The parameters after -c are the potential values for two countour surfaces (the maximum is 13.4 eV).
This allows also secondary (local) minima to be identified.
A simplified procedure to identify the global maximum is the following
# Creates: pot_contour.png from gpaw import restart import matplotlib.pyplot as plt import numpy as np mnsi, calc = restart('mnsi.gpw', txt=None) v = calc.get_electrostatic_potential() a = mnsi.cell[0, 0] n = v.shape[0] x = y = np.linspace(0, a, n, endpoint=False) f = plt.figure() ax = f.add_subplot(111) cax = ax.contour(x, y, v[:, :, n // 2], 100) cbar = f.colorbar(cax) ax.set_xlabel('x (Angstrom)') ax.set_ylabel('y (Angstrom)') ax.set_title('Pseudo-electrostatic Potential') f.savefig('pot_contour.png')
The figure below shows the contour plot of the pseudo-potential in the plane z=2.28 Angstrom containing the maximum
The absolute maximum is at the center of the plot, at (2.28,2.28,2.28), in Angstrom. A local maximum is also visible around (0.6,1.75,2.28), in Angstrom.
In comparing with [Amato14] keep in mind that the present examples has a very reduced number of k points and a low plane wave cutoff energy, just enough to show the right extrema in the shortest CPU time. | https://wiki.fysik.dtu.dk/gpaw/tutorials/muonsites/mnsi.html | CC-MAIN-2020-05 | refinedweb | 516 | 53.58 |
For detailed training, please see the Brian Wren MVA series. There are a number of relevant chapters for this including:
- Creating a Management Pack solution
- An introduction to classes
- Building Classes and Relationships
- Registry Discoveries
So first, lets create our Management Pack Solution
1. Fire up Visual Studio and Choose File, New, Project
For the example I'm going to use the following. Feel free to use something different and perhaps map your changes here.
Name: GD.MyApp
Solution Name: GD.MyApp
There are many different ways of organising your work in Visual Studio. Some people like to create a folder structure like the following for organising the fragments:
However, I prefer this (note - there is no "right" answer here). I find it easier to segregate work in this manner as all the code for a specific component is in one place rather than spread potentially across separate folders for classes, relationships, discoveries and views.
Now lets look at the fragment that creates our class and registry based discovery.
I'm going to assume that we are looking to monitor an application that runs on one or more servers and that we'll identify whether that application is running by checking for a registry key.
Additionally, I want the health of the application to roll up to the windows computer that hosts the application. So I'll use a base class of Microsoft.Windows.LocalApplication - we could also choose Microsoft.Windows.ComputerRole as both will automatically roll up health. This
Right Click LocalAppClassAndDiscovery and choose Add, New Item
Add Empty Management Pack Fragment and choose a name of GDApplicationServer (again, feel free to use different names!). Server
Replace With: GD My App Server
Also, look for the following and change the path to the registry key that needs to be discovered (Remember, this is always under HKEY_LOCAL_Machine)
<Path>SOFTWARE\SCOMDiscoveryData\RegistryKeyName</Path>
Finally, lets configure the Management Pack properties
Go to Project, GD.MyApp properties (the name will be different if you have used a different Management Pack Name). Enter a more Friendly Name for the Management Pack and clear the default namespace.
Then click on the blue line (hyper link) at the bottom of this window and Change the Name and Description Tags to show the Application Name
<Name>GD My App Management Pack</Name>
<Description></Description>
Build Tab
Make the check box “Generate sealed and signed management pack” is selected.
Complete the Company Name and Copyright text boxes.
Browse through to the key file.
Management Group Tab
Set the Default Management Group. You can add in new Management Groups as required. This allows you to push to Dev and then when you are happy push the same Management Pack to Production without any Dev specific groups or views.
Deployment
Rather than having to manually import management packs, you can select “Deploy projects to default management group only”
Check Code and Export to Management Group
To validate the code, choose Build, Build from the menu bar.
If the project builds successfully then you can click on Start and this will export it into the default Management Group.
Note:
There is a backward compatibility check for sealed Management Packs that will fail if:
- You have deleted classes from a previous version of the Management Pack
- You have not incremented the version number
To check that the discovery is successful, scope Discovered Inventory to the class GD.MyApp and make sure that a server is listed (it will be listed as Not Monitored as we have no monitoring targeted at this class)..
As promised, here is a run through of authoring Management Packs using Visual Studio and the Operations | https://blogs.technet.microsoft.com/manageabilityguys/2015/08/24/visual-studio-management-pack-authoring-series-part-3-create-a-class-based-on-windows-local-application-along-with-a-registry-discovery/ | CC-MAIN-2017-30 | refinedweb | 608 | 50.87 |
NEW UPDATE - I've got the code working now. Check the latest post. Thanks to everyone for your help. I'm sorry to have bothered you with my ignorance -_-'.
Okay, small update...the new code I have is:
Now, this is a working version (one that compiles perfectly fine, that is). Mainly, there are two problems with this one:Now, this is a working version (one that compiles perfectly fine, that is). Mainly, there are two problems with this one:Code:#include <cstdlib> #include <iostream> using namespace std; int main() { int y = 0; int z = 0; for (int x = 0; x == 1; z++) { cout<< "Enter 0 to exit or 1 to continue."; cin>> y; if (y == 1) { if (x == 0) { x++; } else { return x; } cout<< "You have repeated this process "<< z <<" times. \n"; } else { return 0; } } cin.get(); }
It doesn't display the line I want displayed (Enter 0 to exit it 1 to continue) and terminates no matter what I enter,
and it exits no matter what I enter as a value.
So, my questions would really be:
1. What's causing the program to not display my line of text defined at cout<< ?
2. If those compiler errors were fixed, would the resulting program run like I want it to?
(and, if the answer to 2 is a no, and you're feeling generous: Why wouldn't it?)
I know I might be asking a lot...but I have fiddled with this for over 4 hours and I'm getting really, really stressed here -_-'.
Thank you so much for all your help!
P.S.: in case anyone's wondering...I added the 'return 0' on the 'else' statement because I thought returning 0 to 'main' would end the program...was that just a totally stupid thought? | http://cboard.cprogramming.com/cplusplus-programming/88117-what-do-please-help.html | CC-MAIN-2015-48 | refinedweb | 300 | 83.15 |
Hello,
I created a boot file and loaded it inside the quasar config file.
But what I cannot work out is how to create a function that I can reuse in any vue.js files?
Would you have a short example please?
Thanks
- s.molinari last edited by s.molinari
Just attach it to the Vue prototype in your boot file, similar to how Axios is added.
import myFunction from 'src/utils/myFunction' export default ({ Vue }) => { Vue.prototype.$myFunction = myFunction }
You can then access the function via
this.$myFunction.
It’s always good practice to add the
$to show it’s a custom function and additional to Vue.
Scott
Hi Scott,
Superb, thank you. Let me give it a shot.
Strangely I cannot get it to work.
I created a new folder in:
src/functions/functions.js
and added this inside:
const myFunction = function () { let min = 1; let max = 65000; return Math.floor(Math.random() * (max - min + 1)) + min; }; export default myFunction;
Now in my boot/globalFunctions.js
I have:
import { myFunction } from '../functions/functions'; export default async ({ Vue }) => { Vue.prototype.$myFunction = myFunction; };
I keep getting this error:
76:10 error myFunction not found in ‘…/boot/globalFunctions’ import/named
Try without {} in your import
import myFunction from '../functions/functions'
Thanks Yes I just tried it and now I am getting something. But, the function, instead of giving me a return, it output the whole function as text in my vue file:
like this:
function myFunction() { var min = 1; var max = 65000; return Math.floor(Math.random() * (max - min + 1)) + min; }
When used this way:
data () { return { myFunction: this.$myFunction } }
oh my bad…what an idiot I am…I output the function as {{ myFunction }} oups~! sorry
Still, I was expecting to see my function return when doing this:
{{ myFunction }} data () { return { myFunction: '' } }, mounted () { this.myFunction = this.$myFunction; },
But all I see is the output of what is written inside the function as a string and not the return calculations, any idea why please? I am lost on this one.
Ok My bad, another mistake…the export had to be like this:
from export default myFunction; to export default myFunction();
- s.molinari last edited by
Looks like you might need some JS basic training. You should be invoking the function during the assignment to your data prop and not during the export.
i.e.
mounted () { this.myFunction = this.$myFunction(); },
Notice the parenthesis…
And theoretically, wherever you need the function results in your template, with the global, you can just call it in your template.
i.e.
{{ $myFunction() }}
No need to put it into a data prop first.
Scott
I from the PHP world, still trying to get into JS. Thanks for this.
- s.molinari last edited by
Actually, the above is partially basic JS and partially basic VueJS. You should learn both well. Then Quasar becomes a powerhouse for you.
Scott
The whole export / import thing is not something I learned years ago in javascript(8 to 10 years ago). Getting used to it now.
I love Quasar, this is my second app built with it.
What I like about Quasar is how it is organized, especially with Vuex and how you can easily build an app that works on Cordova or an HTTP server.
I am not really a big fan of pure javascript, to be honest, but I like to use Vue.js and the way things are shortened(when used with lodash). A lot of things are happening behind the hood which I believe would be difficult to built-in pure JS. I just stick to CRUDs and Restful APIs, that’s good enough for me to build what I need to build. | https://forum.quasar-framework.org/topic/4265/how-to-create-global-functions-in-quasar-v1 | CC-MAIN-2021-39 | refinedweb | 614 | 66.23 |
Python: split list into chunks
Today I found 30 sec. of python code repository, that claims that every snippet there can be easily understood and copy/pasted by new developers.
This project contains plenty of useful snippets which can help beginners and newcomers quickly ramp-up on grasping python 3's syntax.
In which I strongly disagree. Take this example - a function to split list to an equal chunks
from math import ceil def chunk(lst, size): return list( map(lambda x: lst[x * size:x * size + size], list(range(0, ceil(len(lst) / size)))))
If I was reviewing this code, I definitely recommend refactoring it. In my opinion, it is not clear enough what is going on and more - confusing to a newbie, because division in Python 2 and Python 3 works differently.
How can we make it better? This is how I feel to rewrite it:
from __future__ import division from math import ceil def chunk(items, size): # ceil returns float and range don't like it stop = int(ceil(len(items) / size)) return [ items[slice(i * size, i * size + size)] for i in range(0, stop) ] chunk([1, 2, 3], 2) # [[1, 2], [3]]
As a bonus, the same concept in JavaScript (yes, I know that original repository was about JavaScript all along). Sometimes I feel like switching languages clears the mind.
const range = function(n) { return Array.from(n).keys() }; const chunk = function(items, size) { const stop = Math.ceil(items.length / size); return range(stop).map(function(i) { return list.slice(i * size, i * size + size) }) }; chunk([1, 2, 3], 2) // [[1, 2], [3]] | https://aalekseev.me/python-split-list-into-chunks.html | CC-MAIN-2019-22 | refinedweb | 268 | 71.55 |
21 November 2012 17:35 [Source: ICIS news]
(updates with Canadian and Mexican data)
HOUSTON (ICIS)--Chemical shipments on Canadian railroads fell by 9.6% year on year for the week ended 17 November, marking their 34th decrease so far this year, according to data released by a rail industry association on Wednesday.
Canadian chemical railcar loadings for the week totalled 9,451, compared with 10,456 in the same week in 2011, the Association of American Railroads (AAR) said.
The previous week, ended 10 November, saw a 5.6% year-on-year increase in chemical shipments after four 17 November, Canadian chemical railcar loadings were down by 6.0% year on year, to 480,653.
The AAR that said weekly chemical railcar traffic in ?xml:namespace>
US chemical railcar traffic fell by 2.9% year on year in the week ended 17 November, marking its 29th decline so far this year.
There were 28,080 chemical railcar loadings last week, compared with 28,919 in the corresponding week in 2011. In the previous week, ended 10 November, US weekly chemical railcar loadings rose by 2.5%.17 | http://www.icis.com/Articles/2012/11/21/9616637/canada-weekly-chemical-railcar-traffic-falls-9.6.html | CC-MAIN-2015-22 | refinedweb | 188 | 55.64 |
22
Debugging with RxTimelane
Written by Marin Todorov
In this short chapter, you will learn the basics of debugging RxSwift code with Timelane. Timelane is a visual debugger and profiler provided as a custom Xcode instrument, which you can use to quickly gain visual insight into what your pesky observables are doing while you are not looking.
Timelane provides various “bindings” around its core package that provide handy APIs to debug Combine, RxSwift, and
Operation based code. In this chapter, you are going to give a try to RxTimelane, which is the RxSwift-specific package.
Installing the Timelane Instrument
Before getting started, you’ll need to install Timelane, which you can get from.
Once installed in your Applications folder, open Timelane and install the Timelane Instrument by clicking on the package icon:
This will spawn a standard Instruments installation dialog; click “Install”:
This will install the Timelane Instrument alongside your standard instruments like Zombies, Time Profiler, Core Animation, etc:
Using the RxTimelane library
The second step you need to take before getting started with debugging is to include the RxTimelane package in your project.
pod 'RxTimelane', '1.0.9'
Installing RxTimelane (1.0.9) Installing TimelaneCore (1.0.10)
The lane(…) operator
Open the starter project for this chapter. In MainViewController.swift add a new import at the top of the file:
import RxTimelane
images .lane("Photos") .throttle(.milliseconds(500), scheduler: MainScheduler.instance)
.lane("Photos", transformValue: { "\($0.count) photos" })
Tracking multiple subscriptions
Try logging more subscriptions to Timelane. You can use
lane as much as you like. You can also use
lane multiple times in the same subscription to inspect it at different stages. Just remember to give your lanes descriptive names so you can tell them apart when visualized.
let newPhotos = photosViewController.selectedPhotos
let newPhotos = photosViewController.selectedPhotos .lane("New Photos") .share()
Inspecting values over time
To wrap up this very quick introduction to Timelane, see how you can inspect in a little more detail the values emitted by one of your observables.
import RxTimelane
let authorized = PHPhotoLibrary.authorized .lane("Photo Library Auth") .share()
Where to go from here?
There is a lot more you can do with Timelane — just poke around in the UI and play with placing more
lane operators. For more information and documentation, visit the official repo the project at. | https://www.raywenderlich.com/books/rxswift-reactive-programming-with-swift/v4.0/chapters/22-debugging-with-rxtimelane | CC-MAIN-2021-04 | refinedweb | 384 | 56.66 |
hello,
im trying to make something using quotes in a string which is then used by system(d.c_string)
heres the code:
#include <iostream.h> #include <string> using namespace std; int main() { string a,b,c,d; a="del C:\\users\\robocop\\desktop\\\""; cin >> b; c="\""; d=a+b+c; system(d.c_str()); }
Lets say when running i type 'a b c.txt'
What im trying to get here then is the command
del C:\users\robocop\desktop\"a b c.txt"
But when I run it, it says: "cant find C:\users\robocop\desktop\a"
So apparantly it didnt put the quotes around a b c.txt when string d was used as d.c_str(), because those quotes were for the command line console to read a b c.txt as one filename.
Does anyone know what im doing wrong here? :S | https://www.daniweb.com/programming/software-development/threads/143052/using-system-with-quotes-is-messed-up-s | CC-MAIN-2018-13 | refinedweb | 143 | 85.18 |
Creating.
But luckily, React forms don’t have to be awkward. In fact, they can be downright easy. And the trick is simple: you just need to understand that form state isn’t like other state.
Three types of state
As it turns out, the problem is that web applications usually have three types of state, and we only talk about two of them.
1. View State
View state includes anything that is associated with the DOM itself. Examples include the state of animations and transitions.
React component state is great is a perfect way to store it, as each React component is tied to a DOM node.
2. Environment state
Environment state includes anything that is relevant throughout your entire application. Examples include the current user’s details, received data, or in-progress requests.
Redux is great at storing environment state, as you generally only have a single Redux store, which can be accessed anywhere via React context.
3. Control state
Control state includes anything associated with user interactions. Examples include the state of forms, selected items, and error messages.
The thing about control state is that it is associated with a specific part of the application, which may be loaded, disposed, or re-used in multiple places. This means that it isn’t environment state, and it doesn’t fit well in a global Redux store.
And while you can put form state in React components, it really doesn’t belong in them. After all, sometimes you want to keep your form state around after the DOM nodes disappear. Sure, you could lift your state up, except now you can’t easily re-use the logic. And isn’t React about reusable components?
But… what if you could define components that weren’t tied to the DOM?
Introducing Govern
Govern is a library for managing state with store components. These are a lot like React components – they can receive props, call
setState, and define lifecycle methods. They can be defined as classes, or as stateless functions.
But instead of rendering elements, store components publish raw JavaScript, so they’re not tied to the DOM.
And the best thing? If you know React, then you already know most of Govern’s API, so you’ll be productive in no time. In fact, by the end of this short guide, you’ll be able to build a form with validation and connect it to a JSON API — and you’ll barely need to learn a thing!
So let’s get started by creating your first store component for managing form state.
Defining store components
Govern components are just JavaScript classes that extend
Govern.Component. Like React components, they have props, state, and lifecycle methods.
For example, here’s how you’d create a
Model component that handles state and validation for individual form fields:
import * as Govern from 'govern' class Model extends Govern.Component { static defaultProps = { defaultValue: '' } constructor(props) { super(props) this.state = { value: props.defaultValue, } } publish() { let value = this.state.value let error = this.props.validate ? this.props.validate(value) : null return { value: this.state.value, error: error, change: this.change, } } change = (newValue) => { this.setState({ value: newValue, }) } }
Govern components have one major difference from React components: instead of
render(), they take a
publish() method. This is where you specify the component’s output, which will be computed each time the component’s props or state change.
Subscribing to stores
Now that you have a Model component, the next step is to subscribe to its published values from inside of your React app.
Govern exports a
<Subscribe to> React component to handle this for you. This component takes a Govern element for its
to prop, and a render function for its
children prop. It calls the render function with each new published value — just like React’s context API.
Here’s a barebones example that connects a
<Model> to an input field, using
<Subscribe>.
import * as React from 'react' import * as ReactDOM from 'react-dom' import { Subscribe } from 'react-govern' ReactDOM.render( <Subscribe to={<Model validate={validateEmail} />}> {emailModel => <label> Email: <input value={emailModel.value} onChange={e => emailModel.change(e.target.value)} /> { emailModel.error && <p style={{color: 'red'}}>{emailModel.error}</p> } </label> } </Subscribe>, document.getElementById('root') ) function validateEmail(value) { if (value.indexOf('@') === -1) { return "please enter a valid e-mail" } }
Try it live at CodeSandbox »
Adding Govern to create-react-app
If you’d like to follow along and create your own app — which I’d highly recommend — then all you need to do is create a new app with
create-react-app, and then add govern:
create-react-app govern-example cd govern-example npm install --save govern react-govern npm run start
Once you’ve got Govern installed, you can just copy the above two code blocks into
index.js, and you’re set! If you run into any trouble, take a look at the live example.
Did you get it to work? Then congratulations — you’ve just learned a new way to manage state! And all you need to remember is that:
- Store components extend from
Govern.Componentin place of
React.Component.
- Store components use a
publishmethod in place of
render.
- You can subscribe to store components with
These three things will get you a long way. But I promised that you’d make a form component, and so far I’ve only demonstrated a field component…
Combining stores
Govern components have one special method that doesn’t exist on React components —
subscribe().
When a component’s
subscribe() method returns some Govern elements, the component will subscribe to those elements, placing their latest published values on the component’s
subs instance property. You can then use the value of
this.subs within the component’s
publish() method, allowing you to combine store components.
For example, you could combine two
<Model> elements to create a
<RegistrationFormModel> component:
class RegistrationFormModel extends Govern.Component { static defaultProps = { defaultValue: { name: '', email: '' } } subscribe() { let defaultValue = this.props.defaultValue return { name: <Model defaultValue={defaultValue.name} validate={validateNotEmpty} />, email: <Model defaultValue={defaultValue.email} validate={validateEmail} />, } } publish() { return this.subs } } function validateNotEmpty(value) { if (!value) { return "please enter your name" } }
Field view components
One of the benefits of using the same
<Model> component for each field is that it makes creating reusable form views simpler. For example, you could create a
<Field> React component to render your field models:
class Field extends React.Component { render() { return ( <label style={{display: 'block'}}> <span>{this.props.label}</span> <input value={this.props.model.value} onChange={this.handleChange} /> { this.props.model.error && <p style={{color: 'red'}}>{this.props.model.error}</p> } </label> ) } handleChange = (e) => { this.props.model.change(e.target.value) } } ReactDOM.render( <Subscribe to={<RegistrationFormModel />}> {model => <div> <Field label='Name' model={model.name} /> <Field label='E-mail' model={model.email} /> </div> } </Subscribe>, document.getElementById('root') )
Try it live at CodeSandbox ».
Stateless functional components
You’ll sometimes find yourself creating components that just
subscribe() to a few elements, and then re-publish the outputs without any changes. Govern provides a shortcut for defining this type of component: just return the elements you want to subscribe to from a plain function — like React’s stateless functional components.
For example, you could convert the above
<RegistrationFormModel> component to a stateless functional component:
const RegistrationFormModel = ({ defaultValue }) => ({ name: <Model defaultValue={defaultValue.name} validate={validateNotEmpty} />, email: <Model defaultValue={defaultValue.email} validate={validateEmail} /> }); RegistrationFormModel.defaultProps = { defaultValue: { name: '', email: '' } }
Try it live at CodeSandbox ».
Calling a JSON API
Once you have some data in your form, submitting it is easy — you just publish a
submit() handler along with the form data. Everything you know about handling HTTP requests in React components transfers over to Govern components.
However, Govern does give you an advantage over vanilla React — your requests can be components too!
For example, this component takes the request body as props, makes a request in the
componentDidInstantiate() lifecycle method, and emits the request status via the
publish() method.
Note: I use the axios package here to simplify the network code. If you’re following along, you can install it with
npm install --save axios.
import * as axios from "axios"; class PostRegistrationRequest extends Govern.Component { state = { status: 'fetching', } publish() { return this.state } componentDidInstantiate() { axios.post('/user', this.props.data) .then(response => { if (!this.isDisposed) { this.setState({ status: 'success', result: response.data, }) } }) .catch((error = {}) => { if (!this.isDisposed) { this.setState({ status: 'error', message: error.message || "Unknown Error", }) } }); } componentWillBeDisposed() { this.isDisposed = true } }
You can then make a request by subscribing to a new
<PostRegistrationRequest> element.
class RegistrationFormController extends Govern.Component { state = { request: null }; subscribe() { return { model: <RegistrationFormModel />, request: this.state.request }; } publish() { return { ...this.subs, canSubmit: this.canSubmit(), submit: this.submit }; } canSubmit() { return ( !this.subs.model.email.error && !this.subs.model.name.error && (!this.subs.request || this.subs.request.status === "error") ); } submit = e => { e.preventDefault(); if (this.canSubmit()) { let data = { email: this.subs.model.email.value, name: this.subs.model.name.value }; this.setState({ request: ( <PostRegistrationRequest data={data} key={new Date().getTime()} /> ) }); } }; } ReactDOM.render( <Subscribe to={<RegistrationFormController />}> {({ canSubmit, model, request, submit }) => ( <form onSubmit={submit}> {request && request.status === "error" && ( <p style={{ color: "red" }}>{request.message}</p> )} <Field label="Name" model={model.name} /> <Field label="E-mail" model={model.email} /> <button type="submit" disabled={!canSubmit}> Register </button> </form> )} </Subscribe>, document.getElementById("root") );
Try it live at CodeSandbox ».
Did you see how the
key prop is used in the above example? Just like React, changing
key will result in a new component instance being created, and thus a new request being made each time the user clicks “Register”.
While request components can take a little getting used to, they have the benefit of being able to publish multiple statuses over time — where promises can only publish one. For example, you could publish
disconnected or
unauthenticated states, along with a
retry() action to give the request another crack.
Request components also make it easy to share communication logic within and between applications. For an example, I use a
<Request> component within my own applications.
Performance note: selecting data
Govern’s
<Subscribe> component needs to call its render prop each time that any part of its output changes. This is great for small components, but as the output gets larger and more complicated, the number of re-renders will also grow — and a perceivable delay can creep into user interactions.
Where possible, you should stick to
<Subscribe>. But in the rare case that there is noticeable lag, you can use Govern’s
<Store of> component to instantiate a Store object, which allows you to manually manage subscriptions.
Once you have a Store object, there are two ways you can use it:
- You can access the latest output with its
getValue()method.
- You can return the store from a component’s
subscribe()method, and then republish the individual parts that you want.
For example, here’s how the above example would look with a
<Store of> component. Note that this adds a fair amount of complexity — try to stick to
class MapDistinct extends Govern.Component { subscribe() { return this.props.from } shouldComponentUpdate(nextProps, nextState, nextSubs) { return nextProps.to(nextSubs) !== this.props.to(this.subs) } publish() { return this.props.to(this.subs) } } const Field = ({ model, label }) => ( <Subscribe to={model}> {model => ( <label style={{ display: "block" }}> <span>{label}</span> <input value={model.value} onChange={e => model.change(e.target.value)} /> {model.error && <p style={{ color: "red" }}>{model.error}</p>} </label> )} </Subscribe> ); ReactDOM.render( <Store of={<RegistrationFormController />}> {store => ( <form onSubmit={store.getValue().submit}> <Subscribe to={<MapDistinct from={store} to={output => output.request} />} > {request => request && request.status === "error" ? ( <p style={{ color: "red" }}>{request.message}</p> ) : null } </Subscribe> <Field label="Name" model={<MapDistinct from={store} to={output => output.model.name} />} /> <Field label="E-mail" model={<MapDistinct from={store} to={output => output.model.email} />} /> <Subscribe to={<MapDistinct from={store} to={output => output.canSubmit} />} > {canSubmit => ( <button type="submit" disabled={!canSubmit}> Register </button> )} </Subscribe> </form> )} </Store>, document.getElementById("root") );
Try it live at CodeSandbox ».
Note how the above example uses
getValue() to access the
submit() action, but uses
<Subscribe> elsewhere. This is because we know that
submit won’t change, and thus we don’t need to subscribe to future values.
Also note how the selector component defines a
shouldComponentUpdate() method. If this wasn’t defined, then each update to the
from store would cause a new publish — even if the published value didn’t change! Defining
shouldComponentUpdate() gives you control over exactly which changes cause a publish — just like with React.
Built-in components
Govern has a number of built-in elements to help you reduce boilerplate and accomplish common tasks. These are particularly useful for creating selector components.
The three built-ins that you’ll use most often are:
<map from={Store | Element} to={output => mappedOutput} />
Maps the output of
from, using the function passed to
to. Each publish on the
from store will result in a new publish.
<flatMap from={Store | Element} to={output => mappedElement} />
Maps the output of
from, using the output of whatever element is returned by
to. Each published of the mapped element results in a new publish.
<distinct by?={(x, y) => boolean} children>
Publishes the output of the child element, but only when it differs from the previous output. By default, outputs are compared using reference equality, but you can supply a custom comparison function via the
by prop.
For example, you could use the
<flatMap> and
<distinct> built-ins to rewrite the
<MapDistinct> component from the previous example as a stateless functional component.
const MapDistinct = props => ( <distinct> <map from={props.from} to={props.to} /> </distinct> );
Try it live at CodeSandbox ».
What next?
Believe it or not, you’ve just learned enough about Govern to make useful applications with it. Congratulations!
In fact, I already use Govern in my own projects. I wouldn’t yet recommend it for critical applications, but with your help, it will continue to steadily involve.
If you do find any issues while using Govern, and find the time to file an issue, you’ll be my new best friend. I’m also extremely open to pull requests, and would be happy to help get the word out about any components you’d like to share.
Finally, if you like what you see, please let me know with a star on GitHub! And if you want to hear more about Govern, join my newsletter below – I’ll even throw in a few PDF cheatsheets as thanks!
I will send you useful articles, cheatsheets and code.
Thanks for reading, and until next time, happy Governing! | http://jamesknelson.com/sensible-react-forms-with-govern/ | CC-MAIN-2020-16 | refinedweb | 2,408 | 50.63 |
Hello, I have a page with a "dataset1" (it has a sort of Category A-Z), the repeater is called"repeater2" and database called "Categories"( it is sorted by category alphabetically). The field I am trying to display has a key name of "category". I have tried many approaches and have read all of the articles I could find on the subject.
The code is as follows:
import { local } from 'wix-storage'; import wixData from 'wix-data'; import wixLocation from 'wix-location'; $w.onReady(() => { }); export function dataset1_ready_1() { wixData.query("Categories") .limit(300) .find() .then(results => { $w("#repeater2").data = Array.from(results.items.reduceRight((m, t) => m.set(t.category, t), new Map()).values()); }) }
The result are as follows: You will notice it is sorted until it reaches the 25th display on ???
It;s hard to understand what you're trying to do. You have a dataset but you don't use it, and instead you pull the data directly from the collection and then you process it in a complicated way that is not clear to outsiders.
Anyway, if you want the data to be sorted alphabetically, then add to your query:
J.D. THANK YOU.........SOLVED:
Added 1) .ascending("category") and 2) .reverse()
New Code that works:
export function dataset1_ready_1() { wixData.query("Categories") .ascending("category") .limit(300) .find() .then(results => { $w("#repeater2").data = Array.from(results.items.reduceRight((m, t) => m.set(t.category, t), new Map()).values()).reverse(); }) }
J.D. Thank you for the quick response. I will try your suggestion.
Let me explain further. I am eliminating duplication of Categories with the code . I have Categories with many subcategories in the database.
For example: a Category is Bathroom has the Subcategories of Bathroom design, Bathroom installation, Electrical, Plumbing, Cabinets, Tiling, Sinks, Toilets and Bathtubs. One Category with 9 Sub Categories. I then want to display the repeater Categories alphabetically.
You can see the page on my live site at:.
I hope this clarifies things further.
Kindest Regards,
Bill
I see. It's a really complicated way to omit duplicates. There're simpler ways.
But if it works for you then all is good.
J.D. If you have a easier way I am interested. I am always trying to improve my coding and I love WIX.
Kindest Regards,
Bill
OK. So, for example:
J.D. Thank you Again.
I will try this code and let you know if I am successful. Your code is so much easier to understand and I appreciate that.
Kindest Regards,
Bill
Also you may consider using the query alone or the dataset alone. there's no reason to go to the back end twice.
Thank you J.D. Coding and testing now. Bill
Hi J.D. tried code below and got the wrong results, everything is duplicate on display?:
export function dataset1_ready_1() { wixData.query("Categories") .ascending("category") .limit(300) .find() .then(res => { let items =res.tems; let categories = items.filter((o, index) => items.findIndex(e => e.category === o.category) === index); $w("#repeater2").data = categories; })
Back to first solution. Thank you for your help. Bill
OK, I don't see any problem wit the code you posted (except for the lack of closing } at the end of the function which you probably just didn't copied to here).
Anyway as long as your first code works, it's alright.
@J. D. Thank you for checking.I will check again this afternoon and let you know. Bill | https://www.wix.com/corvid/forum/community-discussion/repeater-not-sorting-alphabetically-with-dataset | CC-MAIN-2020-05 | refinedweb | 573 | 62.04 |
appsettings
Sometimes you want to read a setting from the command line. Sometimes you want to read it from an environment variable. Sometimes you want to read it from a config file.
And in some very special times, you want to allow the value to be passed using some combination of the three.
The appsettings module provides an argparse subclass that allows pulling settings from the command line, environment variables, or a yaml config file.
If the same value is provided in several of those locations, then env var always beats config file, and command line always beats everything.
Usage is exactly the same as argparse, with the addition of some new kwargs on initializing the parser and adding arguments to it. Example:
from appsettings import SettingsParser f = open('some_config_file.yaml') parser = SettingsParser(yaml_file=f) parser.add_argument('--color', default='blue', env_var='FAVCOLOR') args = parser.parse_args() print args.color # If you've set the FAVCOLOR environment variable you should now see its # value printed to the console. Otherwise you'd see 'blue'
Things to Know
Options Only
Only long form arguments like "--color" will provide env var and config file fallbacks. Positional arguments and short options like "-c" will behave just like they do in the argparse module.
APP_SETTINGS_YAML
If you don't provide a yaml_file argument to the SettingsParser constructor, and the APP_SETTINGS_YAML environment variable is set, then that file will be read and parsed to provide settings. (Though they'll still be overridable by environment variables and command line options.) | https://bitbucket.org/btubbs/appsettings/src | CC-MAIN-2015-22 | refinedweb | 251 | 56.45 |
Every day, more and more developers are being hired based on their Swift skills. Apple is committed to Swift and Swift is the future. Not a day goes by without some developer wandering by for Swift peer support with “I have to use Swift” because it’s in the contract.
I have shipped only one Swift app for live App Store sale and I absolutely love Swift. So how do you break the news to your client, your boss, or that new guy on your team who saw a few WWDC videos that Swift is better suited for a long-term investment than for short-term development tasks?
From a smart manager’s point of view, existing code bases, a known stable development path, and a pool of trained developers is more valuable than a scary new language still in development. The lower the risk of failure the better: deadlines are everything when it comes to bonuses, promotions, and job stability. When you have a stable Objective-C code base, why risk your credibility, your job, and your career for an unstable language and API? Experienced managers are the easy sell.
But more and more, people are coming to me and saying, “It’s next to impossible to explain this and sell Objective-C inside any large group or corporation. They look at you as if you’re crazy. Apple says Swift is now. They don’t get the push back.” Apple’s message of “Swift now. Swift for production” is becoming a big issue for developers.
Unless you can express a strong message of investment, stability, risk, and reward you’re going to be in trouble. I’m not saying “Don’t develop in Swift.” For many developers, the language benefits outweigh the refactoring costs that will be incurred over the next few years. It’s the people who don’t see the full picture and timeline that are and will be struggling, the ones jumping in without properly seeing warning signs.
It takes a good six months or so to retrain your brain into Swift development patterns. It will take several more years for the language to stabilize. When you think “Swift”, you shouldn’t be thinking quick-hit-then-walk-away projects. A Swift project means a long term commitment, unless you never plan to re-use any source code, fix any bugs, or provide any upgrades to your apps.
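To make those migration costs concrete, here is the kind of source-breaking change Swift developers have already absorbed: between Swift 1.x and 2.x, Cocoa error handling moved from `NSError` out-parameters to `do`/`try`/`catch`, and `println()` became `print()`. (A representative sketch for illustration — not code from any project mentioned in this post.)

```swift
import Foundation

// Swift 1.x: errors came back through an NSError pointer.
var readError: NSError?
let oldText = String(contentsOfFile: "notes.txt",
                     encoding: NSUTF8StringEncoding,
                     error: &readError)

// Swift 2.x: the same initializer now throws, so the code above
// no longer compiles and every call site needs rewriting.
do {
    let text = try String(contentsOfFile: "notes.txt",
                          encoding: NSUTF8StringEncoding)
    print(text)   // println() was renamed to print() as well
} catch {
    print("Read failed: \(error)")
}
```

Multiply that rewrite by every call site that touched an `NSError` pointer and you have the refactoring budget a manager needs to plan for.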
This isn’t the first time the industry has met this issue: think about how C# developed over its first decade. I’m told that many programmers found little projects, worked through them, and gradually built libraries and codebases to minimize maintenance costs over time. Now, as then, training and acquiring developers in anticipation of language stability is and was a challenge.
It comes down to this. If someone is pushing hard for Swift for full apps or critical production code, make sure they know the commitment they’re buying into, along with the associated migration and core refactoring costs. Otherwise, Objective-C is still delivering product, and will continue to.
Thanks Mike Prenez-Isbell, Director of Mobile for Univision Television, and the other unnamed developers who spent time chatting with me about this topic.
18 Comments
Another tangent on this topic to consider: if you are building stuff in Swift for a client now, be more cautious about the features you adopt to minimize migration later. I’m on a large project that started using Swift almost at launch, but we were careful about moving into Swift gradually and letting new features bake for a while, and as a result we’ve had no more than a few hours so far of migration between Swift versions.
I think Swift can be a great choice to start with now, just be mindful of how the language will evolve as you build today – even easier now that the evolution is public!
What do you mean by “stable”? It’s a dead language that will not change? You don’t want to deal with migrating Swift 2.x to 3.x code so you’re better off in the dead language because you won’t have to touch the code again?
Stable means that your source code won’t break with new changes and it will keep on compiling. Obj-C has introduced many new features over the past few years: properties, literals, blocks, ARC, lightweight generics, etc. These are huge, but none of them broke existing code. Thus ObjC is live, evolving, and stable.
Ok, so we all agree that Swift will have breaking changes. Of course, the importance of that is debatable. If you are uncomfortable with that then, sure, avoid using Swift. It is certainly not a deal breaker for many developers. I’d rather have 100,000 lines in safer, more maintainable Swift than in Objective C. Also, my bet is that the amount of work writing Swift and then porting it to the next version is less than the work of writing Objective C, which is still quite verbose with all those header files, @properties, and code in general.
NSString *s = @"Swift is the future";
UIViewController *vc = [[UIViewController alloc] init];
NSArray *names = @[@"John", @"Paul", @"George", @"Ringo"];
NSMutableDictionary *dict =
vs
let s = "Swift is the future"
let vc = UIViewController()
let names = ["John", "Paul", "George", "Ringo"]
You are not factoring in the sheer amount of time it takes to properly convert and maintain code, nor the points that Erica raised about switching mental models. These things take time and effort, and thus money.
OK, so you love Swift. That’s great. But Swift is still essentially in Beta. Meanwhile, our apps have to go OUT of Beta and in to the real world, hopefully on time and on budget.
Of course I factored it in. Converting Objective C to Swift is not difficult. Of course there’s absolutely no need to convert the code. You can simply write new code in Swift:
extension MyViewController {
}
Switching context is painful. You are digging a bigger hole by sticking with Objective C for the next 12 months. You are making the case to switch early or stick with Objective C forever. Since the same functionality in Swift is 30% smaller, the extra code you don’t write in Swift is also money you save.
Simply look at this code:
@import UIKit;
#import "ViewController1.h"
#import "ViewController2.h"
#import "MyDataModel.h"
NSString *s = @"Swift is the future";
UIViewController *vc = [[UIViewController alloc] init];
NSArray *names = @[@"John", @"Paul", @"George", @"Ringo"];
NSDictionary *ages = @{@"John": @(1940), @"Paul": @(1942), @"George": @(1943), @"Ringo": @(1940)};
vs
import UIKit
let s = "Swift is the future"
let vc = UIViewController()
let names = ["John", "Paul", "George", "Ringo"]
let ages = ["John": 1940, "Paul": 1942, "George": 1943, "Ringo": 1940]
Swift is less visually noisy. Now, imagine 100,000 lines of code in each language. Which is more maintainable? Less code and more maintainable code directly translate to a cost saving.
I’m going to put my argument in a blog where I can update my answer. I’ve heard it all before. The “old guard” is wrong on this one.
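For readers wondering what the extension-based approach sketched in this thread looks like in practice, here is a minimal mixed-project example. (The class and method names are hypothetical; it assumes `MyViewController` is an existing Objective-C class made visible to Swift through the project's bridging header.)

```swift
import UIKit

// NewFeatures.swift, added to an existing Objective-C project.
// MyViewController is an existing Objective-C class; nothing in its
// .h/.m files has to change.
extension MyViewController {
    func showWelcomeMessage() {
        let alert = UIAlertController(title: "Welcome",
                                      message: "This feature was written in Swift.",
                                      preferredStyle: .Alert)   // Swift 2-era capitalized case names
        alert.addAction(UIAlertAction(title: "OK", style: .Default, handler: nil))
        presentViewController(alert, animated: true, completion: nil)
    }
}
```

Because methods in extensions of Objective-C classes are exposed back through the generated `-Swift.h` header, existing Objective-C code can call `showWelcomeMessage()` without converting anything.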
@h4labs I tend to agree with what you have said. I started my first production app with Swift 1.1. I upgraded it first to Swift 1.2. No major issues. I don’t think anything broke. The migration went smooth. Then, I upgraded the app to Swift 2.1. This was a more involved migration, but Apple’s migration tool took care of a large portion of it. I had more manual migration to do, but I expected that. It was a semi-major update to the language. My app does some low-level binary data packing, Bonjour, TCP/IP networking (using CocoaAsyncSocket), and uses a proprietary 3rd party Objective-C dynamic library. I expected any major problems to be in those areas of code. In fact, there were little, if any changes required in that code. It just worked. Now, my opinion might change when Swift 3.0 is out and I have to do that upgrade, but I doubt it. Luckily, the code base size of this particular app is not that large. I can see where chasing Swift version upgrades would be more of an issue with much larger Swift app projects.
TBH, I don’t care if you think in terms of “old guard” or not. I don’t see myself as “old guard”, “new guard” or anything in between. But I develop for a living. I don’t develop FOR Swift or Objective-C. I develop PRODUCTS that use certain languages and tools. I’m in business, therefore I use the tools that are most cost-effective for me, over a range of criteria. These include memory burdens, paradigm shift burdens, the tug of novelty vs the tug of conservatism, dealing with Beta issues, etc.
If you want to use Swift, nobody is stopping you. If it’s right for you, it’s right for you. But unless you want to pay my bills for me, I’m not going to listen to you if you tell me I’m “wrong” without valid reasons beyond “Swift is easier to write”.
if you care about what’s best for you and not for your clients, that’s up to you. When your competition is able to deliver more reliable product in less time, thus costing them less, you might find you’re customers looking elsewhere. They don’t care if you’re too lazy too learn something new. Time is money.
Very good post. Swift is an exciting language, but as I shared in my blog () I really think Apple should have deprecated the features removed from Swift 1 when they introduced Swift 2 instead of totally removing them.
As someone who has introduced a lot of Swift into a large project over the last year, I disagree; if some of the other stuff had been merely deprecated it would have remained, growing technical debt and making the code harder to maintain in the future. The update tools have by and large worked pretty well to migrate to newer versions of swift, and the parts that did not didn’t take much time to re-work slightly and were better for it.
Just like in app design it’s better not to leave a user with too many ways to do something, in language design I feel like it’s important to clean out cruft early and move forward so that programmers generally have a better codebase.
Can you imagine what it would be like if several variants of range definition were still around? No thanks!
Good points. That’s why it’s important to comment on and document your code. I haven’t used Swift much myself as my day job has been keeping me too busy, but I had just started learning Swift 1 when Swift 2 was released and it was a little hard to really get into it when it’s such a moving target..
I would suggest continuing on your learning path and not worrying too much about the moving target. Each update so far has been incremental and you won’t have to experience the Swift version changes until you update your dev environment (Xcode, iOS). Once you decide to do that, run the migration tool (which, as @ktest098 said, has been very good to date) on your code. This is a good way to learn about Swift version changes and new features. Note, the next migration to 3.0 is going to be a significant. If I were just now learning Swift, I probably would not wait for 3.0. Get going and learn the base language. There’s more to learning Swift than just learning the syntax. With Swift, I am still learning to “think different” (coming from many years of Java).
FWIW, I’ve been using Swift for my own projects since its introduction, and I have yet to encounter a situation where Xcode wasn’t capable of updating it automatically. Not saying it can’t happen, but so far the conversion scripts have worked 100% for me.
-jcr
I have to strongly disagree with this sentiment, mostly because I believe the main thesis of “…is not stable or will be for a while to come” is very misleading and will do a disservice to developers who could greatly benefit from using Swift.
While technically correct on Swift being in flux, the amount of _breaking_ changes an iOS developer can expect are extremely limited. A var++ here or a foo.count there, or in absolute worst case minor grammatical updates to Foundation/UIKit – but nothing that a cup of coffee’s worth of time won’t fix. Shiny new features, of course, can be adopted whenever.
Maintaining an app of over 400 Swift files and 800 Obj-C files, I can attest that the only real pain we ever felt was in the early days of SourceKit crashes and slow build times – which have become non-issues as the toolchain has greatly matured.
Alternatively, I cannot recommend enough the benefits in safety and productivity Swift provides _today_. Generics and protocol extensions in particular have been paying dividends for months.
Here’s another group that happily and successfully started using Swift:
[…] Source: When your client demands Swift — Erica Sadun […] | https://ericasadun.com/2016/02/08/when-your-client-demands-swift/ | CC-MAIN-2021-43 | refinedweb | 2,219 | 71.95 |
WP7 LongListSelector in depth | Part2: Data binding scenariospublished on: 11/16/2010 | Views: N/A | Tags: WP7Toolkit Binding LongListSelector windows-phone
This is Part2 of the "WP7 LongListSelector in depth" series of two posts in which I talk about the key properties, methods, events and the basic structure of the LongListSelector in details. In the first "Part1: Visual structure and API " I explained the visual structure of the control and all abut the available public API. Now In "Part2: Data binding scenarios" I will talk about using the API and populating LongListSelector in different ways.
Note: Take a look at the previous "LongListSelector in depth - Part1: Visual structure and API" post for reference. For more information about all new controls in the updated version of the toolkit please visit the "Silverlight Toolkit November 2010 Update - What's New" post.
Generally when talking about the LongListSelector and populating it with data you have two chaises:
- to use it as a standard ListBox with flat lists.
- to use it as an advanced ListBox with grouped lists.
Note: LongListSelector supports full data and UI virtualization.It is usually used to scroll through long lists of data.
To begin with lets first mention that in this article I will use the following simple data source (presenting the Country/Language/City relation):
Sample data source:
public class City { public string Name { get; set; } public string Country { get; set; } public string Language { get; set; } } List<City> source = new List<City>(); source.Add(new City() { Name = "Madrid", Country = "ES", Language = "Spanish" }); source.Add(new City() { Name = "Barcelona", Country = "ES", Language = "Spanish" }); source.Add(new City() { Name = "Mallorca", Country = "ES", Language = "Spanish" }); source.Add(new City() { Name = "Las Vegas", Country = "US", Language = "English" }); source.Add(new City() { Name = "Dalas", Country = "US", Language = "English" }); source.Add(new City() { Name = "New York", Country = "US", Language = "English" }); source.Add(new City() { Name = "London", Country = "UK", Language = "English" }); source.Add(new City() { Name = "Mexico", Country = "MX", Language = "Spanish" }); source.Add(new City() { Name = "Milan", Country = "IT", Language = "Italian" }); source.Add(new City() { Name = "Roma", Country = "IT", Language = "Italian" }); source.Add(new City() { Name = "Paris", Country = "FR", Language = "French" });
1.FlatList implementstion.
The first thing to do when using flat lists is to set IsFlatList="True".
Simple Flat List
Lets say we want to show a standard list structure with minimum efforts in our LongListSelector. The source code for accomplishing this is as follows:
XAML:
<toolkit:LongListSelector x:
C#:
this.citiesList.ItemsSource = new List<string> { "Madrid", "Barcelona", "Mallorca", "Las Vegas" };
The result is :
Composite Flat List
In more composite flat scenarios you can use more complex data, define your own ItemTemplates and customize the appearance of the items. For instance in this example I will show the names of the countries, cities and the specified languages in different colors. Lets add some elements to the ItemTemplate, ListHeaderTemplate and ListFooterTemplate. The source cod is:
Note: The sample data source is given at the beginning of the article. Take a look at it for reference. For more information about ItemTemplate, ListHeaderTemplate and ListFooterTemplate visit the previous "WP7 LongListSelector in depth | Part1: Visual structure and API" post.
XAML:
<DataTemplate x: <Border Background="Purple"> <TextBlock Text="Cities Header" /> </Border> </DataTemplate> <DataTemplate x: <Border Background="Green"> <TextBlock Text="Cities Footer" /> </Border> </DataTemplate> <DataTemplate x: <StackPanel Grid. <TextBlock Text="{Binding Name}" FontSize="26" Margin="12,-12,12,6"/> <TextBlock Text="{Binding Country}" Foreground="GreenYellow"/> <TextBlock Text="{Binding Language}" Foreground="Orange" /> </StackPanel> </DataTemplate> <toolkit:LongListSelector x:
C#:
The result is:
this.citiesList.ItemsSource = source;
Generally when used in Flat mode the LongListSelector is nothing more than a List with Header and Footer.
2.Grouped Lists implementation
This is the more complex scenario. In order to have a hierarchy with groups IsFlatList must be set to False which is actually its default value.
Lets focus on implementing the group hierarchy. In order to fit in the ItemsSource requirements the group class should implement IEnumerable. In our case it looks like the following:
C#: }
Note: I have overridden the Equals(object obj) method.
Basically we have a Title property of type string that will be used as a text in the group items/headers. After we have defined the group class its time to add it to our data source and finally set the ItemsSource of our LongListSelector. To do this we use a Linq expression so that we are able to add each group in the right place:
C#:
var cityByCountry = from city in source group city by city.Country into c orderby c.Key select new Group<City>(c.Key, c); this.citiesListGropus.ItemsSource = cityByCountry;
Note: The sample data source is given at the beginning of this article. Take a look at it for reference.
In the given example all cities are grouped by country name so as a result the group headers/items contain the name of the country and below each group appear all the information connected with this country. The XAML structure is as follows:
XAML:
<DataTemplate x: <Border Background="YellowGreen" Margin="6"> <TextBlock Text="{Binding Title}" FontSize="40" Foreground="Black"/> </Border> </DataTemplate> <DataTemplate x: <Border Background="YellowGreen" Width="99" Height="99" Margin="6"> <TextBlock Text="{Binding Title}" FontSize="40" Foreground="Black"/> </Border> </DataTemplate> <toolkit:LongListSelector x: </toolkit:LongListSelector>
The corresponding ItemTemplate, ListHeaderTemplate and ListFooterTemplate are the same as those given in the above Composite Flat List section. And here are some screen shots to see the result:
The next step is to customize the group popup. We will change the default ItemsPanel with a WrapPanel:
XAML:
<toolkit:LongListSelector.GroupItemsPanel> <ItemsPanelTemplate> <toolkit:WrapPanel/> </ItemsPanelTemplate> </toolkit:LongListSelector.GroupItemsPanel>
After that the LongListSelector popup looks like:
In this post I demonstrated how to bind a Windows Phone 7 LongListSelector to different data in Flat and Grouped mode.
Hope you enjoy this article. The full source code is available here.
You can also follow us on Twitter: @winphonegeek for Windows Phone; @winrtgeek for Windows 8 / WinRT
Articles
posted by: Dick Heuser on 11/16/2010 6:56:44 PM
These are great articles. Thank you very much for taking the time to do them so well.
Dick
posted by: UshaKiran on 11/18/2010 1:33:21 PM
Excellent work... Thanks a lot...
Fantastic: Thank You!
posted by: Andy Pennell on 11/19/2010 1:34:38 PM
This made things so much clearer for me, thank you so much. I created my own post describing how to do this without LINQ for asynchronous population: by expanding your Group class.
Databinding SelectedItem
posted by: pFaz on 12/4/2010 12:46:07 AM
Does databinding SelectedItem work? This doesn't update a bound property on VM.
RE:Databinding SelectedItem
posted by: winphonegeek on 12/5/2010 7:41:58 PM
Yes it works. However, in order to update the property on your view model you will need to use two way binding:
SelectedItem="{Binding SelectedItem, Mode=TwoWay}"
You man also need to build and use the latest change set of the toolkit. More details here: WP7 LongListSelector and ListPicker fixes and new features in the latest build
Excellent
posted by: Rakesh on 12/10/2010 9:29:00 AM
Awesome Work..!! Great Explanation.. Had been waiting for this control for my project for quite long..as buget constraints held my neck from reinventing the wheel!!
Problem
posted by: Alberto on 12/16/2010 3:41:21 PM
When I try to use the LongListSelector always obtain a invalid cast exception in the LongListSelector.flattenData() funtion.
I'm doing all following the example, and I'm a little bit desperated.
re: Problem
posted by: winphonegeek on 12/17/2010 1:43:13 PM
@Alberto,
looking at the long list selector code it seems that you are binding to group objects that do not implement IEnumerable
However, without looking at your code we cannot be completely sure what the actual problem is. It will be helpful if you can send the code that does the binding.
Thank you
posted by: skt on 12/18/2010 3:20:48 AM
Just wanted to thank you for the blog post and code. I had been looking for a simple example of the LongListSelector and this was perfect.
linq expression not evaluating
posted by: Ron on 12/24/2010 10:05:46 AM
i followed your code but linq expression is not working for me. This is what i'm getting
<a target='_blank' title='ImageShack - Image And Video Hosting' href=''><img src='' border='0'/></a> i tried the code in visual studio c# and it works , not in vs express for wp7
fixed it
posted by: Ron on 12/25/2010 11:29:02 PM
i had to reinstall windows phone developers tool. Linq is working properly now. Thanks for the blog.
Sample code no longer works
posted by: Avery Pierce on 1/13/2011 12:59:15 AM
Everything worked great until build 60019. Your sample code now throws an InvalidCastException when you select a group item in the group popup. I'm hoping you can update your sample code because I used your code in a project and I'm not sure how to fix it. Thanks.
In LongListSelector.cs:
private int GetGroupOffset(object group) { int listHeaderOffset = HasListHeader && ShowListHeader ? 1 : 0; int listFooterOffset = HasListFooter && ShowListFooter ? 1 : 0;
bool displayAll = DisplayAllGroups; int groupHeaderOffset = GroupHeaderTemplate != null ? 1 : 0; int groupFooterOffset = GroupFooterTemplate != null ? 1 : 0;
int offset = listHeaderOffset;
foreach (IList g in ItemsSource) <-- EXCEPTION IS THROWN HERE (IList g) { if (g.Equals(group)) { break; }
if (displayAll || g.Count > 0) { offset += groupHeaderOffset + groupFooterOffset; }
offset += g.Count; }
return offset; }
RE:Sample code no longer works
posted by: winphonegeek on 1/16/2011 1:04:46 AM
Unfortunately in the latest internal build 60019 there are some changes in the ItemSource requirements. I.e the group items in the ItemSource collection must implement IList. The strange thing is that the toolkit default example is also not working and throws the same exception as you reported. So in order to fix the LongListSelector all you have to do is to implement IList (Note that practically only the Count property has to be changed!):
public class Group<T> : IEnumerable<T>, IList ... public int Count { get { return this.Items.Count; } }
I hope that this will help you. NOTE: This is only internal change set but not an official release!
RE:Sample code no longer works
posted by: TN on 3/7/2011 3:03:36 AM
But... IList doesn't require a Count property?
RE:Sample code no longer works
posted by: windowsphonegeek on 3/7/2011 9:38:01 AM
The issue with IList was fixed so if you use the latest release of the toolkit then implementing IList is no longer required. We have explained all this in the following post: WP7 Toolkit LongListSelector fix in the latest build: 60973
opening an item in a detailed page
posted by: Ghisura on 3/17/2011 2:21:32 PM
hello,
could you tell me how to send the information to the longlist of cities in a detailed new page please? I'm looking for by clicking on a city to open a new page where I could display different informations for this city.
I tried to adapt the example provided with the toolkit, but without success.
could you give me the source code for mainpage.cs and detailedpage.cs if possible? I'd be very grateful.
like :
void CitySelectionChanged(object sender, SelectionChangedEventArgs e)
{ NavigationService.Navigate(new Uri("/Samples/detailedpage.xaml,UriKind.Relative)); }`
`detailedpage.cs
protected override void OnNavigatedTo(NavigationEventArgs e) { } ` thank you.
RE:opening an item in a detailed page
posted by: winphonegeek on 4/26/2011 10:14:24 PM
When navigating to the details page you can pass data(for example Name or ID of the City) which can be used in your case to load the details for a specific City. More information about passing data when navigating to a page you can find in the following article:
One item appears in more then one groups
posted by: magician on 5/10/2011 9:31:48 PM
how do you handle the scenario of one item appearning in more then one groups? For example in your cities example - say instead of city we group based on languages and a city may have more than one languages. so effectively a city can come under more than group. How to do this with long list selector?
Thank you in advance.
Automatic sorting?
posted by: itsme on 5/23/2011 3:43:42 PM
Hi
I am fetching data from web and inserting it to a List, displaying as GroupItems. Everything works fine, just the list is sorted from a to z. I want that it doesnt sort. Is that possible? I have three data in the list, URL, Name and Group.
got it..
posted by: itsme on 5/25/2011 6:05:09 PM
was easy..just removed "order by"line.
WP7er
posted by: Ruaki on 6/4/2011 4:54:08 PM
Hi there?
I'm working on an WP7 app, in which I have a longlistSelector to bind data.
And, I have made the data binded on it.
but how can I add element to the datasource meanwhile, the longlistSelector and get the
new item, and present it on itself?? ( in mvvm)
Thanks.
mail : [email protected]
Dynamic Header & Footer
posted by: mlee on 6/8/2011 11:30:47 AM
Hi,
How can I display the Count of items of the bound datasource in the header and footer? Is it a case of hardcoded text only?
Log=ng Listpicker linq qry problems
posted by: Juan R. on 6/23/2011 7:30:57 AM
Hi I am having a problem displaying my information I have followed you sample above however only the jump list seems to display the data the listpicker does not.. here is my code sample..
c#code
private void GetUpCommingEpisodes() { int showcount = MyShowItems.Count; List<ShowDetail> episodes = new List<ShowDetail>(); for (int i = 0; i < showcount; ++i) { episodes = MyShowItems[i]._episodeList; } var episodesbydate = from e in episodes group e by e.DateStarted into n orderby n.Key descending select new Group<ShowDetail>(n.Key, n); lstepisodes.ItemsSource = episodesbydate; } }
xaml code
<!-- The group header template, for groups in the main list --> <DataTemplate x: <Border Background="#FFE51515" Height="30"> <TextBlock Text="{Binding Title}" Style="{StaticResource PhoneTextLargeStyle}" /> </Border> </DataTemplate> <!-- The template for groups when they are items in the "jump list". Not setting --> <!-- the GroupItemTemplate property will disable "jump list" functionality. --> <DataTemplate x: <Border Background="#FFE51515" Margin="{StaticResource PhoneTouchTargetOverhang}" Padding="{StaticResource PhoneTouchTargetOverhang}"> <TextBlock Text="{Binding Title}" Style="{StaticResource PhoneTextLargeStyle}"/> </Border> </DataTemplate> <!-- The template for movie items --> <DataTemplate x: <Grid Margin="{StaticResource PhoneTouchTargetOverhang}"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <StackPanel Grid. <TextBlock Text="{Binding key}" Style="{StaticResource PhoneTextLargeStyle}" FontFamily="{StaticResource PhoneFontFamilySemiBold}" TextWrapping="Wrap" Margin="12,-12,12,6"/> <TextBlock Text="{Binding key}" Style="{StaticResource PhoneTextNormalStyle}" TextWrapping="Wrap" FontFamily="{StaticResource PhoneFontFamilySemiLight}"/> </StackPanel> </Grid> </DataTemplate>
RE: LongListSelector LINQ query problems
posted by: winphonegeek on 6/24/2011 11:29:17 AM
@Juan: while looking at your code I noticed the following two places that could potentially lead to problems:
in the GetUpCommingEpisodes method in the for loop, only the episodeList of the last item in the MyShowItems collection will be assigned to the episodes variable.
in the item template, both TextBlock controls are bound to a property named "key", and if there is no such property in your ShowDetail class then it would most probably seem that only groups are properly displayed and items are not; note that in the citiesItemTemplate in the article all three TextBlock controls are bound to different properties of the City class.
RE: @Ruaki
posted by: winphonegeek on 6/24/2011 12:19:39 PM
you can update the LongListSelector control dynamically by:
- inheriting your group class from ObservableCollection in order to propagate changes in a group's items to the UI
- binding the LongListSelector control to an ObservableCollection of group items in order to propagate changes to the list of groups to the UI
expect an article about that soon
RE: Dynamic Header & Footer
posted by: winphonegeek on 6/24/2011 12:47:43 PM
you can display the number of groups by putting a TextBlock in your header (or footer) template that binds to the Count property of the LongListSelector's data source:
<TextBlock Text ="{Binding Path=ItemsSource.Count, ElementName=citiesListGropus}"/>
Note that for the above binding to work you will have to bind the LongListSelector to a a collection that has a Count property (List for example) and not directly to an IQueryable. A simple way to do this is:
IList<Group<City>> cityByCountryList = cityByCountry.ToList();
If displaying the number of groups is not what you want, you will have to implement a property that returns what you need and bind to it.
RE: @Ruaki Update the LongListSelector control dynamically
posted by: windowsphonegeek on 6/28/2011 1:34:04 PM
RE: @Ruaki
We have just published a new article that explains how to update the LongListSelector dynamically: Dynamically updating a data bound LongListSelector in Windows Phone
ExpanderView inside longlistselector
posted by: WinDev on 11/8/2011 7:20:35 AM
Hi,
I am using the expanderview as the item in long list selector.
The issue which I faced is when we expand two or more items and then scroll till end/begin of list, and then back to that expanded items, the items are meshed up. The UI is disturbed, sometimes it causing crash too.
I think the issue comes due to list item De-allocated/ recycle(due to not visible) and then again allocated when appears back.
Please let me know the solution or suggestions to resolve this.
RE: ExpanderView inside longlistselector
posted by: winphonegeek on 11/8/2011 12:22:37 PM
Do you get any exceptions? If yes, it will help if you can share them (together with the stack trace).
Also, the long list selector is complex enough by itself, even without having expanders as items. Is there no other way to present the data that you have?
ExpanderView inside longlistselector
posted by: WinDev on 11/9/2011 8:07:53 AM
Hi,
Thanks for the reply.
Actually I am having the disturbed UI while scrolling up/down with expanded items.
Refer the attached screenshot.
Here, Art1, Art2, Art3 are under "Art" category; I am expanding the Art & Business types and then scrolling upto end of the list; then scrolling up to these items, they are mixed up; the Arts sub types are mixing with Business.
I think we need to do something when list items are being recycled.
Please suggest me what should we do to resolve this.
What other ways can you suggest to represent our data like these.
RE: ExpanderView inside longlistselector
posted by: winphonegeek on 11/9/2011 11:20:22 AM
Indeed, it seems like this "disturbed" UI is caused by the recycling of items during scrolling. The picture shows that the long list selector does not take into account the children (art2 and art3) of the Art item. Which is normal probably, because the expanded state is not remembered and restored during this recycling.
It seems that you have two options:
- (recommended) - why not have your groups be the categories, i.e. instead of having groups in the long list selector like A, B, etc. why not have the groups be Art, Business, etc. This way you will only have 2 levels instead of 3 and will be using the long list selector control the way it is intended to be used, both from development and user points of view.
- try to preserve the expanded state in the objects that you bind to items in the long list selector, so that when items are recycled and bound to a data item the expanded state is restored. This may work, but it may also not work, and it is probably not worth to implement it.
ExpanderView inside longlistselector
posted by: WinDev on 11/10/2011 10:35:02 AM
Hi,
Thanks for your answers and kind helps.
I can't follow the option#1 as we may have many categories under the same group, like group "T" may contains Travel, Technology, Trends, etc.. categories. So, We should have the first letter only as the group.
Also, for option#2 I have already bind a property (two way mode) to the expanded attr. of longlistselector.
So, as of now we are stuck with the same disturbed UI.
Forgive me if I am harassing you.
Thanks again for your helps.
ExpanderView inside longlistselector
posted by: WinDev on 11/11/2011 7:34:30 AM
Hi winphonegeek,
any suggestions from you to resolve this?
RE: ExpanderView inside longlistselector
posted by: winphonegeek on 11/13/2011 12:23:14 PM
@WinDev
Another thing that you could (in order not to use the expander in the long list selector) is to have the letter categories (A, B, C, ...) and under them the subcategories like what you are doing now. But then instead of showing the third level items in an expander, show them in a popup or a new page. And if you wanted to have your users more informed, you could show a summary for each item in the long list selector, like for example if it contains any items or not, so the user knows that there is no point in invoking the popup / new window.
i want to display all alphabets that not included in the collection
posted by: anoop on 12/7/2011 4:51:59 PM
hi all.. i wanted to display all the alphabets in the pop up;in which have to show alphabet which has no items as another color..?any solution?
Setting the ItemsPanelTemplate for the items, not only for the groups
posted by: Brian Elgaard on 12/17/2011 10:42:53 PM
It is fine to be able to set the ItemsPanelTemplate for the group items, but how can I set the ItemsPanelTemplate for the items in the list?
I would like to be able to use a WrapPanel for the items in the list.
Maybe someone has extended the LongListSelector with a dependency property, LongListSelector.ItemsPanel? Or maybe there is a trick I could use?
/Brian
Data binding or sample data
posted by: Jason Short on 12/28/2011 8:03:12 AM
I understand the sample above, very clear and concise. But I cannot figure out how would you generate a data context for this grouped data so you have design time data? Maybe I am just not that good with XAML, but I really need the sample data to visualize the layout of the controls. :)
tombstoning and binding
posted by: mayhemsoftware on 3/2/2012 10:00:32 AM
The longlistselector is a great control but I did come across a few issues with it related to selecteditem binding and tombstoning.
They are documented here:
Re: Data binding or sample data
posted by: JustAnotherAppDeveloper on 3/21/2012 6:24:29 AM
@Jason Short take a look at the latest LongListSelector walkthrough here:
framework
posted by: Annie Calvert on 7/19/2012 10:54:28 AM
Do these frameworks also support jQueryMobile applications? Some month ago I was not able to get KnockoutJS working properly with jQueryMobile (ajax DOM updates didn't work with bindings, etc.), so I gave it up..
Data Virtualization (LLS WP7)
posted by: Nazar on 1/15/2013 4:23:20 PM
I don’t understand why you saying that LongListSelector supports full data virtualization when it calls GetEnumerator() in IList instead of indexer. Data virtualization means getting only batch of necessary elements to fill virtualized UI by calling IList’s indexer.
Perform work by action
posted by: Mani on 2/3/2013 12:08:05 PM
I have a button in the data template. Now how can i get all the data found in data template through this button click event?
This is the data template code
how to scrolling same loike our windows phone 7 people hub app?
posted by: Ram on 3/28/2013 7:49:09 AM
HI ,
I am using WP7 LongListSelector.
how to scrolling same like windows phone 7 people hub application ? when I scrolling my list it is scroll total with header group also but I need only scroll items group up to second group coming to first group after move first group when our second group coming to first group.
any one please help to me .
Thanks , Ram
Sorting the items in the group
posted by: Andrei on 9/12/2013 12:09:09 AM
Hi,
How can I sort the items in the group also based on the alphabet? For example in the example you gave with the cities, for Spain the order should be Barcelona, Madrid, Mallorca... | http://www.geekchamp.com/articles/wp7-longlistselector-in-depth--part2-data-binding-scenarios | CC-MAIN-2015-14 | refinedweb | 4,140 | 61.87 |
This is the PurchaseOrder schema.
The corresponding data certainly does not reside in the EMPLOYEES table. How do you intend to generate a valid XML, and from which table?
Yes, you are correct, but if I am specifying the wrong schema it should throw an error, right? Yet it executes.
See the code below:
SELECT XMLElement("Employee",
XMLAttributes(''
as "xmlns:xsi",''
as "xsi:SchemaLocation")
, XMLForest(employee_id)
) AS "RESULT"
FROM employees
WHERE department_id=10;
In the above, everything refers to ipo.xsd, yet it executes and returns a result. If it is pointing to the wrong XSD it should throw an error, but I am getting a result for this.
Thanks,
mahesh
Yes, you are correct, but if I am specifying the wrong schema it should throw an error, right? Yet it executes.
Yes, sorry for the misunderstanding.
The automatic validation will take place only if the schema location is specified properly.
That's not the case in your example:
- there's a typo in "xsi:SchemaLocation": it should be "xsi:schemaLocation".
- the location itself is incorrect. When the schema defines a target namespace, the location must be given using "namespace_uri schema_url".
This works for me on 10.2.0.5 (the closest I have to your version) :
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> SELECT XMLElement("Dummy",
2 XMLAttributes(
3 '' as "xmlns:xsi"
4 , '' as "xsi:schemaLocation"
5 )
6 )
7 FROM dual ;
FROM dual
*
ERROR at line 7:
ORA-31043: Element 'Dummy' not globally defined in schema
''
Hi, thanks. That is the same error which I am getting right now. If this is the case, can you tell me how to overcome this error?
ORA-31043: Element 'Dummy' not globally defined in schema ''
Mahesh
That's not an "error" per se.
It means the schema validation is now working as expected and has detected that the dummy document doesn't conform to the specified schema.
Isn't that what you wanted to check?
Hi, thanks, it's working fine now that I changed "Dummy" to "purchaseOrder" in the root element.
SELECT XMLElement("purchaseOrder",
XMLAttributes('' AS
"xmlns:xsi",
'' AS
"xsi:schemaLocation"),
XMLForest(employee_id, last_name, salary)) AS "RESULT"
FROM hr.employees
WHERE department_id = 10;
But my question remains: the XSD is different and the values being passed are different. See below:
XMLForest(employee_id, last_name, salary)) AS "RESULT"
FROM hr.employees
WHERE department_id = 10;
But still it is executing and giving output if iam right when i am referning to wrong XMLForest(employee_id, last_name, salary)) and wrong table also it should through the
error.
Mahesh
could you please provide me a best way to validate my generated xml with xsd
OK, after testing a little bit further myself, it looks like we cannot rely on the schema validation feature this way (at least on 10.2). It seems to only validate the root node.
If you can afford a second step, you have to use XMLType's schemaValidate() method.
It'll perform a full validation and throw an error if the validation fails :
DECLARE
doc xmltype;
BEGIN
select xmlelement( ... )
into doc
from my_table ;
doc.schemaValidate();
END;
/
Hi Thanks a lot thank you so much its working fine .
I like to know one more thing in this doc xmltype in this how much memory the doc will store.
Because there is a possibilty for which the select statement can store millions of record , i would like to know two points clearly.
1.Is the doc can hold more memory is the number of rows retrived is more than million.
2.And if the select statement is selecting more rows which means it will be genereated as one XML?
Pleas help me on this .
Mahesh
1.Is the doc can hold more memory is the number of rows retrived is more than million.
In your version, the XMLType datatype is still based on a CLOB object, which can hold gigabytes of data.
Let me ask you another question : what are you intending to do with the XML? Writing to file, sending to another Oracle process, etc. ?
2.And if the select statement is selecting more rows which means it will be genereated as one XML?
Well, you have to make sure it returns only one row.
Hi Thanks,
1. I want to generate an XML and i have to validate it with XSD and then this XML file has to be send to a remote place.
2. And i also want to generate xml for many rows as a one file and have to validate as well as,
Thank you,
Mahesh
if i want to store all as one xml and also xsd validation which would be the best way please suggest some ideas
Hi Please share me some points its really very urgent.
47e033bd-d313-47bd-9372-871358ce3c3e wrote:
if i want to store all as one xml and also xsd validation which would be the best way please suggest some ideas
I already showed you how to do it.
Which of those two points you don't understand?
hi thanks ya got it XMLAGG is working fine very thnaks again | https://community.oracle.com/message/11129698 | CC-MAIN-2014-15 | refinedweb | 859 | 68.2 |
This is step 8 of a free "NerdDinner" application tutorial that walks-through how to build a small, but complete, web application using ASP.NET MVC 1.
Step 8 shows how to add paging support to our /Dinners URL so that instead of displaying 1000s of dinners at once, we'll only display 10 upcoming dinners at a time - and allow end-users to page back and forward through the entire list in an SEO friendly way.
If you are using ASP.NET MVC 3, we recommend you follow the Getting Started With MVC 3 or MVC Music Store tutorials.
NerdDinner Step 8: Paging Support
If our site is successful, it will have thousands of upcoming dinners. We need to make sure that our UI scales to handle all of these dinners, and allows users to browse them. To enable this, we'll add paging support to our /Dinners URL so that instead of displaying 1000s of dinners at once, we'll only display 10 upcoming dinners at a time - and allow end-users to page back and forward through the entire list in an SEO friendly way.
Index() Action Method Recap
The Index() action method within our DinnersController class currently looks like below:
// // GET: /Dinners/ public ActionResult Index() { var dinners = dinnerRepository.FindUpcomingDinners().ToList(); return View(dinners); }
When a request is made to the /Dinners URL, it retrieves a list of all upcoming dinners and then renders a listing of all of them out:
Understanding IQuerable<T>
IQueryable<T> is an interface that was introduced with LINQ as part of .NET 3.5. It enables powerful "deferred execution" scenarios that we can take advantage of to implement paging support.
In our DinnerRepository we are returning an IQueryable<Dinner> sequence from our FindUpcomingDinners() method:
public class DinnerRepository { private NerdDinnerDataContext db = new NerdDinnerDataContext(); // // Query Methods public IQueryable<Dinner> FindUpcomingDinners() { return from dinner in db.Dinners where dinner.EventDate > DateTime.Now orderby dinner.EventDate select dinner; }
The IQueryable<Dinner> object returned by our FindUpcomingDinners() method encapsulates a query to retrieve Dinner objects from our database using LINQ to SQL. Importantly, it won't execute the query against the database until we attempt to access/iterate over the data in the query, or until we call the ToList() method on it. The code calling our FindUpcomingDinners() method can optionally choose to add additional "chained" operations/filters to the IQueryable<Dinner> object before executing the query. LINQ to SQL is then smart enough to execute the combined query against the database when the data is requested.
To implement paging logic we can update our DinnersController's Index() action method so that it applies additional "Skip" and "Take" operators to the returned IQueryable<Dinner> sequence before calling ToList() on it:
// // GET: /Dinners/ public ActionResult Index() { var upcomingDinners = dinnerRepository.FindUpcomingDinners(); var paginatedDinners = upcomingDinners.Skip(10).Take(20).ToList(); return View(paginatedDinners); }
The above code skips over the first 10 upcoming dinners in the database, and then returns back 20 dinners. LINQ to SQL is smart enough to construct an optimized SQL query that performs this skipping logic in the SQL database – and not in the web-server. This means that even if we have millions of upcoming Dinners in the database, only the 10 we want will be retrieved as part of this request (making it efficient and scalable).
Adding a "page" value to the URL
Instead of hard-coding a specific page range, we'll want our URLs to include a "page" parameter that indicates which Dinner range a user is requesting.
Using a Querystring value
The code below demonstrates how we can update our Index() action method to support a querystring parameter and enable URLs like /Dinners?page=2:
// //); }
The Index() action method above has a parameter named "page". The parameter is declared as a nullable integer (that is what int? indicates). This means that the /Dinners?page=2 URL will cause a value of "2" to be passed as the parameter value. The /Dinners URL (without a querystring value) will cause a null value to be passed.
We are multiplying the page value by the page size (in this case 10 rows) to determine how many dinners to skip over. We are using the C# null "coalescing" operator (??) which is useful when dealing with nullable types. The code above assigns page the value of 0 if the page parameter is null.
Using Embedded URL values
An alternative to using a querystring value would be to embed the page parameter within the actual URL itself. For example: /Dinners/Page/2 or /Dinners/2. ASP.NET MVC includes a powerful URL routing engine that makes it easy to support scenarios like this.
We can register custom routing rules that map any incoming URL or URL format to any controller class or action method we want. All we need to-do is to open the Global.asax file within our project:
And then register a new mapping rule using the MapRoute() helper method like the first call to routes.MapRoute() below:
public void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( "UpcomingDinners", // Route name "Dinners/Page/{page}", // URL with params new { controller = "Dinners", action = "Index" } // Param defaults ); routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with params new { controller="Home", action="Index",id="" } // Param defaults ); } void Application_Start() { RegisterRoutes(RouteTable.Routes); }
Above we are registering a new routing rule named "UpcomingDinners". We are indicating it has the URL format "Dinners/Page/{page}" – where {page} is a parameter value embedded within the URL. The third parameter to the MapRoute() method indicates that we should map URLs that match this format to the Index() action method on the DinnersController class.
We can use the exact same Index() code we had before with our Querystring scenario – except now our "page" parameter will come from the URL and not the querystring:
// //); }
And now when we run the application and type in /Dinners we'll see the first 10 upcoming dinners:
And when we type in /Dinners/Page/1 we'll see the next page of dinners:
Adding page navigation UI
The last step to complete our paging scenario will be to implement "next" and "previous" navigation UI within our view template to enable users to easily skip over the Dinner data.
To implement this correctly, we'll need to know the total number of Dinners in the database, as well as how many pages of data this translates to. We'll then need to calculate whether the currently requested "page" value is at the beginning or end of the data, and show or hide the "previous" and "next" UI accordingly. We could implement this logic within our Index() action method. Alternatively we can add a helper class to our project that encapsulates this logic in a more re-usable way.
Below is a simple "PaginatedList" helper class that derives from the List<T> collection class built-into the .NET Framework. It implements a re-usable collection class that can be used to paginate any sequence of IQueryable data. In our NerdDinner application we'll have it work over IQueryable<Dinner> results, but it could just as easily be used against IQueryable<Product> or IQueryable<Customer> results in other application scenarios:
public class PaginatedList<T> : List<T> { public int PageIndex { get; private set; } public int PageSize { get; private set; } public int TotalCount { get; private set; } public int TotalPages { get; private set; } public PaginatedList(IQueryable<T> source, int pageIndex, int pageSize) { PageIndex = pageIndex; PageSize = pageSize; TotalCount = source.Count(); TotalPages = (int) Math.Ceiling(TotalCount / (double)PageSize); this.AddRange(source.Skip(PageIndex * PageSize).Take(PageSize)); } public bool HasPreviousPage { get { return (PageIndex > 0); } } public bool HasNextPage { get { return (PageIndex+1 < TotalPages); } } }
Notice above how it calculates and then exposes properties like "PageIndex", "PageSize", "TotalCount", and "TotalPages". It also then exposes two helper properties "HasPreviousPage" and "HasNextPage" that indicate whether the page of data in the collection is at the beginning or end of the original sequence. The above code will cause two SQL queries to be run - the first to retrieve the count of the total number of Dinner objects (this doesn't return the objects – rather it performs a "SELECT COUNT" statement that returns an integer), and the second to retrieve just the rows of data we need from our database for the current page of data.
We can then update our DinnersController.Index() helper method to create a PaginatedList<Dinner> from our DinnerRepository.FindUpcomingDinners() result, and pass it to our view template:
// // GET: /Dinners/ // /Dinners/Page/2 public ActionResult Index(int? page) { const int pageSize = 10; var upcomingDinners = dinnerRepository.FindUpcomingDinners(); var paginatedDinners = new PaginatedList<Dinner>(upcomingDinners, page ?? 0, pageSize); return View(paginatedDinners); }
We can then update the \Views\Dinners\Index.aspx view template to inherit from ViewPage<NerdDinner.Helpers.PaginatedList<Dinner>> instead of ViewPage<IEnumerable<Dinner>>, and then add the following code to the bottom of our view-template to show or hide next and previous navigation UI:
<% if (Model.HasPreviousPage) { %> <%= Html.RouteLink("<<<", "UpcomingDinners", new { page = (Model.PageIndex-1) }) %> <% } %> <% if (Model.HasNextPage) { %> <%= Html.RouteLink(">>>", "UpcomingDinners", new { page = (Model.PageIndex + 1) }) %> <% } %>
Notice above how we are using the Html.RouteLink() helper method to generate our hyperlinks. This method is similar to the Html.ActionLink() helper method we've used previously. The difference is that we are generating the URL using the "UpcomingDinners" routing rule we setup within our Global.asax file. This ensures that we'll generate URLs to our Index() action method that have the format: /Dinners/Page/{page} – where the {page} value is a variable we are providing above based on the current PageIndex.
And now when we run our application again we'll see 10 dinners at a time in our browser:
We also have <<< and >>> navigation UI at the bottom of the page that allows us to skip forwards and backwards over our data using search engine accessible URLs:
Next Step
Let's now look at how we can add authentication and authorization support to our application.
This article was originally created on July 27, 2010 | http://www.asp.net/mvc/overview/older-versions-1/nerddinner/implement-efficient-data-paging | CC-MAIN-2016-40 | refinedweb | 1,667 | 51.48 |
Help talk:Contents
From Wikibooks, the open-content textbooks collection
[edit] Which pages should we still link to?
I assume all the pages of the old help hierarchy that we no longer link to should be deleted in the future. (I've included Help:Introduction 2 in them in order to mark them.) Should we still link to Help:Introduction? Doesn't Wikibooks:Welcome and Wikibooks:About cover the same information and more? I would like to keep a link to Help:Contributing to Wikibooks for a while because this was my primary source of information about editing (or at least of links to the corresponding pages). --Martin Kraus (talk) 19:49, 19 December 2008 (UTC)
Should we include a link to Wikibooks:Template messages? I think it is about the only useful directory of templates. --Martin Kraus (talk) 15:54, 23 December 2008 (UTC)
[edit] PDF Books
I have spent the last 4-6 months working on the Commons Category:Pdf files and have created several subcategories. Many of the saves and deletions are books, University theses, manuals etc in Pdf format. It was suggested on a discussion page at one time that Pdf books and Theses go in wikibooks. If so how do I go about linking or moving these book files to here? WayneRay (talk) 15:50, 30 December 2008 (UTC)WayneRay
- You might want to discuss this question at WB:RR. I think University theses don't belong in Wikibooks because (at least in the Universities I know) they usually include original research, which should not be published at Wikibooks (WB:NOR). Moreover, you would have to convert the PDF files to the wiki markup language, as the hosting of existing books (in the PDF format) is discouraged (WB:SOURCE and WB:HOST). --Martin Kraus (talk) 16:41, 30 December 2008 (UTC)
- Last I heard in a recent reform of Commons' inclusion policy, PDF files are still allowed on Commons. I don't see any reason why they would need to be moved anywhere. They can be accessed from any project where they might be appropriate. --darklama 14:14, 11 January 2009 (UTC)
[edit] Help cleanup
Some clean up effort has been going on for some time now, on and off again, to make Wikibooks' help more useful. I propose that Wikibooks' help space be treated like a book. By formatting pages like is done within our books, people unfamiliar with Wikibooks' style can quickly become familiar with how things are done differently on Wikibooks. Using this approach the help space can teach and help people understand how to do things on Wikibooks by following its own example. To this end, I think some requirements are a must: each page must include a 2 level heading with the page/chapter title; additional headings must use 3 level heading or greater; text must not come before the first level 2 heading; and the help space must have a consistent style, which means discussion needs to happen to develop a style guide for the help namespace.
I also think the help space could be made more useful by including more media, diagrams, illustrations, etc. However media should be used to compliment rather than substitute for missing information, in order to make Wikibooks' help accessible to more people, such as the blind.
Any thoughts, ideas, opinions, or any other suggestions? --darklama 14:41, 11 January 2009 (UTC)
- I don't have any objections as such to treating the Help name-space as a book, but would caution against forcing it into that mold. First of all, I don't foresee anyone really reading this cover to cover. It should, therefore, be treated more as a handbook with a strong emphasis on structure to help readers quickly locate useful information.
- Linear navigation aids will not necessarily be of much use for such a book. As for other Wikibooks conventions, I would think it more useful to point to various featured books for examples as there often exist multiple approaches to similar issues.
- Whiteknight wants to take the book-approach a step further and deprecate the Help name-space all together. I'm not sure which approach is the best. I've so far just been trying to wrap my head around the sprawling set of pages that we have here. For the time begin, I think it would be best to implement changes in increments, first organising the available contents and then molding it into whatever we deem best. --Swift (talk) 03:49, 12 January 2009 (UTC)
- Your saying what I really meant anyways. I was trying to suggest a strong emphases on structure, as well as a strong emphases on consistent style. I wasn't suggesting that there necessarily be a specific reading order or navigation aids. I guess I'm a bit ahead of you in that regard, I've already wrapped my head around the help pages, years ago. --darklama 18:01, 12 January 2009 (UTC)
- So, how will the redesigned help space be different from Using Wikibooks? One of the major differences between the current help pages and Using Wikibooks is probably that help pages are full of links (and should be in my opinion) while the pages in Using Wikibooks have less links (since they should be self-contained as in a good textbook). Do you intend to keep this difference? --Martin Kraus (talk) 12:40, 13 January 2009 (UTC)
- Well I guess to me the most obvious differences will be in presentation and audience. I think Using Wikibooks is intended to be read in a certain order and is intended to be read by an audience already familiar with most wiki concepts, while the help space is likely to depend less on a specific reading order and should be intended to be read by people not familiar with most wiki concepts. I would like to make the help space more self-contained and limit links to important things explained in greater detail that don't quite fit the beginner audience, by linking only to Using Wikibooks and to the MediaWiki website if I can. I think links should be used more sparingly than they currently are. Does that answer your questions?
- It does, thanks. I wasn't aware that Using Wikibooks is intended for an audience who is already familiar with most wiki concepts. Since I'm rather new I often read help pages and in fact I'm often missing links. In particular, I often know that I have read some information but don't find the page to read it again. The most recent case was Help:Category which didn't link to the Wikibooks-specific information at Wikibooks:Categories (I added that link); thus, I was searching quite a bit for that page. Thus, at least currently I think links to the Wikibooks namespace should also be included. --Martin Kraus (talk) 14:22, 13 January 2009 (UTC)
- I could be wrong about the intended audience or what assumptions Using Wikibooks makes. Even if there is some overlap, I don't think that is going to be a problem. I think linking to related Wikibooks policies like with your example is fine. Don't get me wrong, I'm not suggesting that links be blindly deleted. I just think the help space relies too much on external links in an attempt to make up for what is missing or what is lacking, rather than expanding the available help. --darklama 20:00, 13 January 2009 (UTC)
[edit] Deleting non-Wikibooks-specific pages
I've mentioned this to Darklama, but I figured I'd add this here in case others would like to comment.
We have a number of help pages that explain how to use markup. The problem is that they are unmaintained versions of content that rather belongs at Meta and the MediaWiki wiki. I've gone through these and have found nothing of use to describe Wikibooks related syntax. I propose that the following be deleted:
- Help:EasyTimeline syntax (See mw:Extension:EasyTimeline/syntax)
- Help:HTML in wikitext (See meta:Help:HTML in wikitext)
- Help:List (See meta:Help:List)
- Help:Variable, Help:Magic words (See: mw:Help:Magic words)
- Help:Navigational image (See meta:Help:Navigational image)
- Help:Special characters, Help:Special characters/Part 2, Help:Turkish characters (See: meta:Help:Special characters).
- Help:TeX markup, Help:Formula (See meta:Help:Displaying a formula
- Help:URL redirect to Help:Editing?
The parenthesised links indicate where I believe we should point users who might be looking for this sort of content. --Swift (talk) 22:11, 4 February 2009 (UTC)
Oh, and this one too:
--Swift (talk) 07:29, 5 February 2009 (UTC)
- I don't think these pages are needed because the majority of the markup is described already in Help:Editing or should be described there. I think Help:Templates should also discuss variables and magic words. I don't think there will really be a need to point users to content on the other projects.
- Also I think a lot of the help on Meta is in the process of being moved to MediaWiki, because its not specific to Wikimedia projects. I think there also has been some talk of using Wikibooks to aid in improving the mediawiki documentation. In either case the lack of stability is one more reason not to rely on Meta or MediaWiki much. --darklama 05:58, 6 February 2009 (UTC)
- As far as I know, the transfer from Meta to MediaWiki to the help namespace was abandoned after the MediaWiki Help namespace was put in the public domain (the manual and extension namespace are still transwikied). The MediaWiki Help namespace has since been built from scratch. It is steadily improving and I recommend that we help direct our non-Wikibooks specific documentation efforts there. --Swift (talk) 07:22, 6 February 2009 (UTC)
- I think it may still be happening. I have tried to improve some help on Meta for instance only to go back to it to find that the page had moved to MediaWiki. I have also occasionally went to a help page to double check something only to find that that its been marked to be moved to MediaWiki or a notice that updates should now be done at MediaWiki. I think Wikibooks needs to maintain some local documentation, because linking to Meta and MediaWiki doesn't seem to be enough for most users. I just think that these pages are unneeded because local documentation is either already present on another page or should be. --darklama 09:08, 6 February 2009 (UTC)
Done Whew! I've updated a bunch of links, but left most of those on pages in the help namespace as we'll have to go through those anyway. --Swift (talk) 10:17, 13 February 2009 (UTC)
[edit] Page headings
- The following discussion was moved here from Help talk:Revision review#Page title after it moved on to a more general discussion about the sectioning of content in the help namespace. --Swift (talk) 04:06, 11 February 2009 (UTC)
I've just reverted part of an earlier edit. I removed a section header titled "Stable revisions" that spanned the entire page, and bumped the sub-section headers. If a section really spans the entire page, then the page should probably be renamed to its title.
I wasn't sure what the most proper page name would be so I checked mw:Help:FlaggedRevs which states that stable revisions are only revisions of a certain limit. On Wikibooks that limit is currently at the lowest (sighting). It might be more useful to keep a more general title for this page in line with the help page currently being developed over at MediaWiki.org. --Swift (talk) 07:57, 3 February 2009 (UTC)
- Don't think of it as a section heading, but as a page heading. Books do that all the time, its considered part of Wikibooks' style. This goes back to my suggestion on Help talk:Contents to make the help pages follow some basic book style guidelines. --darklama 14:47, 3 February 2009 (UTC)
- Oh. But we already have page headings: the page titles. I'd agree that designating the help pages with a certain look could be useful. Instead of a "page heading" we could adda navigation bar with a nice, helpful looking icon. The {{useful}} template helps with creating a standard look, but, ironically, isn't very useful itself. I'd be rather opposed to adding these page headings. --Swift (talk) 17:09, 3 February 2009 (UTC)
- I disagree. A page's title isn't a heading. The page title doesn't even show up in the TOC that is generated. Why would you be rather opposed to using page headings? I've been trying to add them to every help page I've edited for awhile now. I don't think a page's title is very useful on paper. I'd like the help pages to eventually be printed as a way of attracting more contributors to Wikibooks. Also I thought we had already agreed on Help talk:Contents that navigational templates shouldn't be used? I guess you could say I'd be rather opposed to not using page headings. --darklama 04:57, 6 February 2009 (UTC)
- The page title is a h1-heading. That isn't just technical hair-splitting as that's how it's rendered both on screen and in print. I remember reading in some W3C document on HTML recommending that pages have only a single h1 level heading each, but be sectioned up with h2 and higher. A single section for the entire page just seems redundant.
- I consider it a feature that this doesn't show up in the generated TOC. The "Stable Revisions" heading in this version serves little purpose on that page and forces every other section to be pushed down one level. I, furthermore, prefer having the h2 level headings to section pages up into logical parts as they render by default with that line underneith and the higher level headings as rather small and similar.
- If you'd like this type of page-wide sections to make it easier to stich them together in a print version, how about adding h1 level headings on the print version page (which I'd consider the the most logical approach) or create a collection?
- I think I misunderstood you a bit in the Help cleanup discussion. I think we shouldn't necessarily conform to a linear navigation, but consider a template with an overview of useful pages, well ... useful. It might be useful to move these two topics to Help talk:Contents. --Swift (talk) 06:34, 6 February 2009 (UTC)
- First off I was using a h2 heading, not a h1 heading, so there was only a single h1 level heading on the page. I think h1 headings should correlate to the sections of the help (like Introduction, Browsing Wikibooks, etc) and h2 headings should be used to divide the help into "chapters". I think this is what books do, although the separation tends to depend on whether a book merely has chapters or needs to be divided further than that. From there h3 level headings and higher are used to further divide content. This still is in line with W3C's recommendation, so I don't see the problem.
- As I'm sure you know, modules are often chapters or parts of a chapter. I think if a module is a chapter it should only have one h2 heading, or if a module is part of a chapter it should only have one h3 heading. I think doing this does serve a purpose beyond pushing page sections down. I think how the help is divided should be reflected in the use of different heading levels, and should reflect more than just how a module is divided, so that readers and writers understand how a book is divided. Plus I don't think there should be any text above a page's generated TOC. I think having text above a page's generated TOC interferes with or interrupts the flow of reading the page.
- Some printed books don't require that they be read from front to back in a liner order. I think this could be considered that type of book, so that's why I don't see a problem with not using navigational aids. Feel free to move this discussion there. --darklama 08:51, 6 February 2009 (UTC)
- I know you used a h2 heading. I thought that was clear from the first paragraph of my previous comment: "[Pages should be] sectioned up with h2 and higher. A single section for the entire page just seems redundant." I do realise that you had a useful purpose in mind with your edits. I never said that "pushing page sections down" was the purpose, but noted it as the effect.
- I think we may have been mis-reading each others comments a bit lately on issues where we don't see eye to eye. Do you see how I think you've misinterpreted my previous comment? Do you find that I'm misunderstanding you as well? I'd appreciate it if we could sort that out as I feel like we're talking in two different dialects and not progressing much because of it. --Swift (talk) 04:30, 11 February 2009 (UTC)
(reset)
I still don't understand what you're trying to say or why you disagree with my approach. Whatever you were trying to explain wasn't clear to me from the context. I guess I don't understand your reference to h1 and W3C's recommendations. To me what you said suggested that you think I wasn't following W3C's recommendation because I was using h1 headings, which wasn't apparently what you meant.
I only see that I must have misunderstood your previous comment, but not how I misunderstood it because you haven't provided any further explanation. I can't tell yet if you've misunderstood me as well. What have I said that you think you've misunderstood? If I understand you right, you seem to think the purpose of section headings is to separate a module into sections. I on the other hand think section heads are for separating a book into sections and subsections. I guess you see each help page as standing on it own, and I see the each help page as being part of a book where each chapter can be read independently. Other than trying to explain in our own words what we each think the other person is saying and trying to clarify what is misunderstood, I'm not sure what else we can do to reduce any misunderstandings, without knowing what is misunderstood. --darklama 13:24, 11 February 2009 (UTC)
- Sorry for the late reply. What I disagree with is your use of a single section to encompass the entire page. I was concerned that you might think my approach didn't tie the sections of the page together tightly enough. Rather than just saying that I disagreed with the use of only a single h2 heading per page, I therefore lead into it with the reference to the W3C's recommendation. My aim was to clarify that I didn't think the sections should just dangle unconnected, and that the h1 section (the title) should — and already does — do this.
- I don't, actually, think that I'm misunderstanding you a lot, but figured I'd still ask.
- I think we have fairly similar ideas of what to do with these pages, but differ mostly on formatting (well, and that I want to "outsource" simple syntax stuff to mw:; but that's a different topic). I don't think the help pages shouldn't form a whole, but feel (like you) that it's important that they be able to stand on their own. I'm less interested in printing these out, than in having them accessible and organised for users. Would you be willing to accept the compromise to place h1 headings between page inclusions on print versions? I don't think we should be formatting individual pages in for how they should come out on print and should focus on how they display to online users of Wikibookians. --Swift (talk) 06:58, 19 February 2009 (UTC) | http://en.wikibooks.org/wiki/Help_talk:Contents | crawl-002 | refinedweb | 3,417 | 59.84 |
: : DOMUtils.getAttributeWithInheritance instead. My one scenario I came
: : across where I wanted some context passed down was "fieldName" and this
: : is handled simply by leaf nodes walking up the w3c.dom.Node tree until
: : you find an Element with this attribute set.
:
: Hmm, i can see how that would work in some cases, but it's not very
: general purpose. Builders deep in the tree can "look up" at attributes
: higher in the tree, but that assumes the state info they want is already
: in the XML Elements (although i suppose builders higher up could
: mutate the attributes on their Element for discovery by lower level
: builders ... but that seems sketchy).
:
: It also doesn't address the problem of passing state up the tree ... which
: may be a less common case, but it still seems important to support it.
I should have thought about this more before i replied. Two things
have occurred to me in the last 30 minutes...
1) the idea i had once upon a time for maintaining stack frames of
"state" would have required builders that wanted to propogate state up
from their children to do so explicitly -- there's no reason why the same
thing couldn't be done by checking the attribute values of their child
Elements and setting them on their own Element before returning. So
yeah ... using attributes to pass state up/down would probably work in all
the cases i can think of..
....BUT...
it still seems sketchy to me to use attributes, because it could be
important to be able to tell the difference between "true" attributes
from the XML and "state" attributes passed up/down from other builders.
2) In java 1.5, org.w3c.dom.Node has setUserData/getUserData methods
specifically for the purpose of letting you annotate nodes as you move
around the tree.
So yeah ... that's something to think about.
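For reference, a rough sketch of that Java 5 user-data API — the key name "fieldName" and the element name here are just illustrations, not anything from the builder code being discussed:

```java
import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class UserDataDemo {
    // Stash builder state on a node with setUserData and read it
    // back with getUserData, leaving genuine XML attributes untouched.
    static String annotate() throws Exception {
        Document d = DocumentBuilderFactory.newInstance()
                                           .newDocumentBuilder()
                                           .newDocument();
        Element query = d.createElement("query");
        d.appendChild(query);

        // The third argument is an optional UserDataHandler invoked on
        // clone/import/rename events; null is fine for simple annotation.
        query.setUserData("fieldName", "title", null);

        return (String) query.getUserData("fieldName");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(annotate());
    }
}
```

Note that after this, `hasAttribute("fieldName")` on the element is still false, which is exactly the separation between "true" attributes and "state" that attribute-based passing lacks.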
Perhaps attribute namespaces could be used to distinguish genuine
attributes from state information ... of course that raises a whole new
question about whether the builder factories should support namespaces when
registering builders.
-Hoss
---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org | http://mail-archives.us.apache.org/mod_mbox/lucene-dev/200602.mbox/%3CPine.LNX.4.58.0602272240580.28685@hal.rescomp.berkeley.edu%3E | CC-MAIN-2020-05 | refinedweb | 368 | 70.23 |
Hello, newb question here. I'm not sure where to look in the API for this, but I would like to get the name of each object and add it to my list. I'm not sure which class or method gets the names of objects?
You should specify which language you want to see an example in.
Sorry for some reason I thought I added it in the tags. C#
You would refer to the name via RhinoObject.Attributes.Name
If backing out from the above page following the "path", or object structure, you would find the relevant parent pages.
In short: RhinoObject.Attributes.Name
And, the different ways to obtain an (Rhino) object reference in the first place, can be found in the follwoing namespace:
- Rhino.DocObjects.Tables
// Rolf
So this is my attempt to get the name attribute in a list, but I'm getting a null error in the foreach over rhinoObjects?
private ObjectTable rhinoObjects;
private System.Windows.Forms.ListBox listbox1;
private void InitializeComponent()
{
    this.listbox1 = new System.Windows.Forms.ListBox();
    // List of Objects
    this.listbox1.Click += new System.EventHandler(this.ObjectList_SelectedIndexChanged);
    this.listbox1.Name = "Outliner";
    this.listbox1.Size = new System.Drawing.Size(94, 23);
    foreach (RhinoObject o_ref in rhinoObjects)
    {
        RhinoObject obj = o_ref;
        string curvename = obj.Attributes.Name;
        this.listbox1.Items.Add(curvename);
        this.listbox1.Text = curvename;
    }
}
Because your rhinoObjects field is never initialized, it is null when the foreach runs.
You need the doc from the Rhino command and assign doc.Objects to rhinoObjects, or just foreach-loop over doc.Objects.
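For completeness, a minimal untested sketch of that approach inside a RhinoCommon command (writing to the command line rather than your listbox, since the form wiring isn't shown in the thread):

```csharp
using Rhino;
using Rhino.Commands;
using Rhino.DocObjects;

public class ListObjectNamesCommand : Command
{
    public override string EnglishName => "ListObjectNames";

    protected override Result RunCommand(RhinoDoc doc, RunMode mode)
    {
        // doc.Objects is the document's ObjectTable; enumerating it
        // yields RhinoObject instances.
        foreach (RhinoObject obj in doc.Objects)
        {
            // Name comes from the object's attributes; it is empty
            // for unnamed objects.
            string name = obj.Attributes.Name;
            RhinoApp.WriteLine(string.IsNullOrEmpty(name) ? "(unnamed)" : name);
        }
        return Result.Success;
    }
}
```

In your form code you would add each name to listbox1.Items instead of calling RhinoApp.WriteLine.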
in reply to
Pivoting parts of a database table into an HTML table
Here is a script which collates the data and produces the desired compact view:
#! perl

use strict;
use warnings;
use feature 'switch';

{
    my %h;
    populate  (\%h);
    stringify (\%h);
    print_view(\%h);
}

sub populate
{
    my ($h) = @_;
    while (<DATA>)
    {
        my ($date, $type, $word) = split /\s+\|\s+/;
        chomp $word;
        unless (exists $h->{$date})
        {
            $h->{$date}{woody_words} = [];
            $h->{$date}{tinny_words} = [];
        }
        given ($type)
        {
            when ('woody') { push @{ $h->{$date}{woody_words} }, $word; }
            when ('tinny') { push @{ $h->{$date}{tinny_words} }, $word; }
            default        { warn "Datum '$word' of unknown type '$type'"; }
        }
    }
}

sub stringify
{
    my ($h) = @_;
    for (keys %$h)
    {
        $h->{$_}{woody_str} = join(', ', sort @{ $h->{$_}{woody_words} });
        $h->{$_}{tinny_str} = join(', ', sort @{ $h->{$_}{tinny_words} });
    }
}

sub print_view
{
    my ($h) = @_;
    my $max = 0;
    for (keys %$h)
    {
        my $woody_str  = $h->{$_}{woody_str};
        my $new_length = length $woody_str;
        $max = $new_length if defined $woody_str && $new_length > $max;
    }
    printf "    date     | %-*s | tinny words\n", $max, 'woody words';
    printf "-------------+-%s-+-------------\n", '-' x $max;
    printf "%s %-*s %s\n", $_, $max, $h->{$_}{woody_str}, $h->{$_}{tinny_str}
        for sort { cmp_dates() } keys %$h;
}

{
    my %months;
    BEGIN
    {
        %months = (Jan =>  1, Feb =>  2, Mar =>  3, Apr =>  4,
                   May =>  5, Jun =>  6, Jul =>  7, Aug =>  8,
                   Sep =>  9, Oct => 10, Nov => 11, Dec => 12);
    }
    sub cmp_dates
    {
        my ($day_a, $mon_a) = $a =~ /^(\d{1,2}) (\w{3})/;
        my ($day_b, $mon_b) = $b =~ /^(\d{1,2}) (\w{3})/;
        return ($months{$mon_a} < $months{$mon_b}) ? -1 :
               ($months{$mon_a} > $months{$mon_b}) ? +1 :
               ($day_a < $day_b)                   ? -1 :
               ($day_a > $day_b)                   ? +1 : 0;
    }
}

__DATA__
28 Sep (Fri) | woody | caribou
28 Sep (Fri) | tinny | litterbin
29 Sep (Sat) | woody | wasp
29 Sep (Sat) | woody | yowling
29 Sep (Sat) | woody | gorn
30 Sep (Sun) | woody | intercourse
30 Sep (Sun) | woody | bound
30 Sep (Sun) | woody | pert
30 Sep (Sun) | tinny | newspaper
01 Oct (Mon) | woody | ocelot
01 Oct (Mon) | woody | concubine
01 Oct (Mon) | tinny | antelope
02 Oct (Tue) | woody | vole
02 Oct (Tue) | woody | sausage
03 Oct (Wed) | tinny | recidivist
03 Oct (Wed) | tinny | tit
Output:
23:09 >perl 345_SoPW.pl
    date     | woody words              | tinny words
-------------+--------------------------+-------------
28 Sep (Fri) caribou                  litterbin
29 Sep (Sat) gorn, wasp, yowling
30 Sep (Sun) bound, intercourse, pert newspaper
01 Oct (Mon) concubine, ocelot        antelope
02 Oct (Tue) sausage, vole
03 Oct (Wed) recidivist, tit

23:10 >
Of course, the really interesting question is: How do you distinguish words which are ‘woody’ from those which are ‘tinny’? ;-)
Hope this helps,
Updates: Minor code improvements.
Athanasius <°(((>< contra mundum
Thank you, that helped a lot. I guess my problem was trying to force two state change points (date and type), which makes quite a bit of sense but apparently introduces too much complexity for me to deal with.
Right now the only thing in my CRNA app that flow complains about is my expo import…
import { SecureStore } from 'expo';
Just wondering if there is anything I should know about compatibility of versions.
In my package.json I have flow-bin ^0.66.0 and flow-typed ^2.3.0.
Environment:
OS: macOS High Sierra 10.13.3
Node: 9.11.1
Yarn: 1.5.1
npm: 5.6.0
Watchman: 4.9.0
Xcode: Xcode 9.2 Build version 9C40b
Android Studio: Not Found
Packages: (wanted => installed)
expo: 26.0.0 => 26.0.0
react: ^16.3.0 => 16.3.1
react-native: => 0.54.2
Diagnostics report: | https://forums.expo.io/t/using-flow-with-expo-sdk-v26/8648 | CC-MAIN-2019-09 | refinedweb | 110 | 74.05 |
It is not working on Vista
Posted by drop2kumar on 06/04/2007 01:40am
I have tried the same with Vista... but IShellExtInit is not working.
What's the install procedure?
Posted by Harmonitron on 08/30/2005 03:30pm
Do I just dump the DLL in the Windows folder or System32? Thanks from Ireland.
Sorry, error!!!
Posted by maxxx on 06/07/2005 08:33pm
I have VC++ 7, and my compiler reports: error C2787: 'IContextMenu' : no GUID has been associated with this object. Please help.
Re: 'IContextMenu' : no GUID has been associated
Posted by brianbondy101 on 06/13/2007 10:54am
Hi maxxx;
This response is for people who find this thread by searching for the same error.
Just define at the top of the stdafx.h file above your includes:
#define _ATL_NO_UUIDOF
Thanks,
Brian R. Bondy
Managed C++
Posted by Filbert Fox on 05/12/2005 05:11am
Have you thought about converting this to Managed C++? I am trying to write a Windows shell extension in C++ .NET and am having problems...
This is great!
Posted by Legacy on 12/10/2000 12:00am
Originally posted by: Lee Johnson
I've been hunting for such a simple and useful utility for several months and you solved the problem for me!
But could I add a shortcut key to this command? Or could you please just advise on how to do this?
copy files (all including dependent files)
Posted by Legacy on 01/22/2000 12:00am
Originally posted by: Md.Hidayathulla
Hello sir,
I would like to copy all files in a VC++ project, i.e. all source files, header files, resource files & external dependent files.
For example:
program1.cpp
#include <stdafx.h>
#include "..\test.h"
...
So when I copy program1.cpp to any target, I would like to copy all included files to the target.
So please help me out in doing this.
Waiting for your reply.
With regards,
Hidayath.
Nice, but MS PowerToys has a SendTo extension that does it.
Posted by Legacy on 01/20/2000 12:00am
Originally posted by: Shmulik Flint
Anyway, this is easier and nicer (and we have the code :-})
Cool util, but how do I register the shell ext?
Posted by Legacy on 01/16/2000 12:00am
Originally posted by: Shay Erlichmen
Can you attach a .reg file for registering the shell extension?
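Not the author, but for anyone else landing here: ATL-based shell extensions like this one normally export DllRegisterServer, so no hand-written .reg file is needed — registration from a command prompt is enough (the DLL name below is a placeholder for whatever the article's download is called):

```
regsvr32 CopyPathExt.dll
regsvr32 /u CopyPathExt.dll    :: to unregister
```

On NT-based systems you may also need the extension's CLSID listed under the "Approved" shell extensions key (HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Shell Extensions\Approved); the project's .rgs registration script may already handle that.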
A non-normative version of this document showing changes made since the previous draft is also available.
Copyright © 2005 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
VoiceXML 2.1 specifies a set of features commonly implemented by Voice Extensible Markup Language platforms. This specification is designed to be fully backwards-compatible with VoiceXML 2.0 [VXML2]. This specification describes only the set of additional features. [...] This document is the June 2005 W3C Candidate Recommendation of "VoiceXML Version 2.1". W3C publishes a technical report as a Candidate Recommendation to indicate that the document is believed to be stable and to encourage implementation by the developer community. Candidate Recommendation status is described in section 7.1.1 of the Process Document. Comments can be sent until 11 July 2005. This document has been produced as part of the Voice Browser Activity (activity statement), is for public review, and comments and discussion are welcomed on the (archived) public mailing list <www-voice@w3.org>.
This document is based upon the VoiceXML Version 2.1 Last Call Working Draft of 28 July 2004 and feedback received during the review period (see the Disposition of Comments document). The Voice Browser Working Group (member-only link) believes that this specification addresses its requirements and all Last Call issues.
In this document, the key words "must", "must not", "required", "shall", "shall not", "should", "should not", "recommended", "may", and "optional" are to be interpreted as described in [RFC2119] and indicate requirement levels for compliant VoiceXML implementations. The sections in the main body of this document are normative unless otherwise specified. The appendices in this document are informative unless otherwise indicated explicitly.
1 Introduction
1.1 Elements Introduced or Enhanced in VoiceXML 2.1
2 Referencing Grammars Dynamically
3 Referencing Scripts Dynamically
4 Using <mark> to Detect Barge-in During Prompt Playback
5 Using <data> to Fetch XML Without Requiring a Dialog Transition
5.1 <data> Fetching Properties
6 Concatenating Prompts Dynamically Using <foreach>
7 Recording User Utterances While Attempting Recognition
7.1 Specifying the Media Format of Utterance Recordings
8 Adding namelist to <disconnect>
9 Adding type to <transfer>
9.1 Consultation Transfer
9.2 Consultation Transfer Errors and Events
9.3 Example of a Consultation Transfer
A VoiceXML Document Type Definition
B VoiceXML Schema
C Conformance
C.1 Conforming VoiceXML 2.1 Document
C.2 Using VoiceXML with other namespaces
C.3 Conforming VoiceXML 2.1 Processors
D ECMAScript Language Binding for DOM
E References
E.1 Normative References
E.2 Other References
E.3 Acknowledgements
F Summary of changes since the Last Call Working Draft.
<?xml version="1.0" encoding="UTF-8"?> <vxml xmlns="" version="2.1">
As described in section 5.3.12 of [VXML2], the <script> element allows the specification of a block of ECMAScript. [...] The VoiceXML interpreter must expose the retrieved content via a read-only subset of the DOM as specified in Appendix D. If the content is received as "text/xml" but the content is not well-formed XML, the interpreter throws error.badfetch. [...] catch handler, the VoiceXML interpreter throws error.semantic.
Like the <submit> element, when an ECMAScript variable is submitted to the server its value is first converted into a string before being submitted. If the variable is an ECMAScript Object the mechanism by which it is submitted is not currently defined. If a <data> element's namelist contains a variable which references recorded audio but does not contain an enctype of multipart/form-data, the behavior is not specified. It is probably inappropriate to attempt to URL-encode large quantities of data.
In the examples that follow, the XML document fetched by the <data> element is in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<quote>
  [...]
</quote>
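A hedged sketch of fetching and reading such a document with <data> — the element name "last" and the src URI are assumptions for illustration, since the original example is garbled in this copy:

```xml
<data name="quote" src="quote.xml"/>

<block>
  <!-- quote is exposed as a read-only DOM Document;
       standard DOM traversal applies -->
  <prompt>
    The last trade was at
    <value expr="quote.documentElement
                      .getElementsByTagName('last')
                      .item(0).firstChild.data"/>
  </prompt>
</block>
```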
Attributes of <foreach> are:
Both "array" and "item" must be specified; otherwise, an error.badfetch event is thrown.
The <foreach> element can occur in executable content and as a child of <prompt>. [...] Only one of "bridge" or "type" may be specified; otherwise an error.badfetch event is thrown.
As specified in 2.3.7 of [VXML2], the <transfer> element is optional, though platforms should support it. Platforms that support <transfer> may support any combination of bridge, blind, or consultation transfer types. [...] compatibility [ATT_50075], [MCI_ECR], and [ETSI300_369]. The VoiceXML DTD is located at [...].
Due to DTD limitations, the VoiceXML DTD does not correctly express that the <metadata> element can contain elements from other XML namespaces.
This section is Normative.
The VoiceXML schema is located at.
The VoiceXML schema depends upon other schema defined in the VoiceXML namespace:
The complete set of Speech Interface Framework schema required for VoiceXML 2.1 is available here.
Note:In order to accomodate the addition of the nameexpr attribute, the definition of the <mark> element has been modified in synthesis-core.xsd: the name attribute is now optional.
This section is normative.
A conforming VoiceXML 2.1 document is a well-formed [XML] document that requires only the facilities described as mandatory in this specification and in [VXML2]. [...] If a processor encounters a non-conforming VoiceXML 2.0 or 2.1 document, its behavior is undefined.
There is, however, no conformance requirement with respect to performance characteristics of the VoiceXML 2.1 Processor.
Interpreters that support both VoiceXML 2.0 and VoiceXML 2.1 must support the ability to transition from an application of one version to an application of another version.
Note:The xsd:anyURI type and thus URI references in VoiceXML documents may contain a wide array of international characters. Implementers should reference [RFC 3987] and the [CHARMODEL] in order to provide appropriate support for these characters in VoiceXML documents and when processing values of this type or mapping them to URIs.
This appendix contains the ECMAScript binding for the subset of Level 2 of the Document Object Model exposed by the <data> element.
VoiceXML 2.1 was written by the participants in the W3C Voice Browser Working Group. The following have significantly contributed to writing this specification:
The following is a summary of the major changes since the Last Call Working Draft was published. | http://www.w3.org/TR/2005/CR-voicexml21-20050613/ | crawl-002 | refinedweb | 1,010 | 50.84 |
Asked by:
Microsoft.Office.Interop.Word
Question
- User-1542157572 posted
Hi all, I am working on the Microsoft.Office.Interop.Word COM component. I need to find the line number of found text. How do I go about doing this? Please help me.
Thursday, June 12, 2008 6:17 AM
All replies
- User-319574463 posted
Are you trying to use Word from an ASP.NET web page? It can be done (I have done it), but you will have great problems setting all the required permissions to get it to work. This is why Word is not supported running behind an ASP.NET web page.
Thursday, June 12, 2008 7:42 AM
- User-1542157572 posted
I am using a Windows application. I have the following requirements:
1. I need to find the text in the MS word document.
2. I need to get the line number of that text.
3. On the next line I need to write some text.
Thursday, June 12, 2008 11:54 PM
- User-319574463 posted
>I am using windows application.I have the following requirements
Are you using Winforms or Webforms?
Which version or versions of Word are you using?
Sunday, June 15, 2008 9:26 AM
- User-1542157572 posted
Hi, I am using WinForms with MS Office Word 2003, and I am referencing the Microsoft Word 11.0 Object Library. Please do help me in resolving this issue. It's very urgent.
Monday, June 16, 2008 5:42 AM
- User-319574463 posted
>Hi I am using Winforms
As you are using WinForms, you should be posting your question on the MSDN forums rather than here, which is for WebForms.
One of the problems you will probably face is that of COM+ activation permissions. To set these, go:
- Start, Run, enter MMC, OK
- File, Add/Remove Snap-in
- Click Add
- Select Component Services, Add, Close
- Click OK
- Save your MMC for later use.
When you get a COM error, note the object class and find it in the COM explorer. Once found, you can fix the permissions.
I urge you to do this using a virtual machine if at all possible: if you make a copy of the VM before you start, you can very easily roll back.
Monday, June 16, 2008 7:20 AM
- User-319574463 posted
You can start your investigation at:
June 16, 2008 7:27 AM
- User-319574463 posted
...or even at
Also look at - this has an example of a programmatic find
Monday, June 16, 2008 7:31 AM
- User-1542157572 posted
How do I learn about the object model of the Word document?
Monday, June 16, 2008 8:16 AM
- User-319574463 posted
>How do I know about the object model on the word document
You will need to explore the Microsoft Documentation.
Have you tried the example code from CodeProject?
Tuesday, June 17, 2008 7:28 AM
- User-1542157572 posted
string str = @"D:\Canarys\Project\doc1.doc";
object str1 = str;
object missing = Missing.Value;

Word.ApplicationClass wordApp = new Word.ApplicationClass();
Word.Document doc = wordApp.Documents.Open(ref str1, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing);

doc.Activate();
object str2 = "Microsoft.Office.Interop.Word";
//wordApp.Selection.Find.Execute(ref str2, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing);
//MessageBox.Show(Convert.ToString(wordApp.Selection.Find.Execute(ref str2, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing)));
//doc.Range(
//Word.Sections sec1 = doc.Sections;
//Word.Range rng = sec1.Range;
//Word.Section sec2 = doc.Sections.Add();
//Word.InlineShape ils = new Word.InlineShape();
//ils = doc.InlineShapes.AddPicture(@"C:\Documents and Settings\geethan\My Documents\My Pictures\HLD.JPG", true, true, rng);
//doc.Save();
object wdWrap = Microsoft.Office.Interop.Word.WdFindWrap.wdFindStop;
//object missing = System.Reflection.Missing.Value;
object replaceAll = Microsoft.Office.Interop.Word.WdReplace.wdReplaceAll;

storyRange = oRange.Duplicate;
do
{
    findRange.Find.Text = "Microsoft.Office.Interop.Word";
    //findRange.Find.Replacement.Text = "";
    findRange.Find.Wrap = (Word.WdFindWrap)wdWrap;
    findRange.Find.Execute(ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing);
    object unit = Word.WdUnits.wdWord; //.wdCharacter;
    object count = 1;
    object type = Word.WdBreakType.wdLineBreak;
    if (findRange.Find.Found == true)
    {
        findRange.Move(ref unit, ref count); //.MoveEnd(ref uni, ref count);
        findRange.GoToNext(Microsoft.Office.Interop.Word.WdGoToItem.wdGoToLine);
        object rng1 = findRange.Duplicate;
        findRange.InlineShapes.AddPicture(@"C:\Documents and Settings\geethan\My Documents\My Pictures\HLD.JPG", ref missing, ref missing, ref rng1);
    }
    storyRange = storyRange.NextStoryRange;
} while (storyRange != null);
- User-319574463 posted
Geetha, what do you need done with your code listing?
Wednesday, June 18, 2008 7:08 AM
- User-1542157572 posted
Hi, can you please tell me how to use this Word PIA in an ASP.NET web page? It's very urgent.
Tuesday, July 8, 2008 12:27 AM
- User-319574463 posted
Did you install the PIA according to the instructions?
Have you run Word interactively on the Server?
Tuesday, July 8, 2008 1:59 AM
- User-1542157572 posted
Yes, I downloaded oxppia.exe and ran it, and then ran register.bat; this installed Microsoft.Office.Interop.Word.dll in the GAC. But I am not able to reference it in the website project. Please do help me.
Tuesday, July 8, 2008 2:17 AM
- User-319574463 posted
First question - does Word run on that machine?
Second question - when you add the reference within the project, are you selecting the COM tab and waiting for it to populate?
I suggest that you put the Word document reading code into a separate class project. This will allow you to test it using a WinForms test harness and/or unit test it before calling it from your web project.
Tuesday, July 8, 2008 2:40 AM
- User-1542157572 posted
Yes, the Word application (2003) is running on this machine.
Yes, I am referencing the Microsoft Word 11.0 Object Library from the COM tab.
I created a new project (WinForms) and when I add the reference I get the Microsoft.Office.Core dll.
Tuesday, July 8, 2008 2:51 AM
- User-319574463 posted
What are the symptoms of it not working?
Have you checked the event logs? If there is an activation problem, it should be logged in the event log.
Tuesday, July 8, 2008 2:55 AM
- User-1542157572 posted
When I use the Browse tab and reference the Microsoft.Office.Interop.Word dll from the location where I extracted the oxppia file, the reference is added for the Windows application but not for the website.
Tuesday, July 8, 2008 3:03 AM
- User-1542157572 posted
Hi, I am able to reference the dll in my website too; my problem is resolved now. One more thing: do I need to always download this oxppia.exe and then register the PIAs, or will they be installed during Word installation? If so, how do I go about doing this? Please let me know, since on the server I just referenced the COM object directly without downloading and registering them.
Tuesday, July 8, 2008 5:34 AM
- User-319574463 posted
I do not recall the PIA being automatically installed when Word is installed. For each installation, you will need, at a minimum, to:
- Install Word
- Install the PIA
- Start up Word on the machine.
- Check for COM+ permission messages in the event logs.
Tuesday, July 8, 2008 7:30 AM
- User-1542157572 posted
Hi, thank you, I am finally able to reference the dll (Microsoft Word 11.0 Object Library). Now I get an exception in the code saying "the command is not available". I will send you the code; I don't understand this.
 1 protected void btnUpload_Click(object sender, EventArgs e)
 2 {
 3     if (FileUpload.HasFile)
 4     {
 5         if (FileUpload.PostedFile.FileName.EndsWith(".doc"))
 6         {
 7             string strFilename = FileUpload.PostedFile.FileName;
 8             object objFileName = strFilename;
 9             object missing = Missing.Value;
10             Word.ApplicationClass wordApp = new Word.ApplicationClass();
11             Word.Document doc = wordApp.Documents.Open(ref objFileName, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing);
12             doc.Activate();
13             object objWrap = Word.WdFindWrap.wdFindStop;
14             Word.Range storyRange;
15             foreach (Word.Range rng in doc.StoryRanges)
16             {
17                 storyRange = rng.Duplicate;
18                 do
19                 {
20                     Word.Range findRange = storyRange.Duplicate;
21                     if (findRange.Text != "")
22                     {
23                         findRange.Find.Text = "Yours truly,";
24                         findRange.Find.Wrap = (Word.WdFindWrap)objWrap;
25                         findRange.Find.ExecuteOld(ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing); // <-- here I get the exception
26                         if (findRange.Find.Found == true)
27                         {
28                             object objUnit = Word.WdUnits.wdWord;
29                             object count = 1;
30                             object rngAdd = findRange.Duplicate;
31                             findRange.Move(ref objUnit, ref count);
32                             findRange.GoToNext(Word.WdGoToItem.wdGoToLine);
33                             findRange.InlineShapes.AddPicture(@"C:\Documents and Settings\support\Desktop\canaryslogo.jpg", ref missing, ref missing, ref rngAdd);
34                         }
35                     }
36                     storyRange = findRange.NextStoryRange;
37                 } while (storyRange != null);
38             }
39         }
40     }
41 }
Tuesday, July 8, 2008 8:38 AM
- User-319574463 posted
Lines 8 through 38 need to be within a method in a class project
Tuesday, July 8, 2008 9:14 AM
- User-1542157572 posted
I get this error when I open a document which is protected. When I remove the protection it works fine. Is there any way to execute this without removing the protection? Or can I place this image in a place which is a macro (image)?
Wednesday, July 9, 2008 2:46 AM
- User-319574463 posted
By protected, do you mean the file is read-only?
Wednesday, July 9, 2008 2:54 AM
- User-1542157572 posted
No, not read-only; my document consists of form fields and macros (image). These form fields are enabled only when we protect the document with the "filling in forms" restriction. I now want to add an image to the image macro programmatically.
Wednesday, July 9, 2008 3:20 AM
- User-319574463 posted
If it is internal to the document, then you will need to further delve into the Word Object Model and either post a further question on this forum and/or post a question on one of the MSDN forums.
Wednesday, July 9, 2008 3:55 AM
- User-1542157572 posted
Hi, the above code which I have sent is inserting an image above the text. Can you please let me know how to insert it below the text?
Thursday, July 10, 2008 1:16 AM
- User-319574463 posted
Again you will need to delve into the Word object model and find a way to move the insertion point down one line.
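Not from the thread, but one common approach is an untested sketch like this against the Word 11 object model (findRange and missing are assumed to be the variables from your earlier listing, and the image path is a placeholder):

```csharp
// Collapse the found range to its end point, open a new paragraph
// below the found text, then anchor the picture there.
object collapseEnd = Word.WdCollapseDirection.wdCollapseEnd;
findRange.Collapse(ref collapseEnd);
findRange.InsertParagraphAfter();
object anchor = findRange.Duplicate;
findRange.InlineShapes.AddPicture(
    @"C:\path\to\image.jpg",   // placeholder path
    ref missing, ref missing, ref anchor);
```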
Thursday, July 10, 2008 1:56 AM | https://social.msdn.microsoft.com/Forums/en-US/8c6ace04-8209-4f27-9022-fbcd527d792e/microsoftofficeinteropword?forum=aspenterprise | CC-MAIN-2022-21 | refinedweb | 1,874 | 59.7 |
?
The Penguin Classics Library (Score:5, Insightful)
As far as educational works go, I'm all for the textbooks. Grade school & high school, of course. But what I'd really like to see is the "Canonical works" of each field. I'm talking about the standard books that are used to teach each major in the United States. They could do a survey of books and then attempt to contact the authors & publishers to work a deal. Some titles I've seen on everyone's shelves are, of course, the Donald Knuth [amazon.com] series and this list [amazon.com] has a lot of standards I recognize just by the covers.
The most important thing for them to do would be to pay lawyers and literature experts to scan the internet for potential authors willing to put out books for free. I've seen some classic computer science books go up like this and I'm sure that if Wikipedia asked for permission to host, they would be able to with mild restrictions — like the author having the final say on what is kept and removed from the Wiki page. I mean, look at O'Reilly's OpenBook Project [oreilly.com], don't you think they would allow Wikipedia to host that for a tiny one-time fee? I'd bet that sales would increase if they even put in a link to buy the book. I've heard a lot of authors argue for their books to be put online so that people will feel compelled to buy a hardcopy. Wasn't that the point of Google's textbook preview search?
Another group they could target with an open invitation: estates that own the rights to long-dead authors' works, to have those works published. Dr. Seuss, anyone? I mean, how do you license a loved one's works and continually soak up money for them? To me, the work of Disney in this respect is just plain rotten and has ruined some good guidelines for releasing works to the public domain.
I don't know, I just think that they should spend money over a period of time searching for permission to host books for free or nearly free. I have hope that this is done very very well and augments the OLPC project nicely.
Re:The Penguin Classics Library (Score:5, Interesting)
Re:The Penguin Classics Library (Score:5, Informative)
Re:The Penguin Classics Library (Score:5, Insightful)
Re:The Penguin Classics Library (Score:5, Informative)
No, they aren't. The texts of those works derived from manuscripts--in series like the Teubner texts or the Oxford Classical Texts--are often still under copyright, and many translations into English are still copyright. One is either dependent on Victorian-era stuff, or one has to translate the material himself (and distribute only the translation, since the text may be copyright).
Re:The Penguin Classics Library (Score:5, Informative)
Re:The Penguin Classics Library
Re:The Penguin Classics Library (Score:5, Informative)
The translations aren't. For out-of-copyright versions, you still have to go back to versions published a century ago, where the translations are uniformly full of "thou"s and "thee"s and written in bad verse more incomprehensible than the original languages. In fact even modern critical editions of the texts in their original languages are under copyright.
Re:The Penguin Classics Library (Score:5, Insightful)
Something else to go with these "books" would be high quality lectures by some of the best lecturers in respective field.
Free "books" and lectures would allow anyone anywhere, that just have access to the internet, to learn whatever he/she want.
(Another wish would be to "liberate" all papers ever written and put those on a nice website)
However..... if a copyright holder is made an offer for a given piece ($1,000, $10,000, whatever), a very straightforward commercial decision can be made, one free of copyright religion and politics: "Are the future returns on the copyright of this piece worth less than the offer?"
Someone who has a copyrighted item earning $12.50 per year might easily be swayed to release it into the public domain for $200. Almost *nothing* under copyright is actually earning any real money, and therefore may be liberated with a very modest purse.
Perhaps if there was a simple online process in place, individuals could search for their items of choice, pay up and free them.
Most people that have the cash and some inclination simply don't have the time. If those who have the time could make this process trivial, everyone could win.
Now please excuse me - I have to RTFA
Re: :The Penguin Classics Library (Score:5, Insightful)
There is an old rule of thumb that a classic has to be re-translated and re-introduced in every generation to remain inviting and accesible to the student and general reader. Preserving the original texts is a trival problem in comparison.
If you know Plato, Dante, Chaucer, Shakespeare only as assigned English reading you'll recognize the truth of this.
Dr. Suess, anyone? I mean, how do you license a loved one's works and continually soak up money for them? To me, the work of Disney in this respect is just plain rotten and ruined some good guidelines to release works to the public domain.
The truths about Disney that the Geek ignores is that the Disney archives remain intact and the Disney product remains accessible and to affordable. You want Bambi in pristine digital restoration? You'll find it at your corner drugstore selling for under $20.
Bambi was filmed in three-strip technicolor. The matte paintings on glass survive. The pencil tests survive. Steamboat Willie was distributed on unstable nitrate stock with synchronized sound on phonographic disks. Conservation costs money. Restoration costs money.
The skills required are rare and demanding.
But you don't need Big Daddy Warbucks to "rescue" Mickey Mouse. The Mouse is still on stage.
How about the original Mickey Mouse cartoon? (Score:5, Interesting)
Re: (Score:3, Insightful)
Book one. (Score:4, Interesting)
I wonder how many people might get drawn into reading sequels if the first book in a series or trilogy were made available for free?
Well (Score:5, Funny)
Use the money to generate new works (Score:5, Insightful)
What a waste! Buy an existing base. (Score:5, Interesting)
$100 million not enough for most popular textbooks (Score:5, Insightful)
3,860,567 = Number of 20 year olds (2000 census rough estimate based on 1/5th of 20-24 year olds)
27% = Percent of population over 25 with a bachelor's degree (2000 census)
25% = Percent of students taking the most popular/useful classes (estimate)
50% = Percent of these students using the most popular textbook (estimate)
5 = Years a textbook edition remains in print (estimate)
6% = Risk free rate of return (estimate)
$100 = Average textbook price (estimate)
20% = McGraw hill net margin (per)
The textbook company would sell 131,259 textbooks per year, for a net profit of $2,625,186 annually. Given the 5 year life span and 6% risk free rate, the textbook company would be willing to sell a textbook with the above expected sales for no less than $11 million. This means we could purchase roughly 9 of the most popular textbooks for $100 million. May be off by a fair margin, but it's clearly not going to be near 100 textbooks. Seems like there are much better uses of the money.
Depends on the Author I suppose (Score:5, Insightful)
I wouldn't be surprised if you could find academically minded authors who'd take a relatively small payoff and the feeling that they'd done good for the world.
Re:Depends on the Author I suppose (Score:5, Interesting)
Let's pay for something new.
I'm betting most academics don't earn much over $100,000 a year. Take the $100M and pay the thousand smartest people on the planet to each spend an entire year writing about everything and anything they feel is important for the future of humanity - with the stipulation that every word they write in that year goes immediately into the public domain.
Think of the qualitative improvement in Wikipedia if we added tens of thousands of new articles by the smartest people in their fields.
Re::5, Interesting)
Re:Use the money to generate new works (Score:5, Insightful)
How many people could actually make a working windmill, water wheel or atmospheric engine to kick start any sort of failed society?
How did we mine basic ores, make good charcoal and smelt them into metal?
How did our first carts and harnesses work?
How does one craft rock by hand?
What about the basics of farming? Most people in the west now live in cities and have no clue about food production.
This information needs recording permanently.
Re:Use the money to generate new works (Score:4, Insightful)
Re: (Score:3, Interesting)
Common misconceptions (Score:5, Interesting)
2) Do you know how long it took us to do it the first time? The big problem of building the world isn't the technology - the problem is the shear cost of it all. It took something like 15,000 years to go from good stone tools to steam ships. That also required an increase in population from around 20 million to around 1 billion.
3) If there were a "post-apocalypse," the cost minimization strategy wouldn't be about knowing about technology, but rather establishing institutions that would enable collective effort. Same reason Africa has modern technology, but the farmers can't afford steel hoes let alone GM crops and combine harvesters.
If half of the world died, we'd have big problems. But half the coal miners, and half the geneticists and nuclear physicists, and half the politicians would likely survive. The shear numbers of these "specialists" in as large a population as we have on Earth would make the proportion of survivors roughly equal to the proportion of survivors in the general population.
Additionally, if our national product was cut in half, we'd be living like they did in the 1984. If cut into a quarter, life would regress to 1962. If to one tenth, to 1940. If to one twentieth, 1915. If to 100th, to 1872. Assuming we get back to 1872 means (in general) 1% of our population, and 1% of our capital (assuming technology benefits and lack of new job experience cancel each other out).
The worst known disease outbreak (smallpox in the Americas) killed about 95% over several centuries. Nuclear warfare between superpowers *might* be able to accomplish the same, but I personally doubt it. If both happened simultaneously and instantaneously, we'd be back to 1839. The amount of destructive effort necessary to take us back to before the Industrial Revolution is mind-bogglingly huge. Getting back to the stone-age is nigh impossible.)
Text books of course (Score:5, Insightful)
How about one book per academic subject (Score:5, Insightful)
One book per academic subject.
One for each kind of math.
One for each kind of music.
One for each kind of computer science.
One for masonry, or automotive, or other trades.
and so on...
So, someone can go to the "tutorial" section of wikipedia and learn how to do whatever they would normally need textbooks or college to learn.
Granted that you could likely only reach an ametuer level this way most of the time, it would be a great starting point for a lot of people into business and hobby.
Re: (Score:3, Insightful)
Core concepts do not go out of date (Score:5, Insightful)
My core computer science texts date back more than ten years. They are still perfectly relevant today. Core subjects in computer science have not changed in ages. Data structures, operating systems, networking, relational databases all go back more than two decades. And they are just as, if not more, relevant today.
The key is to acquire texts on core concepts. These are things that should hold true forever. You would not want to waste money on Teach Yourself Java in 21 Days. For things like that, someone will write up a tutorial. Instead you would acquire works on the concepts of higher-level languages, virtual machines, design patterns, etc.)
james bond bad guy radar (Score:4, Interesting)
I had a hard time finding additional imagery after teraserver sold out. (to MS iirc?) I would like to have even been able to order it, but USGS charges a fortune for their quarter quads and you don't get the high resolution coordinates for each area on the map due to them not being photographed perfectly square. This is something that I would like to see opened up.
One thing to bear in mind unfortuantely is that this information goes stale. google maps is about 15 years out of date for half my city. So this would have to be renewed occasionally to stay of value..
Dictionaries (Score:5, Interesting)
Wikipedia could be a great platform to host dictionaries on. Every article/term should have an option to translate the term.
I know that the feature is half-way there already in the way that you can find the same article in a different language, but that doesn't work that great as a two way dictionary.
Buy a good base of dictionaries translating criscross between all (ok most of) the languages on wikipedia.
Re: (Score:3, Informative) [wiktionary.org]
Lawyers, bureaucrats, and lobbyists (Score:5, Interesting)
Re:Lawyers, bureaucrats, and lobbyists (Score:5, Interesting)
When "the public" pays me to referee papers by other astronomers, and "the public" pays the page charges for the papers I write ($110 per page, by the way), and "the public" pays the editors and typesetters of the journals, then "the public" might assert a right to those papers.
Just to forestall the inevitable responses, no, the federal government is not paying my salary, and no, it hasn't paid for the page charges of my most recent publications. The NSF and NASA do support a great deal of research in astronomy, of course, and grants from those agencies do pay for good fraction of the publications in this area.
On second thought, almost all recent work in astronomy and physics is freely available to public at the LANL preprint archive site [lanl.gov], so maybe this whole discussion is moot....
Re:Lawyers, bureaucrats, and lobbyists (Score:5, Insightful)
Strike 1. You don't understand how the refereed astronomical journals work. I pay THEM $110 per page so that they will publish my paper; they do not pay me.
Strike 2. RIT has a long history of teaching and has only recently -- in the past 5 or 7 years -- started heading in the direction of research. The school has a very detailed breakdown of income from tuition and expenses on items such as faculty salaries. Most of the money spent on my salary comes from tuition.
Would you care to try for a third statement illustrating your ignorance of this topic?)
This is a shame, really (Score:5, Insightful)
He was a big sponser of the Copyright Term Extension Act, DMCA, the patriot act II on steroids, FBI carnivore, extended wiretapping, and his office wanted to get the Claritin patent extended because he was using their jet when running for president.
Anything to get this IP black hole out of office will reap a 10x benifit in the future, and not just for better copyright law.
Once that is done, get a repeal of the bastard CTEA law (it won't happen while he is in the senate). In fact, bet on a CTEA II to come down the pike to protect that nasty rodent [wikipedia.org]
Happy Birthday (Score:5, Interesting)
Would be a nice touch to put that one into the public domain.
Cheers,
Ian
Re:Happy Birthday (Score:5, Interesting)
It's my son's first birthday on Tuesday and I'll be singing Happy Birthday to him. That's a copyrighted song, with royalties payable on public performance I believe.
Would be a nice touch to put that one into the public domain.
I completely disagree. There is no better spokesperson for the absurdity of our copyright laws than example, and this is the best example of absurdity that I can imagine.
When you tell someone they are infringing on copyright and have to pay royalties for singing Happy Birthday, they clue into the ridiculous laws that have been imposed on them. This awareness is the first step to creating momentum for reform.
The more absurd examples we can provide that the general public understands, the better armed activists are to achieve reform.]
Teaching English to access more content (Score:5, Insightful)
I understand why people are suggesting basic textbooks, but they're taking too much for granted.
Start by acquiring the best English skills courses so that these billions of third world kids will be able to understand first world content.
Giving a kid a laptop only gets them so far: they have to be able to understand what they're viewing. That's where the $100 mil could really leverage all of Wikipedia's existing content. Make it easy for these kids to learn English, no matter which language they're starting from.
Re:Teaching English to access more content (Score)
Classic Games (Score:5, Interesting)
classic "no-longer-for-sale" games should be handed over to the public domain.
The intellectual property for future projects and sequels should of course
remain in the hands of the copyright holder. It seems to me that this is a win/win
for publishers since the properties would gain a new lease on life.
Really, I just want to be able to download M.U.L.E., some Infocom titles
and Master of Orion (although I'm not sure I need another addiction in my life
right now).
the obvious (Score:5, Interesting)
Physics (Score:4, Interesting)
National {fire|electrical|building} codes (Score:5, Informative)
Buy JSTOR, WoS, allow annotating papers (Score:4, Informative)
Finnegan's Wake (Score:5, Interesting)
There is a drawback to this, though. James Joyce did not intend that the novel be understood. It was meant to model a dream -- albeit a boringly long one -- and if someone wakes you up every two seconds to tell you what something means, it's not as fun. Annotated, it's like reading Nabokov's version of Eugene Onegin, and if given the choice, I would not have that one wikified, with all due respect to that Lolita guy.
While the Wake wiki is good for comprehension and finally understanding what that huge word in the second paragraph was, the addition of technology makes it inferior to the original. Obviously, you can ignore the links, but in several other cases with e-books, reading a book is made more inconvenient by wikifying it. There is no real electronic substitute for "flipping through a book", and the simple format of a single finite page, as opposed to turtles all the way down. (Just check out an e-book: most of the time, the webpages are huge.)
Oh, and Gutenberg [gutenberg.org]? If anything, have Wikipedia partner with them, if the two are not in cahoots already. No use forming a needless schism in the world of free online e-books.! | http://slashdot.org/story/06/10/22/215238/wikipedias-100-million-dream | CC-MAIN-2015-11 | refinedweb | 3,257 | 61.87 |
What is the Relative Strength Index?
The Relative Strength Index (RSI) on a stock is a technical indicator.
The relative strength index (RSI) is a momentum indicator used in technical analysis that measures the magnitude of recent price changes to evaluate overbought or oversold conditions in the price of a stock or other asset.
A technical indicator is a mathematical calculation based on past prices and volumes of a stock. The RSI has a value between 0 and 100. It is said to be overbought if above 70, and oversold if below 30.
Step 1: How to calculate the RSI
To be quite honest, I found the description on investopedia.org a bit confusing. Therefore I went for the Wikipedia description of it. It is done is a couple of steps, so let us do the same.
- If previous price is lower than current price, then set the values.
- U = close_now – close_previous
- D = 0
- While if the previous price is higher than current price, then set the values
- U = 0
- D = close_previous – close_now
- Calculate the Smoothed or modified moving average (SMMA) or the exponential moving average (EMA) of D and U. To be aligned with the Yahoo! Finance, I have chosen to use the (EMA).
- Calculate the relative strength (RS)
- RS = EMA(U)/EMA(D)
- Then we end with the final calculation of the Relative Strength Index (RSI).
- RSI = 100 – (100 / (1 + RSI))
Notice that the U are the price difference if positive otherwise 0, while D is the absolute value of the the price difference if negative.
Step 2: Get a stock and calculate the RSI
We will use the Pandas-datareader to get some time series data of a stock. If you are new to using Pandas-datareader we advice you to read this tutorial.
In this tutorial we will use Twitter as an examples, which has the TWTR ticker. It you want to do it on some other stock, then you can look up the ticker on Yahoo! Finance here.
Then below we have the following calculations.
import pandas_datareader as pdr import datetime as dt ticker = pdr.get_data_yahoo("TWTR", dt.datetime(2020 print(ticker)
To have a naming that is close to the definition and also aligned with Python, we use up for U and down for D.
This results in the
This tutorial was written 2020-08-18, and comparing with the RSI for twitter on Yahoo! Finance.
As you can see in the lower left corner, the RSI for the same ending day was 62.50, which fits the calculated value. Further checks reveal that they also fit the values of Yahoo.
Step 3: Visualize the RSI with the daily stock price
We will use the matplotlib library to visualize the RSI with the stock price. In this tutorial we will have two rows of graphs by using the subplots function. The function returns an array of axis (along with a figure, which we will not use).
The axis can be parsed to the Pandas DataFrame plot function.
import pandas_datareader as pdr import datetime as dt import matplotlib.pyplot as plt ticker = pdr.get_data_yahoo("TWTR", dt.datetime(2019 ticker['RSI'] = 100 - (100/(1 + rs)) # Skip first 14 days to have real values ticker = ticker.iloc[14:] print(ticker) fig, (ax1, ax2) = plt.subplots(2) ax1.get_xaxis().set_visible(False) fig.suptitle('Twitter') ticker['Close'].plot(ax=ax1) ax1.set_ylabel('Price ($)') ticker['RSI'].plot(ax=ax2) ax2.set_ylim(0,100) ax2.axhline(30, color='r', linestyle='--') ax2.axhline(70, color='r', linestyle='--') ax2.set_ylabel('RSI') plt.show()
Also, we we remove the x-axis of the first graph (ax1). Adjust the y-axis of the second graph (ax2). Also, we have set two horizontal lines to indicate overbought and oversold at 70 and 30, respectively. Notice, that Yahoo! Finance use 80 and 20 as indicators by default. | https://www.learnpythonwithrune.org/pandas-calculate-the-relative-strength-index-rsi-on-a-stock/ | CC-MAIN-2021-25 | refinedweb | 640 | 65.83 |
Writing a script in plain Java is simple like what we have done in the previous post. But there is more we can do using WebDriver.
Say If you want to run multiple scripts at a time ,better reporting and you want to go for datadriven testing (Running the same script with multiple data) then plain Java script is not enough .
So it is recommended to use any of the existing frameworks like TestNG or JUnit. Select one of the framework and start using it.
In this post , I start with TestNG. If you are planning to use TestNG then you need to
- 1.Install TestNG Eclipse plug-in
- 2.Customize the output directory path
- 3.Start writing scripts
1.Install TestNG Eclipse plug-in
Detailed instruction on how to install TestNG plug-in can be found here
2.Customize the output directory path
This is not a mandatory step. But it is good to have it.Using this you can tell TestNG where to store all the output results files.
In Eclipse Goto Window>>Preferences>>TestNG>>
Set the Output Direcory location as per your wish and click on "Ok".
3.Start writing scripts
Writing scripts using TestNG is so simple. We will start with the editing of script we have written in the previous post .
Here is the edited script using TestNG.
package learning; import org.openqa.selenium.WebDriver; import org.openqa.selenium.firefox.FirefoxDriver; import org.testng.annotations.AfterTest; import org.testng.annotations.BeforeTest; import org.testng.annotations.Test; public class GoogleSearchTestNG { WebDriver driver; @BeforeTest public void start(){ driver = new FirefoxDriver(); } @Test public void Test(){ System.out.println("Loading Google search page"); driver.get(""); System.out.println("Google search page loaded fine"); } @AfterTest public void close(){ driver.quit(); } }
If you look at the above code the main chnages we have done is
1.Imported new TestNG files
start() -- Initialize the WebDriver
Test() -- Perform our exact requirement.
close() -- Close the Browser once
4.There are 3 different annotations named "BeforeTest" , "Test" , "AfterTest".@BeforeTest -- Keep this annotaion before a method which has to be called initially when you run the script. In our script ,we put it before start() method because we want to initialize the WebDriver first.
@Test -- Keep this annotation before a method which will do the exact operation of your script.
@AfterTest -- Keep this annotation before mehod which has to run at the end. In our script , closing browser has to be done at the end. So we put this annotation before close() method.
TestNG is so powerful . But now we stop here :) and experiment more on WebDriver functions. Once that is done we will come back to TestNG.
Going forward all the example scripts in this site will refer to TestNG.
nice Article.........
This post helped me a lot. Thanks!
this is very helpful for beginners to understand
great man ...helped a lot :) | http://www.mythoughts.co.in/2012/08/webdriver-selenium-2-part-4-working.html | CC-MAIN-2018-43 | refinedweb | 477 | 69.58 |
> town-1[1].0.4.rar > KeyDef.java
package com.workingdogs.town; import java.util.Vector; /* Town, a Java JDBC abstraction layer Copyright (C) 1999 Serge Knystautas, Jon S. Steven. */ /** A KeyDef is a way to define the key columns in a table. The KeyDef is generally used in conjunction with a TableDataSet. Essentially a KeyDef is what forms the WHERE clause for an UPDATE or DELETE.
In order to use the KeyDef, you simply use it like this:KeyDef kd = new KeyDef().addAttrib("key_column_a"); TableDataSet tds = new TableDataSet ( connection, "table", kd ); Record rec = tds.getRecord(0); rec.setValue("column_name", "new value" ); rec.save(); tds.close();In the above example, Record 0 is retrieved from the database table and the following update statement is generated:
UPDATE table SET column_name=? WHERE key_column_a=?
@author Jon S. Stevens jon@working-dogs.com @author Serge Knystautas sergek@lokitech.com @version 1.0 */ public class KeyDef { Vector data = new Vector (); public KeyDef() { super(); // KeyDef is 1 based. data.addElement (""); } /** Adds the named attribute to the KeyDef. @returns a copy of itself */ public KeyDef addAttrib(String name) { data.addElement (name); return this; } /** Determines if the KeyDef contains the requested Attribute. @returns true if the attribute has been defined. false otherwise. */ public boolean containsAttrib (String name) { return !(data.indexOf ((Object) name) == -1); } /** getAttrib is 1 based. Setting pos to 0 will attempt to return pos 1. @returns value of Attribute at pos as String. null if value is not found. */ public String getAttrib (int pos) { if (pos == 0) pos = 1; try { return (String) data.elementAt (pos); } catch (ArrayIndexOutOfBoundsException e) { return null; } } /** KeyDef's are 1 based, so this overrides the Vector.size() method and returns size - 1; @returns the number of elements in the KeyDef that were set by addAttrib() @see #addAttrib(java.lang.String) */ public int size() { return data.size() - 1; } } | http://read.pudn.com/downloads3/9542/town-1%5B1%5D.0.4/town-1.0.4/src/com/workingdogs/town/KeyDef.java__.htm | crawl-002 | refinedweb | 305 | 61.43 |
WSGetUTF8Function (C Function)
int WSGetUTF8Function( WSLINK l , const unsigned char ** s , int * v , int * n )
gets a function with a symbol as a head encoded in the UTF-8 encoding form from the WSTP connection specified by l, storing the name of the symbol in s, the length of the UTF-8 codes in v, and the number of arguments of the function in n.
Details
- WSGetUTF8Function() allocates memory for the character string corresponding to the name of the head of the function. You must call WSReleaseUTF8Symbol() to disown this memory. If WSGetUTF8Function() fails and the function's return value indicates an error, do not call WSReleaseUTF8Symbol() on the contents of s.
- Programs should not modify the contents of the character string s.
- WSGetUTF8Function(l, &s, &v, &n) has the same effect as WSGetNext(l); WSGetArgCount(l, &n); WSGetUTF8Symbol(l, &s, &v, &c), where c is the number of characters encoded in s.
- WSGetUTF8Function() returns 0 in the event of an error, and a nonzero value if the function succeeds.
- Use WSError() to retrieve the error code if WSGetUTF8Function() fails.
- WSGetUTF8Function() is declared in the WSTP header file wstp.h.
Examples
Basic Examples (1)
#include "wstp.h"
/* read a function from a link */
void f(WSLINK l)
{
unsigned char *s;
int length;
int n;
if(! WSGetUTF8Function(l, &s, &length, &n))
{ /* Unable to read the function from the link */ }
/* ... */
WSReleaseUTF8Symbol(l, s, length);
} | https://reference.wolfram.com/language/ref/c/WSGetUTF8Function.html | CC-MAIN-2019-35 | refinedweb | 232 | 54.32 |
2 Brick Breaker Game
Figure 1: From xkcd.com
Flash Tutorial:Brick Breaker Game 47
1. Setting up the Game
(a) Create a new Flash File (ActionScript 3.0). Ensure that the frame rate is set to 24 and change the stage colour to black.
(b) Save this newly created file.
2. Creating the Paddle
(a) Using the Rectangle Tool, draw the paddle and convert it into a MovieClip named mcPaddleC. A good paddle size is 130x10. It should be positioned in approximately the lower third of the stage.
(b) Next, give the paddle the instance name mcPaddle. Note that this is case sensitive, so it is important to use exactly the same cases as used for the name here. This same name will be used to control the paddle using ActionScript, so the name of the object on the stage and the name in the code must match.
(c) Now, some basic ActionScript is required. Create a new layer named "actions" where all of the code will go. Right click on the current frame in the actions layer and select "Actions" from the menu. In the text window that appears, type in this code:
function beginCode():void {
    // Adds a listener to the paddle which
    // runs a function every time a frame passes
    mcPaddle.addEventListener(Event.ENTER_FRAME, movePaddle);
}

function movePaddle(event:Event):void {
    // The paddle follows the mouse
    mcPaddle.x = mouseX;
}

beginCode();
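The addEventListener call above is the heart of this pattern: once the listener is registered, Flash calls movePaddle on every ENTER_FRAME event. The dispatch mechanism itself can be modeled outside Flash. The sketch below is plain JavaScript (not the Flash API) with invented names, just to show how registered listeners get invoked on each tick:

```javascript
// Minimal model of an event dispatcher: every callback registered for an
// event name is invoked whenever that event is dispatched.
function makeDispatcher() {
  const listeners = {}; // event name -> array of callbacks
  return {
    addEventListener(name, fn) {
      (listeners[name] = listeners[name] || []).push(fn);
    },
    dispatch(name, event) {
      (listeners[name] || []).forEach(fn => fn(event));
    }
  };
}

// A stand-in for the paddle clip and its per-frame handler.
const paddle = makeDispatcher();
let paddleX = 0;
paddle.addEventListener("enterFrame", e => { paddleX = e.mouseX; });

// Simulate three frames; each dispatch plays the role of ENTER_FRAME.
[100, 150, 210].forEach(x => paddle.dispatch("enterFrame", { mouseX: x }));
console.log(paddleX); // → 210
```

In Flash the dispatching is done for you at the movie's frame rate; registering the listener is all the tutorial code has to do.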
To close the ActionScript panel, double-click on the tab labeled "ACTIONS - FRAME" as indicated by the arrow in the figure below.
Notice some features of ActionScript:
• a comment in a line begins with // and continues until the end of the line;
• instructions end with the semi-colon character;
• mcPaddle is the same name used for the instance of mcPaddleC that is on the stage (this is essential).
It is also useful to know that the properties that can be modified for the mcPaddle object are the same as are shown in the properties panel. Any instance of the mcPaddleC class has the same property parameters but not necessarily the same values for them.
(d) Test the movie to see how the paddle moves with the mouse. There will be a few problems. These can be solved with a proper understanding of the stage coordinates (shown in Figure 2d).
(e) First of all, the paddle is not centered with the mouse, but is left aligned with it. To fix this, you must modify the line mcPaddle.x = mouseX;. You must determine how to change the code. Think about the width property of the paddle. The new code should make the middle of the paddle follow the mouse instead of the paddle's x value.
(f) Another problem with this code is that the paddle sometimes runs off stage, which is annoying to the user. Add the following code to the movePaddle() function (the one run on each ENTER_FRAME event).
// If the mouse goes off too far to the left
if (mouseX < mcPaddle.width/2) {
    // Keep the paddle on stage
    // INSERT CODE HERE
}
// If the mouse goes off too far to the right
if (mouseX > stage.stageWidth - mcPaddle.width/2) {
    // Keep the paddle on stage
    // INSERT CODE HERE
}
In place of // INSERT CODE HERE, insert the correct coordinate code. Remember to complete each instruction line with a semi-colon. You can use math equations and properties of the stage if you feel it is necessary. This code should keep your paddle in bounds regardless of how large the stage is or how wide the paddle is.
(g) After completing these steps, the code for the first actions layer frame should be as follows (with '***' replaced by the correct code as developed in the previous two steps).
function beginCode():void {
    // Adds a listener to the paddle which
    // runs a function every time a frame passes
    mcPaddle.addEventListener(Event.ENTER_FRAME, movePaddle);
}

function movePaddle(event:Event):void {
    // The paddle follows the mouse
    mcPaddle.x = ***
    // Keeping the paddle in the stage
    // If the mouse goes off too far to the left
    if (mouseX < mcPaddle.width/2) {
        // Keep the paddle on stage
        ***
    }
    // If the mouse goes off too far to the right
    if (mouseX > stage.stageWidth - mcPaddle.width/2) {
        // Keep the paddle on stage
        ***
    }
}

beginCode();
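The two if-statements in movePaddle implement what is usually called clamping: limiting a value to a closed interval. A generic version of the idea, in plain JavaScript with made-up bounds (deliberately not the answer to the '***' blanks), looks like this:

```javascript
// Clamp a value into the interval [lo, hi].
function clamp(value, lo, hi) {
  if (value < lo) return lo;
  if (value > hi) return hi;
  return value;
}

console.log(clamp(-5, 0, 100));  // → 0    (too far left: pinned to lo)
console.log(clamp(42, 0, 100));  // → 42   (in range: unchanged)
console.log(clamp(250, 0, 100)); // → 100  (too far right: pinned to hi)
```

For the paddle, lo and hi would be expressions involving mcPaddle.width and stage.stageWidth; working those out is exactly the exercise in steps (e) and (f).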
You might be wondering...
...why is all this code necessary to make cool graphics and animations?
ActionScript was not always such an integral part of Flash. It has recently become more central to the creation of Flash animations to provide more power and capabilities. To better understand the background of Flash, check out the first of Colin Moock's "Lost ActionScript Weekend" videos (available at http://tv.adobe.com/#pg+16245 — scroll down and watch the video entitled "Course 1 Introduction").
A more general history of Flash is documented at macromedia/events/john_gay/.
3. Programming the Ball
(a) The next step in creating a brick breaker game is making the ball. Make a small 10x10 pixel white circle, change it into a Movie Clip symbol named mcBallC, and give the ball an instance name of mcBall.
(b) Now, before actually creating the code for ball movement, two variables are required. They are the x "speed variable" and the y "speed variable". They are actually used to determine the number of pixels that the ball will move for each frame. Add the following code to the very beginning (line 1) of the code that was added to the actions frame for the paddle. To keep organized, it's good to keep all variables at the beginning of the code so that they are all in the same place. These values can be adjusted if you wish.
// These variables are needed for moving the ball
var ballXSpeed:Number = 8; // X Speed of the Ball
var ballYSpeed:Number = 8; // Y Speed of the Ball
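Because these values are applied once per frame, they are measured in pixels per frame, so the on-screen speed depends on the frame rate chosen in step 1. A quick sanity check in plain JavaScript (the numbers come from this tutorial's settings):

```javascript
const frameRate = 24;  // frames per second (set in step 1)
const ballXSpeed = 8;  // pixels per frame (set above)

// On-screen speed is pixels-per-frame times frames-per-second.
const pixelsPerSecond = ballXSpeed * frameRate;
console.log(pixelsPerSecond); // → 192

// After n frames, the ball has moved n * speed pixels along that axis.
const framesInTwoSeconds = 2 * frameRate;
console.log(framesInTwoSeconds * ballXSpeed); // → 384
```

This is why "These values can be adjusted if you wish": doubling ballXSpeed, or doubling the frame rate, both double how fast the ball appears to move.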
(c) To use these variables to make the ball move, add the following code to the beginCode() function:
// Adds a listener to the ball which
// runs a function every time a frame passes
mcBall.addEventListener(Event.ENTER_FRAME, moveBall);
Note that the ActionScript lines above tell the program to run the moveBall function every time the mcBall object enters a new frame (every 1/24th of a second according to the frame rate). To do this, a moveBall function must be written. Add this function by including the code below.
function moveBall(event:Event):void {
    mcBall.x += ballXSpeed;
    mcBall.y += ballYSpeed;
}
You might be wondering...
...how can a new frame be entered when there is only one frame on the timeline?
The concept of frames in Flash can appear confusing at first. Although only one frame is used for the entire game, the frame rate is 24 frames per second. Because no other frames have been created, the Flash movie stays at this frame, but the ENTER_FRAME event is raised every 1/24th of a second. At this point, even though the playhead in the timeline doesn't move, the frame is refreshed.
(d) When you test the movie, you should notice that the ball just moves diagonally without being stopped by anything. The next step is to make the ball bounce off the walls. All that is necessary is to multiply the x speed by -1 if it hits a vertical wall, and the same for the y speed with a horizontal wall. Add the following code to the moveBall() function above the lines currently in the function:
//Bouncing the ball off of the walls
if (mcBall.x >= ***) {
    //if the ball hits the right side
    //of the screen, then bounce off
    ballXSpeed *= -1;
}
if (mcBall.x <= ***) {
    //if the ball hits the left side
    //of the screen, then bounce off
    ballXSpeed *= -1;
}
if (mcBall.y >= ***) {
    //if the ball hits the bottom
    //then bounce up
    ballYSpeed *= -1;
}
if (mcBall.y <= ***) {
    //if the ball hits the top
    //then bounce down
    ballYSpeed *= -1;
}
Replace the four locations of '***' with the correct conditions that would inform the program that the ball has reached a side of the stage. The location of the mcBall is determined by its top-left corner and this must be taken into account when determining the location of the ball.
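As a hint for checking your conditions, here is one possible set, sketched in plain JavaScript so it can be run outside Flash (the logic maps one-to-one onto the ActionScript). The 550x400 stage size and the starting position are assumptions for the sketch, not values given by the tutorial:

```javascript
// Stand-ins for the Flash stage and mcBall (sizes assumed for the sketch).
const stage = { width: 550, height: 400 };
const mcBall = { x: 270, y: 195, width: 10, height: 10 };
let ballXSpeed = 8;
let ballYSpeed = 8;

function moveBall() {
  // (x, y) is the ball's TOP-LEFT corner, so the right and bottom
  // tests must subtract the ball's own width/height.
  if (mcBall.x >= stage.width - mcBall.width) ballXSpeed *= -1;   // right wall
  if (mcBall.x <= 0) ballXSpeed *= -1;                            // left wall
  if (mcBall.y >= stage.height - mcBall.height) ballYSpeed *= -1; // bottom
  if (mcBall.y <= 0) ballYSpeed *= -1;                            // top
  mcBall.x += ballXSpeed;
  mcBall.y += ballYSpeed;
}
```

Calling moveBall once per frame keeps the ball bouncing inside the stage indefinitely.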
(e) Now the ball will just keep on bouncing off the walls. The next step is to make the ball bounce off the paddle. To add some excitement to the game, the ball should not keep moving at the same angle the entire time. We're going to make it change depending on which part of the paddle it hits. Because this will require more calculation, we're going to make a new function called calcBallAngle(). First add this code to the beginning of the moveBall() function.
if (mcBall.hitTestObject(mcPaddle)) {
    calcBallAngle();
}
This runs the calcBallAngle() function whenever the ball hits the paddle.
(f) Below is the code for the calcBallAngle() function which must also be included in the ActionScript
for this frame.
function calcBallAngle():void {
    //ballPosition is the position of the ball on the paddle
    var ballPosition:Number = mcBall.x - mcPaddle.x;
    //hitPercent converts ballPosition into a percent
    //All the way to the left is -.5
    //All the way to the right is .5
    //The center is 0
    var hitPercent:Number =
        (ballPosition / (mcPaddle.width - mcBall.width)) - .5;
    //Gets the hitPercent and makes it a larger number so the
    //ball actually bounces
    ballXSpeed = hitPercent * 10;
    //Making the ball bounce back up
    ballYSpeed *= -1;
}
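To see what hitPercent actually does, it helps to run the formula on a few concrete numbers. The sketch below (plain JavaScript, with an assumed 100px paddle and 10px ball) shows the -.5 / 0 / .5 mapping described in the comments:

```javascript
// Same formula as in calcBallAngle, pulled out as a pure function.
function hitPercent(ballX, paddleX, paddleWidth, ballWidth) {
  const ballPosition = ballX - paddleX;
  return (ballPosition / (paddleWidth - ballWidth)) - 0.5;
}

console.log(hitPercent(0, 0, 100, 10));  // -0.5 → ballXSpeed = -5 (sharp left)
console.log(hitPercent(45, 0, 100, 10)); //  0   → ballXSpeed =  0 (straight up)
console.log(hitPercent(90, 0, 100, 10)); //  0.5 → ballXSpeed =  5 (sharp right)
```

Multiplying by 10, as the last lines of calcBallAngle do, turns that percentage into a usable horizontal speed.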
4. Placing the Bricks on the Stage
(a) The first step to this is actually making the brick MovieClip. A plain white rectangle with dimensions of 55x20 pixels will suffice. When converting this to the mcBrickC MovieClip symbol, press the Advanced button on the "Convert to Symbol" window and choose the "Export for ActionScript" option and then click OK. Exporting the brick will allow it to be added to the stage dynamically. A warning will appear but this is fine.
Now the brick should appear in the library. Finally, delete the brick from the stage (but not the library).
(b) Add a variable which will store the number of bricks to be placed on the stage to the beginning of the ActionScript (below the variable declarations for the ball speed). The code required to do this is var numBricks:Number = 7;.
(c) Also add the code for a placeBricks() function given below:
function placeBricks():void {
    //Loop places the bricks onto the stage
    for (var i:int = 0; i < numBricks; i++) {
        //creating a variable which holds the brick instance
        var brick:MovieClip = new mcBrickC();
        //setting the brick's coordinates
        var space:Number =
            (stage.stageWidth - numBricks * brick.width)
            / (numBricks + 1);
        brick.x = space + i * (brick.width + space);
        brick.y = 20;
        //add the brick to the stage
        addChild(brick);
    }
}
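The space arithmetic is worth checking by hand once. With the tutorial's numbers (seven 55px bricks; the default 550px Flash stage width is assumed here), the gap works out to 20.625px on every side of every brick:

```javascript
const stageWidth = 550; // assumed default Flash stage width
const numBricks = 7;
const brickWidth = 55;

// One equal gap before each brick plus one after the last: numBricks + 1 gaps.
const space = (stageWidth - numBricks * brickWidth) / (numBricks + 1);

function brickX(i) {
  return space + i * (brickWidth + space);
}

console.log(space);                  // 20.625
console.log(brickX(0));              // 20.625 — first brick starts one gap in
console.log(brickX(6) + brickWidth); // 529.375 — last brick ends one gap short of 550
```

So the loop spreads the bricks evenly, with identical spacing at both edges of the stage.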
(d) Add the line placeBricks(); to the beginning of the beginCode() function.
5. Breaking the Bricks
(a) Download the Brick.as file from ~adsett/downloads/Brick.as and save it in the same folder as your brick breaker.fla game file. This is the code required to handle breaking the bricks. It is provided to simplify the game development but it is not too complicated to understand. Open the file in Flash so you can see the code included. The code will appear in the same area that the stage is displayed for .fla files (not the Actions panel used for .fla file ActionScript).
(b) Switch back to the .fla game file and add the lines below to the beginning of the ActionScript. This allows the game to interact with the code in the Brick.as file.
import Brick;
(c) We also have to change our previous brick MovieClip, mcBrickC, to the class, Brick. This way, all of the code in Brick.as will be used in the Brick MovieClip. To do this, right click the mcBrickC MovieClip in your library and click on Properties. In the resulting pop-up window, change the class from "mcBrickC" to "Brick".
(d) In keeping with this modification, we now need to change the line in the placeBricks() function from:
var brick:MovieClip = new mcBrickC();
to:
var brick:Brick = new Brick();
(e) Test the movie to see that the bricks now “break” (vanish) when they are hit by the ball.
(f) Return to the Brick.as file. The next steps are provided to help you gain a better understanding of how ActionScript is being used to control the bricks.
(g) The beginning of the specific description of the Brick is on line 5 (public class Brick extends MovieClip {). This means that the Brick class has all the functionality of any MovieClip but with more specifics.
(h) A special function, called the constructor, begins on line 8 and is given the same name as the class (Brick). Whenever a new brick is created, this function is automatically run once. For example, in the placeBricks() function in the game code of the .fla file, the var brick:Brick = new Brick(); line creates a new brick and, at this instant, the Brick constructor function is run. This constructor sets up two event listeners, one that calls a function to run when the brick is added to the stage and another that runs a function whenever a new frame is entered.
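The pattern is easier to see stripped of the Flash APIs. In this plain-JavaScript sketch (the listener-registry array is invented for illustration; it is not the real Flash event system), constructing a brick is what wires up both listeners:

```javascript
// Minimal stand-in for addEventListener-style registration.
class Brick {
  constructor(events) {
    // Runs exactly once per `new Brick()`, like the Brick() constructor in Brick.as.
    events.push(['addedToStage', () => { /* beginClass work */ }]);
    events.push(['enterFrame', () => { /* per-frame hit testing */ }]);
  }
}

const events = [];
new Brick(events);
console.log(events.map(([name]) => name)); // ['addedToStage', 'enterFrame']
```

Every brick placed by placeBricks() therefore arrives on the stage already listening for both events.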
(i) The beginClass function, which spans lines 15 to 20, is the function that runs when the brick is added to the stage. The most important part of this function is the initialization of the _root variable. Understanding this is key to grasping a number of other commands in this code. Right click on the word MovieClip in this line and select "View Help". This should bring up a browser window open to the MovieClip entry in the Flash CS4 Professional ActionScript 3.0 Language Reference. In this article the MovieClip class is explained. Scroll down to the "Public Properties" section. The MovieClip(root) code is a request to obtain the value of the root property of the MovieClip. Despite this, notice that there is no root property listed amongst the public properties.
(j) The root property is missing because it is a property that is not unique to the MovieClip class. It has been inherited from another class. Select "Show Inherited Public Properties" immediately below the "Public Properties" section heading. Now scroll through the list to find the entry for root. The description states that the root is "the top-most display object". In this case, of the brick, the top-most display object is the stage.
The actions in the .fla file are also part of the stage. Therefore, variables in the game file ActionScript become properties of the stage and functions in the file become methods. The root property provides access to these properties and methods.
(k) Go to line 32 of the Brick.as file. This line (_root.ballYSpeed *= -1;) changes the ballYSpeed variable in the game file (in order for the ball to bounce off the brick it has hit). This is how a variable from the game file is accessed and modified.
(l) Using this knowledge, you should now notice that there are several other variables modified by using _root that do not actually exist in the .fla file code yet. These are brickAmt (lines 18 and 38) and gameOver (line 23). At this point, references in the code to these variables do nothing but they will be used to control winning and losing the game (the next part of this tutorial).
6. Winning and Losing the Game
(a) To beat the game (or a level) the number of bricks on the stage must be monitored. We know, from the Brick.as file, that we need a brickAmt variable. It can be included by adding the following line to the variable section (near the top) of the game code:
var brickAmt:int = 0;
(b) We also need to know when the game is over. To do this, we can use a boolean variable which is true when the game is over and false when it is not. To maintain consistency with the code in Brick.as, this must be called gameOver. Insert this line in the same location as the last:
var gameOver:Boolean = false;
(c) Now we can better understand how the Brick.as code uses these variables. In line 18, the value of brickAmt is incremented because the brick has been added to the stage. If the game is over, as detected by checking the value of the gameOver variable (line 23), the brick will be removed from the stage. Finally, if the ball hits the brick, the number of bricks is decremented (line 38) because there will be one less brick on the stage.
(d) When brickAmt reaches 0, we know that the player has finished (and won) the game. We can detect when this occurs by adding a listener that will check if the value of brickAmt is 0 whenever a new frame is entered. This can be inserted into the beginCode() function.
addEventListener(Event.ENTER_FRAME, checkBricks);
(e) The listener calls the checkBricks function when a new frame is entered. To define this function, place this ActionScript at the end of the code, but before the beginCode(); command.
function checkBricks(event:Event):void {
    if (brickAmt == 0) {
        gameOver = true;
        removeEventListener(Event.ENTER_FRAME, checkBricks);
        mcPaddle.removeEventListener(Event.ENTER_FRAME, movePaddle);
        mcBall.removeEventListener(Event.ENTER_FRAME, moveBall);
        gotoAndStop('win');
    }
}
(f) Note the command gotoAndStop('win'); in the above code. To gain an understanding of what this does, test the game and play it until all the bricks have been broken. Observe that an Output panel has opened in the same location as the Timeline panel. In this, an error will be listed. To read it, close the game. It should say:
ArgumentError: Error #2109: Frame label win not found in scene Scene 1.
...
From this, we can see that the program is looking for a frame with the label 'win'. Because there is no such frame in the game, an error occurs. It is called an argument error because the name 'win' caused the problem when it was used as an argument for the gotoAndStop function. Evidently, this function controls the game by moving to the frame with the specified label. We can also infer (based on the use of 'stop' in the function name) that this function will direct the game to stop at this frame (not to continue on to the next frame as it would by default). Therefore, it is unnecessary to include the line stop(); at the top of the code for this new frame.
(g) Now that we know what is needed, we can add it. Switch from the Output panel to the Timeline panel and insert a blank keyframe immediately after the existing frame in Layer 1. Change the name of this frame to 'win' in the Properties panel.
(h) Go back to the previous frame on the actions layer and add the line stop(); to the very top of the code. This will prevent the timeline from automatically progressing to the win frame.
(i) On the new frame, add some text to communicate to the player that they have won the game and that they may click to play again.
(j) If the user is to be able to click to play the game again, there must be ActionScript to permit this. Add a new blank keyframe to the actions layer as well. In this frame add the following code:
stage.addEventListener(MouseEvent.CLICK, resetGame);

function resetGame(event:MouseEvent):void {
    stage.removeEventListener(MouseEvent.CLICK, resetGame);
    gotoAndPlay(1);
}
The gotoAndPlay(1); command moves the game back to the first frame where the game is played.
(k) Now that winning the game is possible, it must also be possible to lose. The steps to implement this functionality are not complicated. Several things are necessary:
i. add a variable to keep track of the number of lives;
ii. decrement this variable each time the ball hits the bottom of the stage;
iii. move to the 'lose' frame when the number of lives is 0;
iv. add a 'lose' frame;
v. add code to the 'lose' frame to allow the user to click to play the game again (note that the resetGame function has already been implemented in the code for the 'win' frame and will therefore not need to be implemented again).
Using the knowledge acquired in the previous steps, add the capability to lose to the game.
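If you want to check your plan before writing the ActionScript, the core of steps i-iii can be sketched in plain JavaScript (the starting number of lives is an assumption; pick whatever suits your game):

```javascript
let lives = 3;          // step i: track remaining lives (3 is an assumed value)
let gameOver = false;

// Conceptually, call this where the ball would otherwise bounce off the bottom.
function ballHitBottom() {
  lives -= 1;           // step ii: one life gone per bottom hit
  if (lives === 0) {    // step iii: out of lives
    gameOver = true;
    // in the real game: gotoAndStop('lose') and remove the listeners,
    // mirroring what checkBricks() does for the 'win' frame
  }
}
```

Steps iv and v then follow the same keyframe-and-label procedure used for the 'win' frame.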
7. Finishing Touches
(a) To start the game only when the user first clicks on the screen, add a listener for a mouse click by replacing the line beginCode(); with:
stage.addEventListener(MouseEvent.CLICK, beginCode);
(b) In keeping with this, we have to change the beginCode() function itself so it will accept a mouse event. To do this, change the beginCode function definition to:
function beginCode(event:MouseEvent):void {
    stage.removeEventListener(MouseEvent.CLICK, beginCode);
    [..Code..]
}
If you test the movie, however, it looks a bit weird. The bricks appear only after clicking. This can be easily fixed. Just take the placeBricks() out of the beginCode() function and put it at the bottom of the code.
(c) Next, the player needs to know that they have to click the screen to start. To do this, add a text box to the middle of the stage. Give it an instance name of txtStart, and make it a dynamic text box by selecting this option from the drop down box below the instance name field in the Properties panel.
(d) To include the clicking instructions when necessary, add this code at the end of the frame.
txtStart.text = "Click To Begin";
and, to clear this text once the game begins, add the following code to the beginCode() function:
txtStart.text = "";
You might be wondering...
...now that I've worked with the basics of Flash, what else is out there and how does it compare?
Microsoft provides Silverlight which is more recent than Flash but has the same purpose. The online Smashing Magazine provides a good comparison of the two at flash-vs-silverlight-what-suits-your-needs-best/.
A unified JavaScript layer for Apache Cordova projects.
cordova-js
|-build/
|  Will contain any build modules (currently nothing here as it is all
|  hacked into the JakeFile)
|
|-lib
   |-cordova.js
   |  Common Cordova stuff such as callback handling and
   |  window/document add/removeEventListener hijacking
   |
   |-common/
   |  Contains the common-across-platforms base modules
   |
   |-common/builder.js
   |  Injects in our classes onto window and navigator (or wherever else
   |  is needed)
   |
   |-common/channel.js
   |  A pub/sub implementation to handle custom framework events
   |
   |-common/common.js
   |  Common locations to add Cordova objects to browser globals.
   |
   |-common/exec.js
   |  Stub for platform's specific version of exec.js
   |
   |-common/platform.js
   |  Stub for platform's specific version of platform.js
   |
   |-common/utils.js
   |  General purpose JS utility stuff: closures, uuids, object
   |  cloning, extending prototypes
   |
   |-common/plugin
   |  Contains the common-across-platforms plugin modules
   |
   |-scripts/
   |  Contains non-module JavaScript source that gets added to the
   |  resulting cordova.<platform>.js files
   |
   |-scripts/bootstrap.js
   |  Code to bootstrap the Cordova platform, inject APIs and fire events
   |
   |-scripts/require.js
   |  Our own module definition and require implementation.
   |
   |-<platform>/
   |  Contains the platform-specific base modules.
   |
   |-<platform>/plugin/<platform>
   |  Contains the platform-specific plugin modules.
The way the resulting cordova.<platform>.js files will be built is by combining the scripts in the lib/scripts directory with modules from the lib/common and lib/<platform> directories. For cases where there is the same named module in lib/common and lib/<platform>/plugin/<platform>, the lib/<platform> version wins. For instance, every lib/<platform> includes an exec.js, and there is also a version in lib/common, so the lib/<platform> version will always be used. In fact, the lib/common one will throw errors, so if you build a new platform and forget exec.js, the resulting cordova.<platform>.js file will also throw errors.
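The "platform wins" rule is essentially a map merge. A toy illustration (the module names are real paths from the tree above, but the string values are made up):

```javascript
// Common modules, including the deliberately-broken exec stub.
const commonModules = {
  'cordova/exec': 'common stub that throws',
  'cordova/utils': 'shared utils',
};

// A platform supplies its own exec; anything it defines shadows common.
const androidModules = {
  'cordova/exec': 'android exec bridge',
};

// Later keys win, so spreading the platform map second applies the override.
const merged = { ...commonModules, ...androidModules };
console.log(merged['cordova/exec']);  // 'android exec bridge'
console.log(merged['cordova/utils']); // 'shared utils' (no override, common survives)
```

If the platform forgets to supply its own module, the throwing common stub is what ships — which is exactly the failure mode described above.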
Then from the repository root run:

    grunt

This will run the build, hint and test tasks by default. All of the available tasks are:

- build: creates platform versions of cordova-js and builds them into the pkg/ directory
- test: runs all of the unit tests inside node
- btest: creates a server so you can run the tests inside a browser
- clean: cleans out the pkg/ directory
- hint: runs all of the script files through JSHint
- fixwhitespace: converts all tabs to four spaces, removes carriage returns and cuts out trailing whitespace within the script files
npm install: using node v0.6.6 works, though.

On Windows, when you run npm install, you may get errors regarding contextify. This is necessary for running the tests. Make sure you are running node v0.6.15 at the least (and npm v1.1.16, which should come bundled with node 0.6.15). Also, install Python 2.7.x and Visual C++ 2010 Express. When that is done, run npm install again and it should build contextify natively on Windows.
The build relies on lib/common/channel.js, which is a publish/subscribe implementation that the project uses for event management.
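For readers who have not met the pattern, a publish/subscribe channel can be tiny. This is an illustrative sketch only — not the real channel.js API, which does considerably more:

```javascript
function makeChannel() {
  const handlers = [];
  return {
    subscribe(fn) { handlers.push(fn); },          // register interest
    fire(...args) { handlers.forEach((fn) => fn(...args)); }, // notify everyone
  };
}

const onNativeReady = makeChannel();
onNativeReady.subscribe(() => console.log('native is ready, boot the framework'));
onNativeReady.fire(); // prints the message once
```

Decoupling publishers from subscribers like this is what lets native code signal readiness without knowing which JS modules are listening.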
The Cordova native-to-webview bridge is initialized in lib/scripts/bootstrap.js. This file attaches the boot function to the channel.onNativeReady event - fired by native with a call to:

    cordova.require('cordova/channel').onNativeReady.fire()
The boot method does all the work. First, it grabs the common platform definition (under lib/common/common.js) and injects all of the objects defined there onto window and other global namespaces. Next, it grabs all of the platform-specific object definitions (as defined under lib/
To run them in the browser:

    grunt btest

Final testing should always be done with the Mobile Spec test application. Build the .js file and drop it in as a replacement for cordova.js.
You should probably add a packager.bundle('<platform>') call to the Jakefile under the build task.
build task.! | https://apache.googlesource.com/cordova-js/+/835f00ec9b630b79dfeac1a455c43bb80dce30f6 | CC-MAIN-2022-40 | refinedweb | 637 | 50.94 |
This post was originally written by Louis Lazaris for CodeinWP
If you’re like many web developers in the industry, you probably discover new front-end tools every day. I’m in the same boat, especially since I’m deeply involved in regularly researching what’s new in the tools landscape.
In this post, I’m going to round up (with some screenshots and demos) some of the most interesting front-end tools I’ve found that I think you’ll find useful. These aren’t necessarily the most popular tools or the hottest tools, but I think each of them is unique in their use case and deserve a little more attention. These are essentially my favorite finds from the past months in front-end tools.
Hotkey
Detecting keystrokes with JavaScript isn’t an overly complex task, but this little utility from the team at GitHub makes it super simple.
With it you can trigger an action on an element with a keyboard shortcut.
The types of shortcuts include a key, key combo, or even key sequence. You can also have multiple shortcuts for a single action.
The JavaScript is just one declaration along with an import:
import {install} from './hotkey.js';

for (const el of document.querySelectorAll('[data-hotkey]')) {
  install(el);
}
Once that code is in place, the main work is done in the HTML. Here’s a list of links that I created to display some content depending on the shortcut used:
<ul>
  <li><a href="#a" data-hotkey="a">Example</a></li>
  <li><a href="#b" data-hotkey="b">Example</a></li>
  <li><a href="#f" data-hotkey="f">Example</a></li>
  <li><a href="#and" data-hotkey="Control+a">Example</a></li>
  <li><a href="#enter" data-hotkey="Enter">Example</a></li>
</ul>
Notice the
data-hotkey attributes added to each of the links. These are what enable the hotkeys for the targeted actions (in this case, triggering a :target selector via CSS). Multiple hotkeys are separated by a comma; key combinations are separated by a plus symbol; and key sequences are separated by a space.
Here’s a live demo:
Try out each of the shortcuts and notice that the code in the JavaScript panel is minimal. Very simple to set up, once the module is imported. And as a side point here, if you have an app with multiple shortcut keys that you want to display in a modal window (as is done on Twitter, GitHub, etc.), you might want to check out QuestionMark.js, an old project of mine.
Of course, with keyboard shortcuts, you’ll want to take note of accessibility concerns so be sure to check out the repo’s README for info on that.
Freezeframe.js
Embedding brief videos in web pages is common to show an action taking place. Sometimes an animated GIF is also appropriate. But GIFs tend to be distracting because they play their content automatically.
This little utility allows you to add video-like functionality to animated GIFs embedded in your HTML.
Once you include the Freezeframe.js source in your page, you need only a single JavaScript declaration:
new Freezeframe('.freezeframe', {
  trigger: 'hover',
  overlay: false
});
If you drop the second argument (e.g.
new Freezeframe('.freezeframe')) it will default to no play button and the animation triggers on hover. The only flaw with this is that, because it’s an animated GIF, you technically can’t “pause” it, you can only “stop” it (which means it starts again from the beginning). But usually with GIFs, this isn’t a big deal.
Here’s a demo with three different examples:
Using this tool alone, however, might not save on performance as it seems the full GIF loads behind the scenes. But I’m assuming this could be used along with a lazy load library if the GIF is off screen when the page loads.
ARC Toolkit
Your go-to front-end tools should include plenty of accessibility options.
This is a Chrome extension that adds a tab to your developer tools to help you find accessibility errors and warnings related to the WCAG 2.1 Level A and AA guidelines.
Two reasons why this tool is so great:
- It integrates with your existing testing/debugging workflow inside the developer tools
- It’s made by the The Paciello Group, who are well known in the developer community for their accessibility insights
Once the extension is installed, just choose the tab in your developer tools and select “Run Tests”. The initial output will be similar to what you see in the previous screenshot. From there you can drill down to view any potential accessibility problems related to a specific feature, as shown in the next screenshot:
Notice the “Links” option on the left has the checkmark next to it. That’s what I’ve chosen to examine in this instance. This also adds an overlay on the page showing where all the selected objects are, as you can see above the developer tools on the live page.
Scene.js
Every year there seems to be a new animation library of sorts on the front-end tools landscape.
My pick for this year (so far) is Scene.js.
This is not one you can just pick up and work with in a matter of minutes like the others featured so far.
There’s a learning curve to get used to the API, which looks something like this:
let scene = new Scene({
  ".searchbox": {
    "0%": "width: 50px",
    "70%": "width: 300px",
  },
  ".line": {
    "30%": "width: 0%",
    "100%": "width: 100%",
  }
}, {
  duration: 1,
  easing: Scene.EASE_IN_OUT,
  selector: true,
}).exportCSS();

scene.setTime(0);

let toggle = false;

document.querySelector(".submit").addEventListener("click", function() {
  toggle = !toggle;
  scene.setDirection(toggle ? "normal" : "reverse");
  scene.play();
});
That’s the code for one of the examples on the home page. It’s a simple little animated search box. Here’s their CodePen demo:
Again, this won’t be an easy tool to learn quickly, but if you’re interested in trying out a new animation library with what seems to be a pretty straightforward API, this might be a good option.
Commento
The current privacy-aware online landscape could use more tools like this one. I’ve been considering options for improved commenting systems on my WordPress website for a while now and Commento looks solid.
I like the functionality of something like Disqus (upvotes/downvotes, top comments, etc.) but it has too much bloat.
I also like that WordPress comments are self-hosted by default, but they lack those extra features of Disqus. I think Commento is a step in the right direction to fix these problems.
If you are considering switching from an existing commenting platform to Commento, it is quite a bit of work from what I’ve read, so that’s a big downside.
Also, although Commento allows you to import from Disqus, you won’t be able to import the “votes” on old comments from Disqus or the avatars from the users who posted comments.
There’s also no way to import old WordPress comments into Commento unless you first export to Disqus, then import from Disqus to Commento (which can be done using a Disqus import tool when you sign up for Commento).
The final drawback is the fact that Commento is not free unless you self-host it. But when you consider the bloat and privacy issues of Disqus, the small monthly fee is worthwhile.
Git History
Although this is not solely in the front-end tools category, it’s one of my favorites on this list because of its simplicity and novelty in the way it works.
Git History allows you to view the history for any file in a public Git repo (GitHub, GitLab, or Bitbucket).
For example, let’s say you want to view the history of changes to the source file for Normalize.css. The file is located at:
In order to view its history, replace github.com in the URL with github.githistory.xyz:
The output at the new URL loads up a neat, interactive way to view the file’s changes over time. Some cool animations are triggered every time you choose a history point, allowing you to see which changes took place and which user committed them.
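The rewrite is mechanical enough to script — here as a one-line helper, using the Normalize.css source file as the example URL:

```javascript
function gitHistoryUrl(fileUrl) {
  // Swap the host, keep the rest of the path untouched.
  return fileUrl.replace('github.com', 'github.githistory.xyz');
}

console.log(gitHistoryUrl(
  'https://github.com/necolas/normalize.css/blob/master/normalize.css'
));
// → https://github.githistory.xyz/necolas/normalize.css/blob/master/normalize.css
```

Handy as a bookmarklet or editor snippet if you look up file histories often.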
CSS Feature Toggles
If you’re still working in an environment where you have to do some legacy browser testing, this might be a nice little Chrome extension to add to your testing toolbox.
CSS Feature Toggles, similar to ARC Toolkit mentioned above, adds a new tab to your browser’s developer tools.
In the tab, you’ll notice a list of modern CSS features.
You can toggle these to instantly see how your page looks when a user visits the page in a browser that doesn’t support that particular feature. This is a great way to get a quick overview of how your layouts degrade in older environments.
When selecting the different features, the page will update automatically to display the changes. A site built with Flexbox, for example, will benefit from some older CSS to keep the layout sane while progressively enhancing in newer browsers.
Create App
No doubt your front-end tools workflow includes plenty of options for builds. This website is a combination of a learning site and a project generation tool for developers using (or wanting to learn how to use) webpack or Parcel, the popular asset bundlers.
Drill down into the categories on the left to choose the options you want for your build, then see the necessary files and configuration options appear in the main window.
The page is fully interactive, so you can click on any of the virtual files to view their contents, or you can hover over a selected option to view a description along with highlighted portions of the build that are relevant to that option.
Very useful both for learning and for creating new projects!
CSSJanus
In the area of internationalization, this is an online tool that allows you to convert stylesheets from left-to-right to right-to-left, and vice-versa.
This allows you to easily create stylesheets for right-to-left (rtl) languages like Arabic and Hebrew.
Here’s a CSS example:
.example {
  float: left;
  text-align: left;
  padding: 1px 2px 3px 4px;
  margin-left: 1em;
  background-position: 5% 100px;
  cursor: ne-resize;
  border-radius: 1px 2px;
}
The above will get converted to the following:
.example {
  float: right;
  text-align: right;
  padding: 1px 4px 3px 2px;
  margin-right: 1em;
  background-position: 95% 100px;
  cursor: nw-resize;
  border-radius: 2px 1px;
}
Notice that the differences include not only lines like float: left and text-align: left but others like horizontal padding declarations and background-position values.
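The four-value shorthand flip, for instance, is just a swap of the second and fourth values — a toy version of one rule CSSJanus automates:

```javascript
// Flip a 4-value shorthand (top right bottom left → top left bottom right).
function flipFourValueShorthand(value) {
  const parts = value.trim().split(/\s+/);
  if (parts.length !== 4) return value; // only handle the 4-value form here
  const [top, right, bottom, left] = parts;
  return [top, left, bottom, right].join(' ');
}

console.log(flipFourValueShorthand('1px 2px 3px 4px')); // → 1px 4px 3px 2px
```

The real tool handles many more cases (percentages, cursors, border radii), which is why hand-flipping stylesheets rarely pays off.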
And usefully, if you want the tool to ignore a style block or a single declaration, you can use the @noflip directive:

/* @noflip */
.ignored {
  float: left;
}
.not-ignored {
  float: left;
  /* @noflip */ background: #fff url(poster-ltr.png);
}
Color Thief
Color Thief is really neat and fairly simple to use but is very specific in its use cases.
Basically, using this utility, you can use JavaScript to grab a color palette of anywhere from 2 to 20 colors based on a given image.
This isn’t something you’ll use on every website or app, but it’s a nice idea and apparently has been around for a while and was updated over the past year.
Using the simple API, you can grab a palette from the image with a single line:
let myPalette = colorThief.getPalette(img, 10);
From there, it’s just a matter of manipulating the array that’s returned. You can see a demo I built in CodePen below that grabs a user-entered number of colors from the image shown. The code I’m using on the array is:
myPalette.forEach(element =>
  colors.innerHTML +=
    "<div class='color' style='background-color: rgb(" + element + ")'></div>"
);
I’m building the palette using
<div> elements and inline styles. The colors are returned as RGB values.
In the CodePen demo, I’m using a workaround to get around the cross-origin problems I ran into on CodePen, but normally you won’t need those lines (commented) in a customary environment.
RegexGuide
It seems like every year I find a cool interactive app to add to my collection of front-end tools that helps build regular expressions, so here’s this year’s entry. And if you’re like me, you’ll take all the help you can get building these.
This one is a little odd to get your head around at first because it goes through the steps one by one, like a wizard.
When you’re done and have all conditions in place, you’re able to try different values to meet the specified conditions and the page will interactively indicate what works.
These kinds of tools are always some of my favorites because they work not only as a way to create code that would otherwise be tedious, but they help you learn the syntax too.
Front-end tools: honorable mentions
So those are, in my opinion, some of the more interesting front-end tools I’ve found that I think didn’t get enough attention over the past year. I’m sure you have your own such finds so feel free to drop them in the comments below. Meantime, here’s a final list of stuff that didn’t quite make the main list but I thought were worth mentioning:
- wehatecaptchas - A captcha alternative with no image or letter/number deciphering, not even a checkbox to “confirm I’m not a robot”
- simpleParallax – An easy way to do parallax effects with JavaScript.
- Lite YouTube Embed – Apparently 224X faster than the traditional embed code.
- Browser Default Styles – Enter any HTML element and this tool will tell you each browser’s default CSS for that element.
- Who Can Use – Enter a two-color combination and this tool will tell you which kinds of visually impaired users can use that combo for text/background.
Top comments (2)
Awesome list 👍 thanks :)
Awesome list | https://dev.to/codeinwp/15-front-end-tools-you-should-know-about-my-favorite-finds-for-2020-554e | CC-MAIN-2022-40 | refinedweb | 2,340 | 60.75 |
Details
- Type:
Improvement
- Status: Closed
- Priority:
Minor
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: 2.3.0-beta-1
-
- Labels:None
Description
Currently to create json from a list of objects we have to do something like this:
def songs = [new Song (title: "Title 1"), new Song (title: "Title 2"), ...] JsonBuilder json = new JsonBuilder () def map = json { songs songs.collect { Song s -> json { title s.title } } }
I would like to write:
JsonBuilder json = new JsonBuilder () def map = json { songs songs, { Song s -> title s.title } }
It is less code and easier to use and read.
Implementation with tests and doc update is here: | https://issues.apache.org/jira/browse/GROOVY-6159 | CC-MAIN-2019-04 | refinedweb | 105 | 66.13 |
Java Reference
In-Depth Information
p:
<f:validator
</h:inputText>
<h:message
<h:panelGroup/>
<h:commandButton
</h:panelGrid>
</h:form>
</h:body>
</html>
As we can see in this example, we need to add the xmlns:jsf= ".
jcp.org/jsf/passthrough" namespace to our JSF page in order to use pass-
through attributes. We can then use any arbitrary attributes with our JSF-specific tags
by simply prefixing it with the prefix we defined for the namespace (in our case, p ).
Summary
In this chapter, we saw how NetBeans can help us easily create new JSF projects by
automatically adding all the required libraries.
We saw how we can quickly create JSF pages by taking advantage of NetBeans' code
completion feature. Additionally, we saw how we can significantly save time and
effort by allowing NetBeans to generate JSF 2 templates, including the necessary
CSS to easily create fairly elegant pages. We also saw how NetBeans can help us
develop JSF 2 custom components.
We also covered some new JSF 2.2 features such as resource library contracts,
which allow us to easily develop "themable" applications, as well as the outstanding
HTML5 support provided by JSF 2.2—specifically the ability to develop JSF views
using HTML5 markup and the ability to use arbitrary HTML5 attributes in JSF
markup by employing pass-through attributes.
Search WWH ::
Custom Search | http://what-when-how.com/Tutorial/topic-13317l/Java-EE-7-Development-with-NetBeans-8-120.html | CC-MAIN-2017-39 | refinedweb | 227 | 52.19 |
JavaScript: Developing a Custom Framework for Single-Page internactions. Actions govern external API calls. You define these entities in plain js, load up the central controller, and you app is ready to be served. Read the development journey of SPAC in my series:
This article introduces the SPAC framework. Before we dive into the design of the framework itself, we will briefly touch upon how JavaScript is loaded in your browser — this understanding is the foundation how to structure your code. Read along and get some ideas and inspirations how to make PlainJS projects more maintainable.
This article originally appeared at my blog admantium.com.
Essentials: JavaScript in your Browser
In your browser, each tab opens a new browser session. And for each session, a new thread with a JavaScript interpreter is started. This interpreter is invoked by the browser during HTML processing whenever it is instructed to execute JavaScript.
As a developer, you have different options to load JavaScript — and they all behave a bit different.
Load JavaScript file with the
<script src=""> tag
- The browser stops loading any other resource. It will execute all code in the context of the
globalobject. Variable declaration will happen in this global space.
Define inline JavaScript with ` code tag
- The browser stops loading any other resource. The code can access all variables defined in the global scope. It is not possible to either load additional modules, or to declare modules that can be imported with statements in other
<script>tags. It will execute all code in the context of the
globalobject. Variable declaration will happen in this global space.
Register inline event listener on input elements, like
<button onclick=parseData>
- The browser will define an event listener for the DOM object by the given function name. In JavaScript, function definitions in the
globalnamespace will be hoisted up, which means you can use a function name before its declaration. However, the browser also happily allows a
undefinedfunction to be used in this context - this can result in hard to figure out bugs.
Load JavaScript modules with the
<script src="" type="module"> tag
- The browser stops loading any other resource. It will execute all code in the context of the
globalobject, but allow the definition and loading of modules.
Depending which methods you use, different challenges need to be considered:
- Page load interrupt: Some methods will stop the browser from loading any additional resources before the JavaScript is parsed completely. If you load either very complex code or a lot of code, this might interrupt the page load speed
- Execution context pileup: When you constantly load new scripts from newly rendered pages, the total amount of JavaScript inside the browser thread continues to pile up and can slow down the page performance
- Namespace pollution: Inside the browser, the
globalobject will be
window. Any JavaScript that is executed can change the definition of the
windowobject. It can happen that you accidentally overwrite function definitions when scripts on different pages use the same function names, because they will be re-defined the global object.
With this knowledge, we can now design the essential requirements of our custom framework.
Architecture of the Custom Framework
The custom frameworks needs to consider the above-mentioned challenges as well as adhering to the principle separation of concerns. Its architecture is influenced by the model-view-controller pattern and uses concepts similar as in React.
In a nutshell, the requirements are:
- Use JavaScript modules to keep the namespace clear
- Separate the application into the controller, action, and pages & components
- Encapsulate HTML and JavaScript in the relevant components
- Dynamically load and execute only required JavaScript
Let’s consider the central building blocks of the framework one-by-one.
JavaScript Modules
First of all, all entities of the framework are defined as modules. Using modules enables the application to expose only required functions for each entity, which can be considered as an interface. This interface helps to standardize and to make the entities compatible with each other.
Controller
The
controller is the central entity of the framework and the only JavaScript that will be loaded to the application. It provides the complete functionality to control which pages are rendered and loads the required JavaScript. Furthermore, it is responsible to keep the applications state and to communicate with any external API. Finally, it also serves as a gateway by importing and exposing shared JavaScript functions that are exposed by other entities.
Actions
When your application needs to connect to an external API, you will be using actions. Actions are JavaScript promises that execute API interactions and deliver the results. The action caller, a component or page, then defines how to process the results, like updating the state or refreshing the HTML.
Pages and Components
Composing the presentation and UI functions is the task of
pages and
components. The controller requests to load a page by calling it with a root DOM element and passing the state. Then, the page creates its own DOM elements, attaches them to the root DOM, and also executes additional JavaScript. Afterwards, it loads all the components that are present on the page.
Components work similar to pages: They also receive a root DOM and the state. They build their own DOM and attach JavaScript to it. The difference is that they provide additional helper functions that are specific to this component, complex UI functions or even functions that operate on the state.
State
The state is the globally available and persistent data of the application. Everything from user input to application operational data is kept inside the state. Between page refresh, data is persisted inside the user’s browser storage. Logically, each active page holds the state, and passes its’ state to the components. The page can call methods in the controller to persist the state in other stores, such as databases like MongoDB.
Conclusion
The custom JavaScript framework is a generic approach to structure client-side applications that need to provide complex UI interactions. It is persistent in its abstractions and consistent in dividing the concerns of a web application. Read more about this in the next article. | https://admantium.medium.com/javascript-developing-a-custom-framework-for-single-page-apps-ca1c26982ad9 | CC-MAIN-2022-05 | refinedweb | 1,020 | 53.61 |
So I have been re-reading some of the python intro stuffs after a time away philandering with other languages. I have been thinking about the Zen of Python piece where it says 'namespaces are a honking great idea - let's do more of those'. A question -- since then do we have any sort of list of major innovations like that? Do we keep any sort of 'honking great idea list' anywhere? Or maybe folks could suggest what some of those milestones have been over time? I think a 'honking great list' of python innovations would be fun ... maybe presumptuous, but fun anyways. :P -- A musician must make music, an artist must paint, a poet must write, if he is to be ultimately at peace with himself. - Abraham Maslow | https://mail.python.org/pipermail/python-list/2013-March/643895.html | CC-MAIN-2019-30 | refinedweb | 129 | 80.72 |
In this article, I will walk through how to set up and do a python unit test with Eclipse.
Prerequisite: Pydev has been installed in Eclipse. If not, please open up the Eclipse and go to: Help -> Eclipse MarketPlace and search for ‘PyDev’ and install it as below
Now, we are ready to create a python unit test. To start with, let’s create a new PyDev Project for holding the project source and the unit tests.
1) We go to: File -> New, in the New window, choose PyDev Project as below
and give the project a name, such as TestPython
2) In the project TestPython, create a new python module on top of it in order to have all our unit tests placed inside this model.
and give the package as test and name as testCalculator
3) We now have the generated package: test and two files created. In theory, we can put as many unit tests in this package. In the testCalculator.py, put the following code there
import unittest
class TestCalc(unittest.TestCase):
def testAdd(self):
print("it is a test")
result = True
self.assertEqual(result, True, "Ohno")
Basically, in the above code, we just created a test class: TestCalc which extends the unittest.TestCase as the base class and the TestCalc class will have all the testing API available such as self.assertEqual..etc. For more information of the API of unittest, please feel free to refer:
4) The last step would be to just right click the testCalculator.py in the Package Explorer and choose Run As -> Python unit-test. If the setup is correct, we should see something like this:
And yes, congrats! you have just created the basic python unit test!
Stay tuned and more to come (such as running it in the command line instead) later! | https://wwken.wordpress.com/page/2/ | CC-MAIN-2018-09 | refinedweb | 304 | 79.5 |
Made a simple script for Autotracker. Can generate x amount of songs, specified by user. I’ll make it more fancy as needed. Enjoy.
Instructions
- Download and extract
- run frontend.py
- Success
Updated Download Links
Download link(Rar format)
Download link(Zip format)
The links are not working. Why not upload to dropbox?
The same for me. You need to be registered to acces the files.
By the way, thanks for pointing me to the original script.
Links updated, should work this time. Uploaded to Dropbox. Enjoy. @kddekadenz, No problem :P, Enjoy.
“Error (403)
It seems you don’t belong here!”
Not a public download?..
Fixed. Bit new to dropbox, sorry :P.
import os
a = 0
song = int(input(“How many songs should Autotracker generate?”))
while a < song :
a += 1
os.system('autotracker.py')
Had to make that change, it was refusing to work on windows otherwise. It would be interesting to ask the key in wich the song is to be created, currently i'm randomly shifting one octave up or one octave down (i'm just doing random.choice([MIDDLE_C, MIDDLE_C*2, MIDDCLE_C/2]), but it would be nice to have control)
cool, thanks
Come again? I wrote it on Windows XP. What OS/Python version are you running?
Python 2.7, Windows 7
Hmm. Do you not have python in your PATH variable? Maybe thats causing the issue. Either way, Ill upload the fix tomorrow, thanks for fixing it. | http://ludumdare.com/compo/2012/04/08/simple-frontendautomator-for-autotracker/ | CC-MAIN-2017-13 | refinedweb | 242 | 71.21 |
You're bound to have heard the term GraphQL. Unless you live under a rock. I doubt that though. GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data.
This tutorial will show you a step-by-step guide on how to use a GraphQL API to build Postgres metrics dashboards.
Here's what the end product will look like. You can also check out the live preview.
GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more. GraphQL makes it easier to evolve APIs over time and enables powerful developer tools.
The incredible growth in GraphQL's popularity has been noticed by the amazing Cube community. Our community longed for a GraphQL integration, and we listened.
How the GraphQL API was added to Cube
I'd like to give a huge shout-out to Luc Vauvillier who contributed initial support for the GraphQL API and laid out the principal design. Luc is the co-founder at Gadsme. Check this out to read more about how Gadsme builds awesome stuff with Cube.
We at Cube were thrilled to jump on the problem at hand and build out a GraphQL API. As of the
0.29 release of Cube, you can use the GraphQL API!
Set up Cube with GraphQL and Postgres
To configure Cube, I first needed to connect a database. I used a demo Postgres metrics dashboard.
First of all, I want to run a sample query to list orders by status grouped by day.
Let's open up the Cube GraphiQL interface and run this same query with GraphQL. You can read more about the GraphQL API in our docs.
Here's what the GraphQL query looks like for the example above.
query CubeQuery {cube(limit: 10) {orders(orderBy: { count: desc }) {countstatuscreatedAt {day}}}}
Running this query in GraphiQL will return the values just like in the playground. Now we're done with the setup, let's move on to some more complex queries.
Running Analytical GraphQL Queries
Lets add a
where clause with a time dimension to the GraphQL query.
query CubeQuery {cube(limit: 10where: { orders: { createdAt: { inDateRange: "This year" } } }) {orders(orderBy: { count: desc }) {countstatuscreatedAt {day}}}}
Running this query in GraphiQL will return this result set.
I also want to run another query with only completed order statuses.
query CubeQuery {cube(limit: 10where: {orders: {status: { equals: "completed" }createdAt: { inDateRange: "This year" }}}) {orders(orderBy: { count: desc }) {countstatuscreatedAt {day}}}}
These queries will be perfect for building my metrics dashboard. Next, we need to build an app to display the metrics.
Visualize Postgres Data with GraphQL
I'll start by building a React app and using Chart.js to display the metrics.
npx create-react-app dashboard-appcd dashboard-appyarn add @apollo/client graphql chart.js react-chartjs-2 apollo-link-httpnpm start
This will give me a blank React app. I then add the Apollo client and all the required dependencies for building the charts.
Next up, create the file
src/ApolloClient/client.js and insert the following code.pYXQiOjE2Mzk1Nzc2NTh9.ARCF3pyi9rpNAPEF2rBoP-EKjzfJQX1q3X7A3qCDoYc';const authMiddleware = new ApolloLink((operation, forward)=> {if (appJWTToken) {operation.setContext({headers: {Authorization: `${appJWTToken}`}});}return forward(operation);});export const clientWithCubeCloud = new ApolloClient({cache: new InMemoryCache(),link: authMiddleware.concat(httpLink),});
Make sure to edit your
uri and
appJWTToken to use your values from Cube Cloud. You can get those values from the Overview tab by clicking the
How to connect button.
Now you have to make the Apollo client available to the rest of your app. Update your
src/index.js to look as follows.
import React from 'react';import ReactDOM from 'react-dom';import './index.css';import App from './App';import reportWebVitals from './reportWebVitals';import { client } from './ApolloClient/client';import { ApolloProvider } from '@apollo/client';ReactDOM.render(<ApolloProvider client={client}><App /></ApolloProvider>,document.getElementById('root'));// If you want to start measuring performance in your app, pass a function// to log results (for example: reportWebVitals(console.log))// or send to an analytics endpoint. Learn more:;
Nice! Now you can move on to create the chart component, and then call the GraphQL query from the chart itself.
Create a folder called
src/Charts and create a file called
src/Charts/BarChart.js. Paste this code into it.
import React from "react";import { gql, useQuery } from '@apollo/client';import { getRandomColor, formatDate } from './Helpers'import { Bar } from 'react-chartjs-2';import { Chart as ChartJS, BarElement, Title, CategoryScale, LinearScale, Tooltip, Legend } from 'chart.js';ChartJS.register(BarElement, Title, CategoryScale, LinearScale, Tooltip, Legend);const COMPLETEDORDERS = gql`query CubeQuery {cube(limit: 10where: {orders: {status: { equals: "completed" }createdAt: { inDateRange: "This year" }}}) {orders(orderBy: { count: desc }) {countstatuscreatedAt {day}}}}`;const GenerateChart = () => {const { data, loading, error } = useQuery(COMPLETEDORDERS);if (loading) {return <div>loading</div>;}if (error) {return <div>{error}</div>;}if (!data) {return null;}const chartData = {labels: ['Daily Completed Orders in 2021'],datasets: data.cube.map(o => o.orders).map(o => {return {data: [o.count],label: formatDate(new Date(o.createdAt.day)),backgroundColor: [getRandomColor()],};})}return (<Bardata={chartData}/>);}const BarChart = () => {return (<div style={{ margin: "10px", paddingTop: "65px" }}><h2 style={{ margin: "10px", textAlign: "center" }}>Bar Chart</h2><div style={{ margin: "10px 100px", padding: "10px 100px" }}><GenerateChart /></div></div>);};export { BarChart };
You'll also need to create a file called
src/Charts/Helpers.js. Paste this code.
function getRandomColor() {function hexToRgbA(hex){let c;if(/^#([A-Fa-f0-9]{3}){1,2}$/.test(hex)){c= hex.substring(1).split('');if(c.length === 3){c= [c[0], c[0], c[1], c[1], c[2], c[2]];}c= '0x'+c.join('');return 'rgba('+[(c>>16)&255, (c>>8)&255, c&255].join(',')+',0.5)';}throw new Error('Bad Hex');}const letters = '0123456789ABCDEF';let color = '#';for (let i = 0; i < 6; i++) {color += letters[Math.floor(Math.random() * 16)];}return hexToRgbA(color);}function formatDate(date) {let d = new Date(date),month = '' + (d.getMonth() + 1),day = '' + d.getDate(),year = d.getFullYear();if (month.length < 2)month = '0' + month;if (day.length < 2)day = '0' + day;return [day, month, year].join('-');}export {getRandomColor,formatDate};
Once you're done with that, you need to create a file called
src/Charts/index.js to export the chart.
export { BarChart } from './BarChart';
Lastly, edit the
App.js and import the chart.
import './App.css';import {BarChart,} from './Charts';function App() {return (<div className="App"><BarChart /></div>);}export default App;
Go back to the browser where the React app is running. You should see this fancy chart pop up!
As of today, this means you can use Cube's GraphQL API for building any type of metrics dashboard. Alongside the REST API and SQL API, you have an entire toolset at your disposal to choose from.
But I'm not done here yet. Let's add some role-based access control, also called RBAC.
Add Role-Based Access Control with JWT Tokens
You get multi-tenancy and RBAC in Cube out of the box. Let's add a security context in the Apollo client to enable RBAC.
Just like with the REST API you can add security and RBAC with JWT tokens. First open up the Env vars tab in your Cube Cloud deployment settings.
Find the
CUBEJS_API_SECRET environment variable and copy it. This is the API secret that secures your API. Take the value and create a JWT token with jwt.io , make sure to add
{ "role": "admin" } to the payload.
You can now copy the JWT token and open up the
src/ApolloClient/client.js file. Add the token value to the
appJWTToken variable. Here's what it should look like.yb2xlIjoiYWRtaW4ifQ.FUewE3jySlmMD3DnOeDaMPBqTqirLQeuRG_--O5oPNw';const authMiddleware = new ApolloLink((operation, forward)=> {if (appJWTToken) {operation.setContext({headers: {Authorization: `${appJWTToken}`}});}return forward(operation);});export const clientWithJwt = new ApolloClient({cache: new InMemoryCache(),link: authMiddleware.concat(httpLink),});
Next, add the security context and RBAC to the
cube.js file.
// Cube.js configuration options: = {queryRewrite: (query, { securityContext }) => {if (!securityContext.role) {throw new Error('No role found in Security Context!');}if (!securityContext.role === 'admin') {throw new Error('You\'re not the Admin!');}return query;},};
Here's what it looks like in Cube Cloud.
You can add any particular logic you might want like RBAC, multi-tenancy, row-level access, and more.
Now, jump back to the dashboard-app. Restart it, and voila, you've added RBAC. The end outcome is this lovely chart showing orders per day for the whole of 2021 that you can only access with the
admin role. If you remove the
appJWTToken, your dashboard-app won't be able to fetch data anymore.
If you want to check out the source code in one place, here's the GitHub repo. And, you can also have a look at the live preview here.
Conclusion
In this tutorial, I wanted to show a step-by-step guide on how to use Cube's GraphQL API to build a metrics dashboard with data from a Postgres database.
With Cube Cloud you get a metrics layer that integrates with every major data visualization library including GraphQL compatible tools like Chart.js and react-chartjs-2. On top of all that, it also comes with multi-tenancy support out-of-the-box. Among the different multi-tenancy options, you can enable tenant-based row-level security, role-based access, using multiple database instances, multiple schemas, and more.
If you want to learn more about GraphQL with Cube check out our announcement that explains how we added the GraphQL API to Cube.
You can register for Cube Cloud right away if you want to have a look!
I'd love to hear your feedback about using GraphQL with Cube Cloud in the Cube Community Slack. Click here to join!
Until next time, stay curious, and have fun coding. Also, feel free to leave Cube a ⭐ on GitHub if you liked this article. ✌️ | https://statsbot.co/blog/graphql-postgres-metrics-dashboard-with-cube/ | CC-MAIN-2022-05 | refinedweb | 1,638 | 59.8 |
#include <rte_pci.h>
#include <rte_memory.h>
#include <rte_mempool.h>
#include <rte_ether.h>
#include <rte_kni_common.h>
Go to the source code of this file.
RTE KNI
The KNI library provides the ability to create and destroy kernel NIC interfaces that may be used by the RTE application to receive/transmit packets from/to Linux kernel net interfaces.
This library provides two APIs to burst receive packets from KNI interfaces, and burst transmit packets to KNI interfaces.
Definition in file rte_kni.h.
Initialize and preallocate KNI subsystem
This function is to be executed on the main lcore only, after EAL initialization and before any KNI interface is attempted to be allocated
Allocate KNI interface according to the port id, mbuf size, mbuf pool, configurations and callbacks for kernel requests.The KNI interface created in the kernel space is the net interface the traditional Linux application talking to.
The rte_kni_alloc shall not be called before rte_kni_init() has been called. rte_kni_alloc is thread safe.
The mempool should have capacity of more than "2 x KNI_FIFO_COUNT_MAX" elements for each KNI interface allocated.
Release KNI interface according to the context. It will also release the paired KNI interface in kernel space. All processing on the specific KNI context need to be stopped before calling this interface.
rte_kni_release is thread safe.
It is used to handle the request mbufs sent from kernel space. Then analyzes it and calls the specific actions for the specific requests. Finally constructs the response mbuf and puts it back to the resp_q.
Retrieve a burst of packets from a KNI interface. The retrieved packets are stored in rte_mbuf structures whose pointers are supplied in the array of mbufs, and the maximum number is indicated by num. It handles allocating the mbufs for KNI interface alloc queue.
Send a burst of packets to a KNI interface. The packets to be sent out are stored in rte_mbuf structures whose pointers are supplied in the array of mbufs, and the maximum number is indicated by num. It handles the freeing of the mbufs in the free queue of KNI interface.
Get the KNI context of its name.
Get the name given to a KNI device
Register KNI request handling for a specified port,and it can be called by primary process or secondary process.
Unregister KNI request handling for a specified port.
Update link carrier state for KNI port.
Update the linkup/linkdown state of a KNI interface in the kernel.
Close KNI device. | https://doc.dpdk.org/api-20.11/rte__kni_8h.html | CC-MAIN-2021-39 | refinedweb | 409 | 58.89 |
Tuples C# 7
by
In order to return multiple values from a method in C#, we can use approaches like creating out parameters, creating a custom class with the required properties, or using tuples.
However, in C# 7.0, we have the concept of tuples (quite similar to existing tuples), which can help in returning multiple values from a method.
Suppose we have a method, which will return the Department and Age of an employee, based on the EmployeeID. Using the old ways, we could have two out parameters for department and age, along with the EmployeeID, or create a custom class of an employee type and return the values or even use the existing tuple types. However, with new features, we can have a method with two return type values; i.e., int age and string department. The signature of the method would look, as shown below.
public static (int, string) GetEmployeeById(int employeeId) { int empAge = 32; string dept = "HR"; return (empAge, dept); }
After calling the method, we can access the return values, as shown below.
var employeeDetails = GetEmployeeById(21); Console.Write("Age is: " + employeeDetails.Item1+ ", Department is:" + employeeDetails.Item2);
The code does not seem to be too user friendly here, as we have to use the keywords Item1, Item2 etc. (similar to the existing tuples concept). However, we can change our method signature to include the use of names for the return values. Hence, the code will change to what is shown below.
public static (int age, string department) GetEmployeeById(int employeeId) { int empAge = 32; string dept = "HR"; return (empAge, dept); }
Now, in order to access the elements, our code will change to what is shown below.
var employeeDetails = GetEmployeeById(21); Console.Write("Age is: " + employeeDetails.age + ", Department is:" + employeeDetails.department);
this was about the concept of tuples in C# 7.0.
?Source :C#Corner,MSDN | http://a-hamoud.com/Post/PostByCategory?category=17&page=1 | CC-MAIN-2019-13 | refinedweb | 308 | 65.83 |
Tetris Is Hard To Test
New submitter JackDW..
One line? (Score:5, Funny)
Re: (Score:3, Informative)
Really, who couldn't love code like this:
Except that is not "one line". It is six lines. Any program can be a "one-liner" if there is no limit on the line length. Well, unless you're writing it in Python.
Also, as long as I am on a rant, Tetris is NOT NP-Hard, since the arrival of the blocks is probabilistic. It is only if the entire sequence of blocks is known in advance that it becomes NP-Hard. But that doesn't happen in actual play.
Re:One line? (Score:5,."
Re: (Score:2)
The line length limit is 256 bytes, of course.
The program is 430 bytes.
Re:One line? (Score:5, Informative)
In ASCII, but many BASICs will reduce keywords down to a single byte.
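Keyword tokenization is easy to sketch. Here's a hypothetical illustration (the keyword table and one-byte-per-token cost are assumptions, not any particular BASIC's actual scheme) of how collapsing each keyword to a single token byte lets a program longer than the ASCII line limit still fit when stored:

```python
# Hypothetical keyword table; real home-computer BASICs each had their own.
KEYWORDS = ["PRINT", "GOTO", "THEN", "NEXT", "FOR", "IF"]

def tokenized_length(line):
    """Stored length if each keyword collapses to a single token byte
    and every other character stays as one ASCII byte."""
    n, i = 0, 0
    while i < len(line):
        for kw in KEYWORDS:
            if line.startswith(kw, i):
                n += 1          # whole keyword -> one token byte
                i += len(kw)
                break
        else:
            n += 1              # ordinary character -> one byte as-is
            i += 1
    return n

line = 'IF X THEN PRINT "OK"'
# 20 bytes in plain ASCII, but only 12 once IF/THEN/PRINT are tokenized.
```

On this accounting, a 430-byte ASCII listing can shrink well under a 256-byte stored-line limit if enough of it is keywords.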
Re: (Score:2)
And here we get to the core of the problem. The presenter passes off information in an incorrect light, so only the audience that cares to accept it will continue accepting his later statements.
One of those statements is that code coverage tells you what to test. It doesn't. It tells you what you haven't bothered to test. What you need to test is driven by the user stories, customer requirements, and other bits of development documentation. Testers writing tests for permutations of non-important functio
perhaps. I wonder if it is NP-hard (Score:5, Interesting)
The fact that probability is involved doesn't mean there's not an optimal strategy, of course, where optimal is defined as "highest expected score" (score × probability). So figuring out an optimal strategy is a hard problem - how hard is it?
If the probability of a certain series of shapes coming next were 100%, we'd have an NP-hard problem, agreed? Does another probability make it easier or harder? Harder, if anything. That's provable because the probability version can be solved by solving each of the potential series as if each were known. What's harder than NP-hard? It may well still be NP-hard. It can't be in any easier complexity class.
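That reduction - solve every possible series as if it were known, then weight by probability - can be sketched in a toy model. The piece names and per-piece scores below are made up; in real Tetris the placements interact, and that per-sequence solve is itself the hard part:

```python
import itertools

# Toy stand-in for "solve a known sequence": each piece has a few possible
# placement scores. Real Tetris placements interact, which is where the
# NP-hardness of the known-sequence problem comes from.
PIECES = {"I": [4, 2], "O": [3, 1], "S": [1, 0]}

def best_score_for_sequence(seq):
    """Optimal score when the whole sequence is known in advance."""
    return sum(max(PIECES[p]) for p in seq)

def expected_optimal_score(n):
    """Expected score against a uniformly random sequence of length n,
    found by solving every one of the 3**n possible sequences -- which is
    why knowing only the distribution doesn't make the problem easier."""
    seqs = list(itertools.product(PIECES, repeat=n))
    return sum(best_score_for_sequence(s) for s in seqs) / len(seqs)
```

The 3**n enumeration also shows where the cost goes: the stochastic version inherits every deterministic instance as a branch.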
Re:perhaps. I wonder if it is NP-hard (Score:5, Funny)
What's harder than NP-hard?
Intractable.
also, Smiling Bob (Score:2)
What's harder than NP-hard?
Intractable.
True. Smiling Bob is also harder.
Re:perhaps. I wonder if it is NP-hard (Score:4, Informative)
I disagree.
For a stochastic process your greedy "take the option with the best chance" algorithm may work, or it may fail completely, depending purely on the random numbers. If you have a stochastic polynomial algorithm, you have a chance to get the same or better expectation value than your "optimize globally, then choose greedily" algorithm. Both approaches may win or fail, but in the deterministic game the NP-complete version always wins, while the "shortcut" version cannot compete. In the stochastic version, the shortcut may be as good as the optimal solution, because you cannot get the global optimum anyway, so choosing a local one may be a good choice.
defined as expected value (Score:2)
As I mentioned, the test of fitness is expected value - the average score of 100,000,000,000 games played with that strategy, not the result from one specific, randomly chosen game.
In Hold'em, folding preflop with pocket aces may turn out for the best occasionally, but it's still a bad policy because it will lose more than it will win. The best strategy is the one that does well long term.
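As a toy illustration of judging a policy by its long-run average (the numbers below are made up for illustration, not real poker odds):

```python
# Folding always yields 0; playing wins `win` with probability p,
# otherwise loses `loss`. The policy with the higher expected value
# is the better one long term, whatever a single hand happens to do.
def ev_play(p, win, loss):
    return p * win - (1 - p) * loss

ev_fold = 0.0
ev_strong_hand = ev_play(0.85, 10.0, 10.0)   # clearly above folding's 0
```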
Re: (Score:3)
Yep, and your strategy gets worse, while another strategy may stay average.
Assume you flip a coin: heads or tails.
Your NP-complete algorithm knows the sequence.
The probabilistic one just guesses "heads".
Now the expectation values of both are zero in the stochastic case. In the deterministic one it's an infinite win for the NP-complete one and zero for the probabilistic one.
So this is an example where an algorithm that is perfect on deterministic data is just as good as any other on stochastic data.
This does not mean, you can
if I understand your point (Score:3)
Let me see if I correctly understand your point. Are you saying:
The best algorithm for a deterministic sequence may be / is NP-hard.
Best best algorithm for the stochastic sequence may be different.
Therefore, the best algorithm for the stochastic sequence may be easier than NP-hard.
That seems to make sense. Until you realize the deterministic sequence IS one case of the stochastic - where the probabilityof a certain sequence happens to be 1.00. If you had a polynomial algorithm for probability X, you could
Re: (Score:3)
no, you got it only the half way.
deterministic:
best = np-hard, perfekt
other: polynomial, good average
stochastic:
best: np-hard, not perfect, quality unknown
other: polynomial, good average
the point is not the runtime complexity, but the result. while the best algorithm cannot be beaten on the det. sequence, it may fail completely (in terms of quality) on a sequence without full information. If you got a good polynomial one with an average result, it may be better for many sequences.
one example may be an perfe
so just don't solve it (Score:2)
So in other words, you're pointing out that you could just not solve it, not come up with the optimum move each time based on expected value. Instead, you could settle for a "good enough" move and sometimes you'd get lucky. This is true.
you stated:
stochastic:
best: np-hard, not perfect, quality unknown
other: polynomial, good average
You called the first algorithm "best", acknowledging that the best (best long- term average) is NP-hard. The other can't be better than the best (by definition) , so the probl
Re: (Score:2)
Best is only with respect to the deterministic case.
disregard runtime complexity.
deterministic:
there is a best strategy (A).
all other strategies (B) are worse or equal.
A stochastic strategy (C) may have a good average quality.
stochastic:
the deterministic best strategy (A) cannot be perfect anymore, just as no other strategy can.
(A) has now unknown quality for a random sequence.
(C) still has the same average quality.
So there is now a possibility, that (C) may beat (A) in an average over a lot of games. (as a
Re: (Score:2)
Note that this limit is on the tokenised form stored in memory, not the ASCII representation. This is why the code e.g. uses "GOSUB FALSE" rather than "GOSUB 0": the FALSE token is shorter than the encoding of a line number.
Programs for the unexpanded (1K) ZX81 frequently used that type of memory-saving. All numbers were stored as floating point and took up 5 bytes of memory, and saying (e.g.)
LET A = CODE("$")
(where CODE is the equivalent of the ASC function for the ZX81's non-ASCII character set and $ is character 13) instead of
LET A = 13
actually saved you memory.
Re: (Score:2, Flamebait)
You're no fun. If I worked for you, I'd quit as soon as possible.
Anyone who had to read it, update it, or debug it?
Anyone who had to play the fucking game (it's full of game-breaking bugs -... [survex.com] )?
Re: (Score:2)
DMCA incoming (Score:3)
Re: (Score:3)
Re: (Score:2)
If anybody wrote code like that for me, they'd be made to sit on the naughty step and think very, very hard about what they'd done.
Unless, of course, you were developing for embedded hardware, where you are trying to do way too many things with way too few resources***. Then you'd give that programmer a promotion.
***Although those days are gradually coming to an end, as even the tiniest systems are getting more and more resources, and eventually they'll all join the rest of us, where readability, verifiability, and maintainability take top priority. But for now, they're not all quite there yet.
Re: (Score:3)
Actually, it is written for resource efficiency...specifically program size, which uses memory. The goal was to write a 1 line program, and in BBC Basic, that meant they were limited to 256 characters. Yes, maybe they could have wrote things with more verbose naming and had it compile to the same size, but the particular goal there was to write something big with little code. I think they accomplished it fairly well, and probably 95% (at least) of programmers would be hard pressed to replicate their results
Re: (Score:2)
BBC basic was interpreted, not compiled (though there may have been compilers written for it since).
Re: (Score:2)
BBC basic was interpreted, not compiled (though there may have been compilers written for it since).
It was my original instinct to say the same (since nearly all basic languages are), but I looked it up on wikipedia before posting and found that there was indeed a compiler for it:... [wikipedia.org]
A Compiler for BBC BASIC V was produced by Paul Fellows, team leader of the Arthur OS development, and published initially by DABS Press.[citation needed] This was able to implement almost all of the language, with the obvious exception of the EVAL function – which inevitably required run-time programmatic interpretation. As evidence of its completeness, it was able to support in-line assembler syntax. The compiler itself was written in BBC BASIC. The compiler (running under the interpreter in the early development stages) was able to compile itself, and versions that were distributed were self-compiled object code.[original research?] Many applications initially written to run under the interpreter benefitted from the performance boost that this gave, putting BBC BASIC on a par with other languages for serious application development.
There's not a whole lot of info about it on wikipedia, and it doesn't even say when it was written (and there are no citations), so I have no idea if it was something recent or very old.
Re: (Score:2)
You've never written code in perl, have you?
Re: (Score:2)
I'd hire the person in the blink of an eye. That kind of discipline is sorely missing among younger programmers these days.
Blockheads (Score:1)
I'm sure there is a joke in there somewhere.
Re: (Score:2)
9-INKEY6MOD3:FORr=TRUETO1 --- lol
Re: (Score:1)
Re: Blockheads (Score:2)
RTFM.
in Soviet Russia (Score:5, Funny).
Tetris doesn't need coverage tool to test you. Everything about you.
Code-coverage tool is crutch for weak capitalist engineer. Tetris is Soviet technology, forged by people's will.
Re: (Score:3)
Tetris doesn't need coverage tool to test you. Everything about you.
So what you're saying is...
In Soviet Russia, Tetris game tests you!
Nice advertisement (Score:5, Insightful)
From a company promoting automated WCET analysis. Hah!
Re:Nice advertisement (Score:5, Insightful)
Normal users don't test all cases of a game.
Maybe not, but as soon as you tell yourself, "I don't need to test this code, a normal user will never get to it;" you can be certain that after saying that, a user will find a way to break it. The Gods of Eternity will laugh at you.
Re: (Score:2)
This is just more marketing spam that's found its way onto Slashdot.
Re: (Score:3).
Perl-standard line length (Score:2)
Any language that doesn't require carriage return + linefeed can do anything in one line.
And Basic comes with a ton of library fuctions that makes things easier to do. No need to initialize memory, dispaly, setup graphic or keyboard interrupts, etc.
Re: (Score:2)
And let's not even get started talking about line numbers.
Re: (Score:2)
I've seen C64 basic. One line of code can be two lines on the screen. Maybe more than two lines when you realize you can compress names like POKE into the two-character acronym (second being shifted) and using it in list would happily decompress to something that can't be typed within the 2 screen-line limit.
BASIC on the Atari 800 and its descendants exhibited the same behaviour with respect to abbreviations and its three-screen-line limit on a single BASIC line.
Atari User magazine had a feature called "five liners" for very short programs. Many of the more elaborate ones pushed this as far as it would go by *requiring* them to be entered using abbreviations in order to fit this three-screen-line limit. IIRC most of these would be expanded upon processing, often taking them over the limit.
Re:Perl-standard line length (Score:4, Informative)
Re: (Score:2)
Well - it does use quite a nifty trick to implement a subroutine, given that you can only GOSUB a line number, and there's only one line number.
Re: (Score:2)
Any language that doesn't require carriage return + linefeed can do anything in one line.
Exactly... In fact there is a lot of very complicated one-line javascript libraries just download one of those
.min.js files :)
br Seriously, a readable 30 line implementation would have been more impressive...
Re: (Score:2)
Maybe you'd prefer a binary version at 256 bytes?... [untergrund.net]
Re: (Score:2)
This is no time to read or to drink, sir.
replacing line feeds with terminators is not a 1-l (Score:5, Informative).
Re: (Score:3).
If it's a one-line program, why is it more than one line?.
from:... [survex.com]
Re: (Score:3)
cats are mammals, not all mammals are cats (Score:3)
A line can be no more than 256 characters. That doesn't mean that the following is one line:
foreach mammal in pets
print mammal ' "is a mammal"
if (is_cat(mammal) {
print " and also a cat"
}
}
Just because all cats are mammals doesn't mean that all mammals are cats.
Just because all one-liners are less than 257 characters doesn't mean that all programs less than 257 chara
Re: (Score:2)
Re: (Score:2)
> Yes, because line breaks in BASIC are significant. That means, of course, that they actually *do* something;
Specifically, they *do* approximately the same thing as colons, they are generally synonymous with colons, of which this program has plenty.
Re: cats are mammals, not all mammals are cats (Score:2)
Yes, I see how the two are similare enough so as to be interchangeable. Maybe, if you don't want your program to run.
Re: (Score:2)
Except they don't.
Lines are an important concept of the language - they are referenced by gotos and gosubs. That program is one line. It is more than one statement, but the claim wasn't a "one statement program".
Re: (Score:2).
Re: (Score:2)
So you run cc -E first then
Re: (Score:3).
#include "kernel.c"
That's a one liner!
Re: (Score:2)
what do you expect from a oneliner? Tetris()? A Perl Oneliner does have semicolons as well.
Re: (Score:2)
Ignoratio elenchi
Re: (Score:3)
Re: (Score:3)
46 lines (statements), actually
No, statements are not the same as lines. Lines have real semantic significance in BBC Basic, in a few different ways: for one, GOSUB-type subroutines can only start at the start of a line (because that's where the line number is), and you also can't terminate an "if" without starting a new line. That (plus the 256-byte limit) makes writing one-liners in the language more of a challenge than in other languages where line breaks genuinely aren't significant.
One Line (Score:1)
Perlers are so jealous right now; they need 2 lines.
Re:One Line (Score:4, Informative)
It's simple enough to implement in a shell script. At least three or four of us have done it over the years.
Re: (Score:1)
I guess this aint the kind of joke that works on Slashdot
Re: (Score:2)
It's simple enough to implement in a shell script. At least three or four of us have done it over the years.
True. Here is one example:... [homelinuxserver.org]
Infomercial for a code coverage tool? (Score:5, Interesting)
So at some point you reach a point of diminishing returns. It might not be worth making sure every line got tested when there are procedures that have a bug that happens in one in a billion calls. My philosophy is, "Perfection is the goal. Doing better than the last release is the shipping criterion".
Re: (Score:2)
While everything you just said makes sense, nothing beats good testing, and like any tool, this is another one. All that code coverage does is let you focus on what has not been touched, then you'll be able to test it somehow. Also, I could create a similar problem, just like the one you wrote about above and would happen more often. I'm thinking of traffic management in the air. Or maybe even traffic management on land.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
All that code coverage does is let you focus on what has not been touched, then you'll be able to test it somehow.
The trouble is that what you really need to test isn't how much coverage of the code you've got, but how much coverage of the possible input space. More specifically, you ideally want to know that each distinct combination of inputs that will cause a different type of behaviour in the code has been considered.
Of course, this is typically an implausibly difficult problem to solve in real world projects. To see why, consider that this article proudly claimed that finding the special case of clearing 4 lines t
Re: (Score:2)
Re: (Score:2)" o
Re: (Score:2)
FWIW, I agree with almost everything you wrote. I have nothing against coverage tools, and I use them occasionally myself. I just think it's important to have a realistic view of the benefits you do and don't receive.
The only thing I disagree with is your final paragraph, where you talk about safety-critical code. If you really were working on systems where a failure would have catastrophic consequences, I would hope you had a QA process a lot more sophisticated than running a test suite and this kind of co
Re: (Score:2) d
Re: (Score:2)
You know, I like what you wrote since it brought up a safety issue once I read about. It was about a plane making a crash landing, and the pilots heard a "GONG" sound, they were never trained for that sound, but they were able to find it in the manual. It seemed that that gong sound was the sound of everything is failing including redundancy. Now that gong sound is in all the simulations.
So I look at it as a tool, a tool to test all the code and see if it works in general for most situations, then test agai
Re: Infomercial for a code coverage tool? (Score:2)
Find better programmers, pay them better, manage the project better to allow time to fix bugs after they've run through QA.
Nonsense -- make your own test suite (Score:5, Insightful)
Has slashdot really become a means for tech companies to inject free advertisement by a simple blog post made to look like real journalism?
Re: (Score:3)
Defining all of the possible scenarios is often a lot harder than it looks. There aren't too many UI coders out there that haven't said "yeah, we need to fix it, but what made the user decide to do that?" at one time or another.
Speaking of UI's (Score:2)
Defining all of the possible scenarios is often a lot harder than it looks. There aren't too many UI coders out there that haven't said "yeah, we need to fix it, but what made the user decide to do that?" at one time or another.
This reminded me of a UI bug I discovered in Steam - if you have 2 monitors, one rotated 90 degrees, but not the primary, and try to maximize steam on that window, bad things happen.
Seeing as how I've seen only like 3 people doing that, most not in a home setting, I don't think it comes up much.
Re: (Score:2)
Add me to the list of those using this configuration in a home setting, unless you need to exclude me for actually having three monitors -- one is a TV that is usually off or being used for other purposes. (Having both AGP and integrated graphics active at the same time is... interesting. I get lots of odd behavior out of it.)
Re: (Score:2)
Why, did you not get enough
:CueCats [wikipedia.org] and i-Openers [wikipedia.org]? This is hardly the first Slashvertisement, and it's the only one from this company that I've seen.
Re: (Score:2) m
Re: (Score:2)
Beware coverage tools (Score:3, Insightful)
The article makes it sound like coverage tools help! If you're not familiar with them, they tell you which bits of code have been run, not how many of the N cases of that code have been executed.
So the code might fail with a particular combination of inputs, but the coverage tool is more interested in which bits of the code have been execute.
It's one of these tools and metrics that non-technical managers use to substitute for an ability to read code.
Re: (Score:2)
This is quite true, but at least it's something that can help. Programmers already make enough mistakes, so any help is welcome. Whether that help is worth the price tag in dollars and time has to be determined on an individual case by case basis.
Re: (Score:1)
All of my best testers have been people who use the product to do the job it was intended for, and that means they're testing the same common pieces of code through every use case. The coverage tool is simply the wrong metric, it assumes one use-case = one piece of code, and treats code as 'covered' if its been run because it doesn't know about the use cases.
Worse, the testers end up trying to run obscure code simply to get the right test metric. So all the belt-and-braces checks I put in to prevent future
Re: (Score:2)
Validation is way more important than writing code. Coding is grunt work that literally anyone can do. There is a huge demand for programmers, and very few are "good" programmers, 90% are just grunts who will never get any better, and that's life due to demand. So you need validation. I wrote and managed RTL development for 15 years at Intel and code coverage is simply mission critical. No other way around it.
If you think being able to "read code" is enough to see all the corner cases, you're either very yo
piece of cake (Score:2)
How I know it's an ad (Score:2)
As I explain in this article, the game is filled with special cases that rarely occur in normal play, and these can only be easily found with the help of a coverage tool.
This doesn't seem like news to me! I'm shocked and appalled!
C64 BASIC version within a screen of code (Score:4, Interesting)
It's a 15 liner.
Note that the {CBM-x} represents the graphic on that particular key (press the c= key and the letter to produce it)
1 a$="efijefijefijefijbfjnhijkbfjnhijkijfgaefjijfgaefjefjkiefbefjkiefbbfjidefj"
2 a$=a$+"abeieijkaeijijkgabfjiefgehijebfj@abe@dhe":o=207:dime(o):forx=0to111
3 print," {CBM-M}"," {CBM-G}":p=asc(mid$(a$,x+1)):e(x)=(pand3)+(pand12)*10:next:m=2024
4 print," {CBM-T}{CBM-T}{CBM-T}{CBM-T}{CBM-T}{CBM-T}{CBM-T}{CBM-T}{CBM-T}{CBM-T}":gosub6:goto7
5 pokei+e(r),c:pokei+e(r+1),c:pokei+e(r+2),c:pokei+e(r+3),c:c=160:r$="d":return
6 i=1152:r=h:c=32:gosub5:j=int(rnd(0)*7)*16:r=j:gosub5:r=h:h=j:i=i+9:return
7 gosub6:w=i:t=i:g=r:k=240:l=1278
8 gosub15:c=32:gosub5:r=-r*b-g*notb:g=r
9 i=w:w=i+40:gosub5:gosub15:ifbthen12
10 getk$:g=randkor((r-4*(k$="s"))and15)
11 t=w:w=w+(k$=l$)-(k$=r$):l=l+40:goto8
12 c=o:gosub5:m=m-(l0:w=-t*b-w*notb:l$="a":return
erratum: missing lines (Score:3)
13 fors=ltol+(m-160-l)*qstep-1:pokes+11,peek(s-29):next:y=y-q:m=m+q*40:z=z+q
14 l=l+40*not-q:next:print"{home}";z:goto7
15 b=0:forx=0to3:b=bor1andpeek(w+e(g+x)):next:b=b>0:w=-t*b-w*notb:l$="a":return
{home} is the home character blob (appears as reverse S character) within quotes
Use A,S,D keys to rotate and drop the blocks
Have fun!
Sort of spammy, also not convincing (Score:2)
Re:Sort of spammy, also not convincing (Score:4, Insightful)
Code coverage tools will not tell you if your tests are sufficient. They simply tell you what lines of code were hit. They don't tell you whether or not the line of code was hit while doing a meaningful test. In fact, it is trivially easy to write "tests" that exercise 100% of the code but have no expectations at all.
What code coverage tools tell you is what code you definitely haven't tested. If you haven't run that line of code in your tests, you definitely haven't tested it. This is useful information, but not essential if you have a good team. My current team is quite comfortable writing tests. We do most things TDD and without trying hard our average code coverage is 96%. I occasionally wander through the other 4% to see if it is worth testing and most of the time it isn't. Occasionally I will find the odd piece of logic that was jammed in hurriedly without tests, but on our team it is quite rare. On the other hand, I have worked on teams that were not comfortable writing tests and mostly wrote them after writing production code. On those teams we would get about 75% test coverage with holes you could drive a bus through. A code coverage tool was very useful for educating people on the need to improve the way they wrote tests.
I feel very confident I could TDD my way through a tetris implementation and get 100% code coverage without undue effort. I don't think I would find all of the corner cases without help, though. A code coverage tool wouldn't help me in that instance.
Don't see much BBC BASIC these days! (Score:2)
My dad and I wrote a BBC BASIC interpreter for PC-DOS. I'll have to dig it out and see if I can get this working in it.
If it's hard to test (Score:2)
Re: (Score:2)
External interactions (DBs, UI, etc) and highly performant code (embedded systems, kernels, etc) are where I wouldn't instinctively look to use TDD.
The benefits around testing are massive in themselves - you can set up automated unit tests that assure you not just the code coverage, but also the broad range of inputs that might cause different behaviours within that code.
The design benefits however are significant too, and worthwhile in their own right. I find that TDD leads to code that's easier to read, u | https://games.slashdot.org/story/14/10/25/2219221/tetris-is-hard-to-test | CC-MAIN-2016-30 | refinedweb | 4,695 | 70.84 |
pari/gp | mathics | excel | r | hp-11c
bc | shell | browser | emacs | postgresql | glpsol
What are the best calculators?
pari/gp
Install and run gp:
$ brew install pari $ gp ? (2/7)^20 %1 = 1048576/79792266297612001 ? %1 * 1. %2 = 1.3141323697825355135314451523953967139 E-11
The value for each expression evaluated is stored in a variable with a leading percent sign. Integers and rational numbers are arbitrary length. Multiply by 1. or add 0. to get a decimal approximation.
The number of digits in a decimal approximation can be set to an arbitrarily large value. To see the first 100,000 digits of π:
? \p 100000 ? Pi
Use ? to get online help, and ?\ to get a description of the "keyboard shortcuts", which are commands which start with a backslash.
gp has complex numbers. The real and imaginary parts can be rationals or decimal approximations:
? real(1 + 3 * I) %3 = 1 ? imag(1 + 3 * I) %4 = 3 ? abs(1 + 3 * I) %5 = 3.1622776601683793319988935444327185337 ? arg(1 + 3 * I) %6 = 1.2490457723982544258299170772810901231
There are matrices and vectors:
? [1,2;3,4] * [4,3;2,1] %1 = [ 8 5] [20 13] ? [1,2;3,4]~ %2 = [1 3] [2 4] ? [1,2,3] * [3,2,1]~ %3 = 10
The same operator * is used for multiplying two scalars, multiplying a scalar by a matrix, and multiplying two matrices.
Vectors are 1×n matrices. * can be used for the dot product, but the transpose postfix operator ~ must be used on the 2nd matrix.
base conversion
number theory
combinatorics
ascii plots
mathics
Mathics is an implementation of the Mathematica language. The underlying engine is SymPy.
Use pip to install it. Run the command line version with mathics:
$ pip install mathics $ mathics
Or use mathicsserver to run a webserver which supports a notebook interface.
Solving equations:
In[1]:= Solve[x^2 + 2x + 1 == 0, x] Out[1]= {{x -> -1}} In[2]:= Solve[{2x+3y==7, 4x-y==7}, {x, y}] Out[2]= {{x -> 2, y -> 1}}
Calculus:
In[1]:= D[x^3 + x + 3, x] Out[1]= 1 + 3 x ^ 2 In[2]:= D[x^3 + x + 3, x] /. x -> 2 Out[2]= 13 In[3]:= D[Log[x], {x, 3}] Out[3]= 2 / x ^ 3 In[4]:= Integrate[x^3 + x + 3, x] Out[4]= 3 x + x ^ 2 / 2 + x ^ 4 / 4 In[5]:= Integrate[x^3 + x + 3, {x, 0, 1}] Out[5]= 15 / 4
- DSolve, RSolve
- Simplify
- Factor, Expand
- Together, Apart
excel
- sort
- filter
- group
- graph
r
Best for statistics and graphing.
R installs itself on the command line on Mac OS X at r and on Ubuntu at R. Note that zsh has a built-in called r for repeating the last command. Use command r to by-pass the built-in.
hp-11c
A description of the HP-11C.
Dedicated calculator hardware is obsolete. For that matter calculators which display one number at a time are obsolete.
That said, I like having an HP-11C style calculator app on my iPhone. Why waste cerebral cortex learning something else. I chose the most skeuomorphic HP-11C app that I could find. The best HP-11C apps run the original ROM on a hardware emulator.
I like reverse Polish notation calculators because there is no need for paren keys. One can do more calculations without storing values in registers.
One can install a more capable HP-15C app instead. However, the HP-11C was what I owned back in the day. The HP-15C can perform complex and matrix arithmetic, but since the screen cannot display an entire matrix or even an entire complex number, it is an exercise in frustration.
bc
An arbitrary precision algebraic notation Unix command line calculator.
shell
Shells can perform arithmetic. The syntax is cumbersome, but it can save a context switch:
$ echo $(( 347 * 232 )) 80504
The zsh shell can perform floating point arithmetic but bash cannot and must resort to forking bc:
$ echo $(echo '3.14 * 5' | bc)
The fish shell has a built-in command math which takes a string as an argument:
$ math '1.1 * 2.3'
It is a wrapper to bc, so one may need to use scale to get decimal digits:
$ math '13 / 5' 2 $ math 'scale=3; 13 / 5' 2.600
browser
One can perform arithmetic by searching on the arithmetic expression. The search engine will probably infer that you are trying to do a calculation and show the result. Google puts the result in the display of a working calculator implemented in JavaScript. implements a portion of the Mathematica language. Try searching for
Solve[3x + 2 == 9x - 17, {x}]
Both of these techniques require opening a web page in a new tab. One can calculate without leaving the current page by dropping into the debugger:
Try this:
> Math.log(10)
In Safari, one must first enable the Develop menu in Preferences…
emacs
Using Emacs to perform a calculation is another way to avoid a context switch.
One can use M-: to enter and evaluate an Lisp S-expression in the minibuffer.
One can put the current buffer into M-x lisp-interaction-mode. The *scratch* buffer starts out in Lisp interaction mode. In Lisp interaction mode one can type Lisp S-expressions in the buffer and evaluate them. C-j writes the result in the buffer and C-x C-e echos the result in the minibuffer.
Another way to invoke Lisp from Emacs is M-x ielm. This creates a buffer with a Lisp REPL. Lisp expressions are evaluated as they are entered. It has command line history, but it is implemented in the slightly disconcerting (to me) Emacs way.
Emacs Common Lisp Emulation.
Emacs has a calc mode which sounds impressive on paper. It can perform matrix calculations, symbolic algebra manipulations, and graphing. I don't use it because it has a bit of a learning curve.
postgresql
Using the database can be regarded as a context switch saver:
> select 3.14 * 5;
Databases are good at statistics. count, sum, min, max, and avg are standard. PostgreSQL also has:
stddev_samp stddev_pop var_samp var_pop cor(X, Y) cov_samp(X, Y) cor_pop(X, Y) regr_intercept(X, Y) regr_slope(X, Y)
window functions
import and export
example of importing data and doing linear regression, scatterplot
Finding out what functions are available.
> select count(*) from pg_proc > select count(*) from INFORMATION_SCHEMA.routines | http://clarkgrubb.com/calculators | CC-MAIN-2019-18 | refinedweb | 1,055 | 66.94 |
After you’ve installed UI for ASP.NET MVC and you’ve had a chance to walk through and play with the demos,
you’ll want to create your own project and get started writing
your own code. So you go to the File menu and open the New Project
dialog:
But what do you do with all these options? Do you just accept the
defaults? Do you need to change anything, to support different
project needs or environments? The list of checkboxes and drop-down
lists can be a little overwhelming at first, and it may not
seem like they matter, since your project will work if you
just accept the defaults. But each of these
options does matter, depending on the type of project you’re
building and how you want the project set up.
Before I get to the options for setting up your new project,
you should know how to get there. There are actually 2 different
ways to start, but they both end up at the same place.
You can use the “File” / “New” menu to open the New Project
dialog.
Once in here, you can drill down to the Telerik menu under
the project templates. This gives you the choice of a C# or
VB project.
The Telerik Menu
The other option is to use the “Telerik” menu that the
ASP.NET MVC helpers installed into Visual Studio.
This menu option is really just a shortcut that takes you into
the new project dialog that Visual Studio provides, with
the Telerik templates options already open.
After you’ve selected your language for the new project, you’ll
be taken to the Project Configuration Wizard screen.
If this is your first project and you are just experimenting,
you may want to just accept the defaults by clicking the “Next”
button.
The second part of the wizard gives you the option to include
the Telerik DataAccess ORM
which I won’t be covering here.
You can safely ignore this if you don’t have DataAccess
installed, or don’t want to use it. Click the “Finish” button and your project will be generated
from the template.
After generating the project, run it and you will see the standard
ASP.NET MVC project template with one small change: the addition
of an Kendo UI PanelBar
on the homepage, telling you a bit about ASP.NET MVC in the
center of the page. Telerik UI for ASP.NET MVC is powered by Kendo UI, so you have access via Razor to any Kendo UI component. Additionally the script and css references that it uses reflect it's use of Kendo UI.
You can view the project source and see the panel bar configuration
in the “Views” / “Home” folder, and the index.cshtml file.
There’s a lot to look at with just the default project
template applied. But before you get to the project files and folders, you need to
understand each of the options in the configuration wizard and how they affect the
project.
Chances are you won’t have to change this option since it
selects the currently installed version by default.
The next option allows you to copy all of the referenced
assemblies for the extensions into your solution.
With this option checked, your solution will contain a copy
of all the .dll files for the helpers and the Visual Studio
project files will reference those copies. When this option is
not checked, the references will work from the Global Assembly
Cache.
If you are deploying to a server that does not have the helpers installed on it directly, you will want to check this box.
This will let you deploy to any server that can run your app,
and have all the files needed right there in your solution.
This option is fairly self-explanatory.
Pick the version of
ASP.NET MVC that you wish to target. The project wizard will
reference that version of ASP.NET MVC, creating the appropriate
files and default website layout and content.
The View Engine option allows you to specify whether you want
to use the older WebForms views or the newer Razor views with
your project.
Your choice here will change the web.config settings to match
the view engine selected, and also change the file and folder
layout: with WebForms you will get .aspx views (which support
code-behind files) instead of Razor .cshtml views.
The theme option gives you a drop-down list with all of the available
Telerik ASP.NET MVC themes; it allows you to specify which one should
be used by default.
Note that this does not change the theme of the ASP.NET
MVC project. It only changes the Kendo UI theme. For example,
selecting “default” vs “bootstrap” produces these
differences in the panel bar of the homepage:
If you want to change the ASP.NET MVC theme, you will have to
modify the HTML and CSS of the relevant files for that.
This option will copy predefined editor templates to
your project in the ~/Views/Shared/EditorTemplates folder.
The editor templates are used with the @Html.EditorFor calls
in your Razor files. Each of the editor templates is matched
up to a specific type of input, which can either be inferred
from the data type or specified as an option.
<!-- infer the editor type from the data type -->
@Html.EditorFor(model => model.MyValue)
<!-- explicitly set the editor type -->
@Html.EditorFor(model => model.AnotherValue, "DateTime")
When you include the editor templates (which I highly recommend
doing), you will get easy and consistent Kendo UI controls
in your pages by using the EditorFor method. You can customize
the editor templates to your needs of course, but the basic
templates provide the Kendo UI controls that you will want on
your pages.
See the editor templates documentation for more information on how to use them, and how to customize them.
The Kendo UI CDN
- Content Delivery Network - makes it possible
to distribute the Kendo UI JavaScript and CSS files to your
users, from a server that is geographically near them. This
can result in faster delivery of the files, making the page
appear to load faster.
When you enable this, the <script> and <link> tags for
Kendo UI’s JavaScript and CSS will no longer point to your
project. Instead, they will point to the Kendo UI CDN.
Without the CDN:
With the CDN:
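The screenshots showing the generated references are not reproduced here, but the difference amounts to roughly the following (the file paths, version numbers, and CDN host shown are illustrative, not exact):

```html
<!-- Without the CDN: scripts and styles are served from your project -->
<link href="~/Content/kendo/kendo.common.min.css" rel="stylesheet" />
<script src="~/Scripts/kendo/kendo.all.min.js"></script>

<!-- With the CDN: scripts and styles are served from the Kendo UI CDN -->
<link href="http://cdn.kendostatic.com/kendo.common.min.css" rel="stylesheet" />
<script src="http://cdn.kendostatic.com/kendo.all.min.js"></script>
```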
If you’re developing a website that is publicly available,
and have a user base around the world, then the CDN option
is a good choice for delivering the Kendo UI scripts and CSS
files. If you’re building an internal app, though, you may
get better performance without the use of the CDN. Delivering
content from a local network also allows more control
of security and other aspects of the application as well.
Also notice the next option, “Copy Global Resources”, is
disabled when you have the “Use CDN Support” checked. This is
because the CDN versions of the scripts include the globalization
resources.
This option copies the globalization / localization resources
to your project, allowing multiple languages to be used.
When you select this option, files will be copied to the
~/Scripts/kendo/{version}/cultures folder in your project.
For more information on how to use the globalization resources,
see the globalization help topic
in the Kendo UI documentation.
Rendering Right-To-Left with Kendo UI
is an option for RTL language support.
Select this option if you are developing a website that will
show an RTL language. You may want to include the globalization
resources (see the previous option), to get the language specific
labels and other settings for RTL languages.
The final option in the wizard will add an MSTest project
to the solution.
The test project will have an empty test suite added to it,
with a generic “UnitTest1” class and “TestMethod1” sitting
in the suite.
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
namespace KendoUIMvcApplication9Tests
{
[TestClass]
public class UnitTest1
{
[TestMethod]
public void TestMethod1()
{
}
}
}
If you’re not going to write any unit tests for your project,
you won’t need this option selected. But why wouldn’t you
write unit tests for your project? :)
That certainly is a lot of options… but that’s one of the
things that makes the MVC helpers so valuable! We’ve listened
to you, dear developer, and we’ve added the
core set of options that you need to support the varying and
complex project types in your work place.
With this information in hand you should
be able to navigate wizard’s choices and create a near-perfect
solution in a matter of minutes instead of spending hours
manually configuring everything.
If you need more information about Telerik ASP.NET MVC extensions, | https://www.telerik.com/blogs/understanding-the-options-for-a-new-telerik-asp.net-mvc-project | CC-MAIN-2020-40 | refinedweb | 1,470 | 71.75 |
What is a Focus Document?
The Factory 2.0 team produces a confusing number of documents. The first round was about the Problem Statements we were trying to solve. Let’s retroactively call them Problem Documents. The Focus Documents (like this one) focus on some system or some aspect of our solutions that cut across different problems. The content here doesn’t fit cleanly in one problem statement document, which is why we broke it out.
This document provides more detail to the Changes/ArbitraryBranching proposal.
Background on PkgDB
PkgDB is the central repository for information on all packages in Fedora. Its primary job is to manage the ACLs of a package’s dist-git branches, but it’s also an interface to request new packages in Fedora and view metadata about packages such as a summary of its purpose and external links to a package’s builds, updates, and source. It is a central part of Fedora’s workflow today.
Today, a package’s branches in PkgDB correspond to Fedora releases, and each branch implicitly shares the service level (SL) and end of life (EOL) of its Fedora release. There are also additional branches for Extra Packages For Enterprise Linux (EPEL), which implicitly share the EOL of its RHEL/CentOS counterpart. The SLs on EPEL branches are strict as they cannot contain any breaking changes; therefore, a package’s EPEL branches are only updated with bug fixes, security fixes, and features which do not alter the current behavior.
Although the implied SLs and EOLs in these branches have worked well for Fedora in the past, it’s becoming increasingly difficult to juggle these different application lifecycles and their dependencies’ lifecycles under the umbrella of these limited number of SLs and EOLs, especially when trying to keep up with upstream. Please read about the Fedora Modularity project for more argument on this position.
Why Make This Change?
Factory 2.0 is an enabler for the Modularity project and that project will change how a package’s SL and EOL are defined. Since in a modular operating system (OS) a single package is no longer tied to the entire OS distribution, module packagers can now specify which specific packages should be included in the module. This allows for different branches with different SLs and EOLs. For instance, you may have a bug and security fix only branch for a specific version of a package, but you may have another branch that is the same as the upstream master branch for the package. This added flexibility can allow for a package’s SL and EOL to be defined to fulfill the needs of the module rather than just the OS release.
To further clarify this point, there are some graphics from Ralph Bean’s Factory 2.0 presentation from DevConf 2017 which show the proposed changes in branching. Below are two of those graphics; the first illustrates the current branching strategy and the one below illustrates the new branching strategy that will come with Modularity.
To further illustrate, below is a graphic of two versions of a django module and how it utilizes this new branching strategy.
- The 1.9 django module utilizes the 2.12 python-requests branch and the 1.9 python-django branch. The 2.12 python-requests branch in this case would most likely only be updated with bug fix releases (i.e. 2.12.x). The same would apply for the 1.9 python-django branch.
- On the other hand, the 1.10 django module utilizes the master branch of python-requests which means that whenever the module is built, it’ll just be using the latest packaged version available. This introduces a potential risk, as there could in theory be breaking changes to python-requests, but it could be deemed that whatever benefits the latest upstream provides are worth the risk of potential incompatibilities in the future.
These are all decisions that module packagers will make and they should be based on the SL and EOL they specify for their module. Now whether the module packager made the correct decisions is out of the scope of this document and best practices around this should be written by the Modularity team. One thing that is not clear in this graphic is that although these branches are named after versions and it makes logical sense, they do not have to be and could be named anything that the packager of the RPM would like; therefore, modules should choose RPMs because of their EOLs and SLs, not because of the name.
How To Make This Change
Since PkgDB is tied to the workflow of one branch per OS distribution, it needs to be heavily modified or entirely replaced to allow a packager to arbitrarily specify branch names for their package. Since these branches are no longer tied to the lifecycle of the Fedora release, they will need to have SLs and EOLs defined by the packager for the given package. The SLs should be defined through a variety machine readable parameters. The specific parameters will be decided later on and this should be decided upon by the Fedora community before the release of these changes. The Factory 2.0 team thought about modifying PkgDB but Pierre-Yves Chibon/pingou (the author of PkgDB) noted that with Pagure being deployed over dist-git in Fedora (as of this writing it is deployed in staging), it, alongside of PDC, could be a viable option to replace PkgDB.
With Pagure over dist-git and PDC (both with modifications), we could do away with PkgDB entirely for the new branching strategy and retrofit it for older branches (EPEL6/7 and F24/25/26) that we must support. This change would mean that ACLs would no longer be tied to branches but instead to the repository itself. One could either allow any packager with repository commit access to push to any branch, or not allow packagers to push to any branches and instead, only submit pull-requests which can then be merged by maintainers of the repository. To gain rights on the repository, you’d have to be given commit access to the specific package repository.
There are other use cases for PkgDB other than ACL management. For instance, PkgDB is responsible for the implicit EOL and SL information tied to the Fedora release branches (e.g f25), however, this new EOL and SL information needs to be explicit and stored somewhere that is accessible programmatically. There are two approaches to solve this problem that have come up in discussions.
- The first option is to create a Product Definition Center (PDC) endpoint to contain the EOL and SL information. These entries would most likely be requested through a ticket and upon approval, the entry in PDC would be created. Pagure would restrict the creation of new branches to those that have an EOL and SL defined in PDC.
- The second option is to keep a separate git repository with a YAML file containing the EOL and SL information of the package branch. This YAML file cannot be in the same repository as the package because it will then never be able to share the same codebase as another branch, which is a requirement for some packagers.
We decided that option one is the right approach, and here are video demos showing that functionality (the term SLA is used in the video, but this would considered an SL in Fedora):
- PDC SLA APIs (currently in production PDC in Fedora)
- PDC SLA APIs with Modularity
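To make the idea concrete, here is a sketch of how tooling could consume such an endpoint. The base URL, endpoint path, and field names below are assumptions for illustration only, not the final PDC API:

```python
# Sketch: query a PDC-style API for a branch's service levels and EOL.
# The endpoint path and parameter names are hypothetical.

def build_branch_sla_query(base_url, package, branch):
    """Return the URL and query parameters for a branch SLA lookup."""
    url = base_url.rstrip("/") + "/component-branch-slas/"
    params = {"global_component": package, "branch": branch}
    return url, params

def branch_is_active(sla_records, today):
    """A branch is active if any of its SLs has an EOL on or after today."""
    return any(rec["eol"] >= today for rec in sla_records)

if __name__ == "__main__":
    url, params = build_branch_sla_query(
        "https://pdc.example.org/rest_api/v1", "python-requests", "2.12")
    print(url, params)
    # With the requests library one would then do:
    #   requests.get(url, params=params).json()
    records = [{"sla": "security_fixes", "eol": "2018-06-01"},
               {"sla": "bug_fixes", "eol": "2017-12-01"}]
    print(branch_is_active(records, "2017-05-01"))  # True: an EOL is still in the future
```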
PkgDB also has an “Admin Actions” section which is basically a ticketing system that allows new packages to be requested and approved. This could be replaced with a Release Engineering (releng) controlled Pagure repository just for tickets. CLI tools will be provided that make and process these tickets. This method also opens the door for some of these tickets to be automatically processed by a microservice that listens for new ticket messages. This microservice is beyond the scope of this project, but it is worth mentioning as a future enhancement.
PkgDB also keeps track of orphaned packages, which are just packages with no owners. Ideally this functionality would be replaced with a query to Pagure or PDC. Querying Pagure would just be an API call asking for all projects that are owned by the "orphan" user.

Another feature of PkgDB is that it keeps track of retired packages. Retired packages are currently distinguished by a file named “DEAD.package” in their repository. We would continue to do this, but would also mark all of the package's branches as "not active" in PDC to allow this to be queryable programmatically.

PkgDB is also a source for determining the default assignee and carbon copy (CC) list for new Bugzilla bugs. The default assignee of a bug would likely be the owner of the Pagure repository, but this point is up for discussion. As to the second point, a new feature is being added to Pagure to allow a user to watch only issues and PRs, only commits, or both on a project (PR #2255). The users that are at least watching the issues in the Pagure repository would then be CC'd in Bugzilla bugs.

PkgDB is also a source for generating “PackageName-owners@fedoraproject.org” email aliases. The members of these aliases would be determined by the list of users that have commit access on a package repository in Pagure. PkgDB also determines who is allowed to edit Bodhi updates. This could be determined again by querying for the list of users with commit access in Pagure.
The drawback to these Pagure queries is that the queries could be relatively slow and would use up a lot of resources on the Pagure server(s). An idea to alleviate this is to programmatically query Pagure when package ownership changes or a package is retired. The results would then be stored in a JSON file that is accessible through the proxies or available in PDC instead. Further discussion on this topic is needed.
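To illustrate the kind of logic these queries replace, here is a sketch that derives ownership facts from Pagure-style project data. The dictionary shape is a simplification invented for this example; Pagure's real API responses differ:

```python
# Sketch: derive ownership facts from Pagure-style project metadata.
# The dictionary layout is hypothetical; Pagure's real API may differ.

def is_orphaned(project):
    """A package is orphaned when the 'orphan' user owns it."""
    return project["owner"] == "orphan"

def bugzilla_cc_list(project):
    """CC the users who watch issues (or both issues and commits)."""
    return sorted(user for user, watch in project["watchers"].items()
                  if watch in ("issues", "both"))

if __name__ == "__main__":
    project = {
        "owner": "orphan",
        "watchers": {"alice": "issues", "bob": "commits", "carol": "both"},
    }
    print(is_orphaned(project))        # True
    print(bugzilla_cc_list(project))   # ['alice', 'carol']
```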
How This Affects Release Engineering (WIP)
- Need a way to enforce that a module’s SL and EOL is not greater than the lowest common denominator of its components
- “cvsadmins” currently process new package requests with the “pkgdb-admin” CLI tool. This will need to be done in Pagure with a new tool. We got the approval from limb on this.
- When package branches go EOL (on their own terms), Release Engineering will need tooling to make the retirement of those branches happen. Things such as sending emails to the relevant people will need to happen.
- Does the mass branch process go away or do we need a new equivalent version of that process for modules?
Additional Tooling Changes
- PkgDB is often queried to determine what the latest active releases are. This data will need to be stored in PDC instead.
- Zodbot queries PkgDB to figure out the latest release for cookies. It will need to query PDC instead.
- Bodhi queries PkgDB to find the list of critpath packages in a collection. This will need to use PDC instead.
- Not really related to PkgDB, but to branch names: How will the branches be tied to Koji? Currently fedpkg uses the branch name to generate the correct dist tag, and checks if the build already exists based on the generated NVR. The mapping from branch names to dist tags in the tools would have to be updated.
General Work Plan
Below is an itemized list of big tasks that roughly need to happen in order. At what point do we get to decommission pkgdb? What subprojects are blocked on what other subprojects?
- [done] Have the Modularity team and/or Fedora Community devise a machine readable scale for SLs with a series of parameters that include: bugs will be fixed, security issues will be fixed, how fast CVEs will be addressed, non-breaking features will be added, breaking changes will be added. These parameters will either be boolean values or time durations.
- [done] Add a PDC endpoint to allow storing the new EOL and SL information (include ability to retire packages)
- Add the following features to Pagure
- [done] Add the ability to view a repository's owners, admins, and committers through the API
- [done] Add the ability to give ownership of a package
- [done] Add the ability to view a repo's branches through the API
- [done] Add the ability to query projects via namespace through the API
- [in progress at the time of publication] Add the ability to watch issues and PRs, commits, or both.
- Add the ability to generate a user API key to create new issues on a specific project
- Need a git hook to deny creating new branches via git if they don’t also appear in PDC. This needs to apply only to mainline dist-git branches. The “forked” dist-git branches under your personal username should allow arbitrary branches.
- Solidify a strategy for retiring and manually decommissioning branches
- Release a new version of Pagure that sits on top of dist-git
- Offload ticketing of new package requests to pagure.io/SOMEPROJECT
- Use a Pagure repo for this one. Ask releng and “cvsadmins” if they want to use pagure.io/releng or if they want a brand new repo just for new package requests.
- Enforce that all tickets have an EOL and SL for the specified package branch
- Create a CLI tool for the workflow of package requests from Pagure
- [done at this publication but not in use yet] Update the script to generate the “PackageName-owners@fedoraproject.org” mail aliases from the list of package owners to use Pagure
- Restrict who can edit Bodhi updates based on the list of owners in Pagure
- Modify the Bugzilla sync script to set the owner/default cc list in Bugzilla based on Pagure data
- [done at this publication but not in use yet] Modify the Koji sync script that sets the packagelist owner in Koji based on Pagure data
- Lastly, decommission PkgDB | https://fedoraproject.org/w/index.php?title=Infrastructure/Factory2/Focus/ArbitraryBranching&oldid=497228 | CC-MAIN-2019-18 | refinedweb | 2,321 | 59.53 |
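As a sketch of the git hook mentioned in the work plan (the one denying new branches that do not appear in PDC), the core check might look like the following. The namespace convention for personal forks is an assumption, and a real hook would fetch the allowed branches from PDC rather than take them as a parameter:

```python
# Sketch of a pre-receive check: reject new mainline dist-git branches
# that have no SL/EOL entry in PDC. Forks under a username are exempt.

def branch_allowed(repo_namespace, branch, pdc_branches):
    """Allow any branch on forks; require a PDC entry on mainline repos."""
    if repo_namespace.startswith("forks/"):
        return True
    return branch in pdc_branches

if __name__ == "__main__":
    pdc_branches = {"master", "1.9", "2.12"}
    print(branch_allowed("rpms/python-requests", "2.12", pdc_branches))  # True
    print(branch_allowed("rpms/python-requests", "3.0", pdc_branches))   # False
    print(branch_allowed("forks/alice/python-requests", "wip", pdc_branches))  # True
```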
The Façade pattern is used to hide the complexity of a subsystem and provide users with a simple, easy-to-use interface to consume.
Common use cases for Façade are:
- Provide a simple and easy-to-use interface to a backend legacy system.
- Build the public interface for users to consume.
- Abstract the complexity of the system for simplicity and security.
- Depending on the use case, improve performance by reducing frequent call invocations and by providing remote clients a single point of access, grouping related functionality in a subset of classes.
The Façade pattern can be implemented for POJO, stateful, and stateless use cases. However, a stateful façade consumes server resources and is tied to the client during the invocation process, so one needs to be careful that clients do not take up too much processing time, or server resources will be exhausted. Generally, if conversational state needs to be maintained, then a stateful façade can be used.
Generally, the complexity of the logic dictates the length and abstraction of the wrapper methods in the façade.
Facades can also be used to create factory methods.
An example of a Façade could be:

public class Car {
    public void fourCylinderEngine() {
        // Methods that exhibit behavior of a 4-cylinder engine
    }
    public void sixCylinderEngine() {
        // Methods that exhibit behavior of a 6-cylinder engine
    }
}

// Using the facade (note: Java identifiers cannot begin with a digit,
// so the methods are named fourCylinderEngine/sixCylinderEngine)
new Car().fourCylinderEngine();
new Car().sixCylinderEngine();
So I guess this is kind of related to my last question, but I was wondering if there was a way to call a method by using a command line option. Say you had a method like this:
def b
puts "Hello brian"
end
ruby mine.rb -b
Hello brian
There are a lot of ways to do this, depending on the use case. The below code is taken from the Ruby docs with the extra method added.
Realistically you'd probably want a class that handles the different options and encapsulates the method instead of having them at the file scope.
#!/usr/bin/env ruby
require 'optparse'

options = {}
OptionParser.new do |opts|
  opts.banner = "Usage: example.rb [options]"

  opts.on("-b", "Run method 'b'") do |v|
    options[:b] = true
  end
end.parse!

def b
  puts "Hello Brian"
end

if options[:b]
  b
end
I've also added a shebang at the top that will automatically call ruby. As long as your script file is executable you can call it directly (i.e. ./mine.rb -b).
<?xml version="1.0" encoding="utf-8"?>
<mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Script>
        <![CDATA[
            import mx.controls.Alert;

            public var storedUrl:String = "";

            private function locChangeHandler(e:Event):void {
                if (htmlbox.location != storedUrl) {
                    htmlbox.location = storedUrl;
                    //Alert.show('Blocked');
                }
            }
        ]]>
    </mx:Script>
    <mx:HTML id="htmlbox" locationChange="locChangeHandler(event)"/>
    <mx:Button/>
    <mx:Button/>
</mx:WindowedApplication>

I've accepted the answer - thank you! - but I have a further comment/question you might have some insight into? I'll post it once this is closed.
Thanks!
Terry
Thanks for the solution. I have been remiss about getting back here and updating this question (sorry!), because I did work out ALMOST this answer myself. This stops the change of location, but does so by refreshing the current location (the state/DOM of which might have changed in the meantime, so that's not ideal). When searching for help though I came across a developer who noted that if an attempt to find a new location fails, the load silently fails. No error message. So my solution has been to try and load something like "httpy://nonsense_url" in response to a location change, and this has worked perfectly. Still seems astonishing, though, that a simple cancelLoad() does not work?
But this leads to a further question, which would make this solution even more useful to me. All one "intercepts" here is the URL. Any ideas re how to intercept the entire HTTP Request. By which I mean, including POST data (if any)?
Thanks in advance!
Terry | https://www.experts-exchange.com/questions/23787171/Need-to-intercept-URLRequests-from-pages-loaded-into-HTMLLoader.html | CC-MAIN-2018-22 | refinedweb | 253 | 69.48 |
The SqlMetal command-line tool generates code and mapping for the LINQ to SQL component of the .NET Framework.
Developers who use Visual Studio can also use the Object Relational Designer to generate entity classes. The command-line approach scales well for large databases. Because SqlMetal is a command-line tool, you can use it in a build process. For more information, see Object Relational Designer (O/R Designer).
sqlmetal [options] [<input file>]
To view the most current option list, type sqlmetal /? at a command prompt from the installed location.
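For orientation, here are a few representative invocations (server, database, and file names are placeholders):

```
rem Generate C# entity classes directly from a database:
sqlmetal /server:myserver /database:northwind /code:nwind.cs /namespace:nwind

rem Generate a .dbml intermediate file, including views and stored procedures:
sqlmetal /server:myserver /database:northwind /views /sprocs /dbml:northwind.dbml

rem Generate code from a SQL Server Express .mdf file:
sqlmetal /code:"c:\northwind.cs" /language:csharp "c:\northwnd.mdf"
```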
Connection Options
/server:<name>
Specifies database server name.

/database:<name>
Specifies database catalog on server.

/user:<name>
Specifies logon user id. Default value: Use Windows authentication.

/password:<password>
Specifies logon password. Default value: Use Windows authentication.

/conn:<connection string>
Specifies database connection string. Cannot be used with /server, /database, /user, or /password options.
Do not include the file name in the connection string. Instead, add the file name to the command line as the input file. For example, the following line specifies "c:\northwnd.mdf" as the input file: sqlmetal /code:"c:\northwind.cs" /language:csharp "c:\northwnd.mdf".

/timeout:<seconds>
Specifies time-out value when SqlMetal accesses the database. Default value: 0 (that is, no time limit).
Extraction options
/views
Extracts database views.
/functions
Extracts database functions.
/sprocs
Extracts stored procedures.
Output options
/dbml[:file]
Sends output as .dbml. Cannot be used with /map option.

/code[:file]
Sends output as source code. Cannot be used with /dbml option.

/map[:file]
Generates an XML mapping file instead of attributes. Cannot be used with /dbml option.
Miscellaneous
/language:<language>
Specifies source code language. Valid <language>: vb, csharp. Default value: Derived from extension on code file name.

/namespace:<name>
Specifies namespace of the generated code. Default value: no namespace.

/context:<type>
Specifies name of data context class. Default value: Derived from database name.

/entitybase:<type>
Specifies the base class of the entity classes in the generated code. Default value: Entities have no base class.

/pluralize
Automatically pluralizes or singularizes class and member names. This option is available only in the U.S. English version.

/serialization:<option>
Generates serializable classes. Valid <option>: None, Unidirectional. Default value: None. For more information, see Serialization (LINQ to SQL).
Input File
<input file>
Specifies a SQL Server Express .mdf file, a SQL Server Compact 3.5 .sdf file, or a .dbml intermediate file.
I work daily in the Microsoft Dynamics GP database, which has 25,000 stored procs. I have anywhere from 10 to 500 procs that I add to the database.
If I just create a dbml file, it takes over 10 minutes to read through all of them.
It would be nice if the /sprocs switch allowed you to say 'only import sprocs that begin with "sp_" ', or something like that...
I agree with Steve Gray, and would like the capability of a @file for each of the /views /functions etc. switches so we could enumerate just the artifacts we want. Supporting wildcards would be cool within these files, too.
Pretty standard pattern.... /sprocs @sprocs.txt
Provide a command line switch to specify which tables are code genned. | http://msdn.microsoft.com/en-us/library/bb386987.aspx | crawl-002 | refinedweb | 505 | 62.95 |
Remove Favorites, Libraries, and Homegroup from Navigation Pane
Please let me know if you find this useful.
Enjoy your clean Navigation pane!
-Noel
- Edited by Noel Carboni Tuesday, September 6, 2011 1:56 AM
General discussion
All replies
- You might get an error message in regedit:
"Cannot edit Attributes. Error writing the value's new contents".
This is because the administrator user does not have permissions. To solve, do this in regedit:
1) Right click ShellFolder and choose "Permissions..."
2) In the list of users ("Group or usernames:") choose Administrators.
3) In the box "Permissions for Administrators" (box under the Add... and Remove buttons), check the Full Control / Allow check box.
4) Click OK.
Now edit the key values as described above.
Sean
- Thank you Noel and Sean. I am so glad to get rid of Favorites and Libraries as I never used either.
I ended up creating 4 .reg files so I can do add or remove these as needed. No logoff/logon needed either.
Rich
- Never mind, I think I found the answer: How to: Use a Script to Change Registry Permissions from the Command Line
Rich
- THANK YOU !!! This change greatly reduces my annoyance with 7 - the other big improvement being the recreation of the classic start menu from XP using cs-menu from
- I encountered a problem with Internet Explorer (8) after making these Registry changes - clicking the 'Browse' button on any file input element caused the tab to crash then recover itself, every time. Starting IE in admin mode made no difference.
For some reason, I did not immediately connect this behaviour with the Registry changes I had made, and after disabling Add-Ons and then completely resetting IE did not work, I ended up running a registry repair (via the Windows OneCare scanner). It found and fixed some 130 or so errors (this on a system about 3 weeks old). But it still did not fix the problem.
Finally the penny dropped, and I restored these two Registry entries to their original values. Problem solved.
But even better, the jumping navigator-tree behaviour that prompted me to make these changes in the first place appears to have vanished. I wonder if it will come back...? If not, I would assume the registry cleanup fixed the problem, so it might be worth a try.
Another useful trick here is to change the default location when you first open Explorer. You just pin Explorer to the taskbar and change the location the shortcut points to; to make it go to Computer, change it to:
%SystemRoot%\explorer.exe /e,::{20D04FE0-3AEA-1069-A2D8-08002B30309D}
more detailed instructions:
- Well, I did some more tests for my "save as" crash problem in Notepad and it consistently happens for .txt files on the desktop when the "remove favorites" tweak is activated, anywhere else "save as" works fine. Maybe it has something to do with the Desktop being one of favorite's links. My Win7 is x64, btw.
I'm curious how come nobody has mentioned the GPO "Turn off Windows Libraries features that rely on indexed file data" which is located here:
User Configuration – Administrative Templates – Windows Components – Windows Explorer
Has it not been effective?
It sets the following key to 1:
HKCU\Software\Policies\Microsoft\Windows\Explorer\DisableIndexedLibraryExperience = 1
The description of this GPO is here:
Oddly enough, it doesn't get rid of the "Libraries" icon from appearing in the Explorer, but it does stop the "non-indexed location" error messages from popping up.
I'm really hesitant to use too many registry hacks in our environment because they will be affecting a lot of computers. How stable has the "disable library" registry update listed above been for everyone? I really hope that Microsoft comes up with a more comprehensive solution to disable Libraries until the enterprise is able to catch up.
Thanks!
Mike
I use a tool named "wenpcgf" provided for free at the end of the following blog post.
I needed to kill and restart explorer.exe to make the changes effective. These can be done via Task Manager as explained below.
Great post Noel, thanks.
How to 'bounce' explorer.exe:
Start task manager (control-shift-escape will do it)
Switch to the Processes tab and find "explorer.exe"
Select explorer.exe and hit "End Process". Confirm. Note that your taskbar will vanish.
Start it again by selecting File->New task(Run)...
In the Create New Task dialogue box type "explorer.exe" and hit return. Taskbar is back.
Here is a .bat to merge multiple .reg files into the Registry from the same directory.
Usage:
1) Create a directory;
2) Save you .reg files to the directory;
3) Save the below as a "Reg_It.bat" into the directory;
4) To merge the .reg files into the Registry in Vista and up environment:
right-click on "Reg_It.bat" and select "Run as administrator";
Script merges the .reg files and restarts the Windows Explorer to pick up registry changes.
Some Registry settings might still require a system restart to become active.
All the usual warnings and precautions for altering the Registry apply when you use this script.
As normal, I test all my .reg files separately before gathering them into one directory and using "Reg_It.bat".
:: --------------------------------------------------------------------------
::------------------------------------------------------------------
:: Run this file to:
::
:: 1) Merge all .reg file from this directory
::
:: 2) Restart Explorer to pick up the new settings
::
:: 25/11/2007
:: ------------------------------------------------------------------
pushd %~dp0&::
for /r %%a in (*.reg) do %ComSpec% /c start /wait %SystemRoot%\regedit /s "%%a"
popd
::------------------------------------------------------------------
:: To update Explorer with the new Registry settings:
taskkill /F /IM "explorer.exe"& :: Close all Explorer instances
start explorer.exe& :: Start the desktop instance
::------------------------------------------------------------------
explorer /e, %~dp0& :: Start Explorer in the same location
@exit
:: --------------------------------------------------------------------------
:: --------------------------------------------------------------------------
To Noel: Thank you, Noel. I tried to add some description to the above. Completely agree, it is a rather sharp tool and it requires a steady hand and caution, as anything to do with the Registry does. The tool has been in use for a number of years.
Just entering all the registry files in the current folder into the registry seems prone to error.
You should at least make it more obvious that's what it does. "Merge all .reg files from the same directory" isn't NEARLY a clear enough description!
-Noel
Where did you get your information from? Why are you changing five bits in the first Attributes field but only one bit in the second Attributes field? Are you sure it is supposed to be a9400100?
It helps to go back and look at what each bit actually does. I found the relevant MSDN Library article here:
Here's the original flag set for the two relevant portions of the DWORD:
SFGAO_NONENUMERATED | SFGAO_STORAGEANCESTOR
Here's the new flag set for the same portions of the DWORD:
SFGAO_VALIDATE | SFGAO_BROWSABLE | SFGAO_STREAM
Yeah, that just looks plain wrong to me. I have no idea what the "correct" value should be, but it probably isn't this. SFGAO_STORAGEANCESTOR (Children of this item are accessible through IStream or IStorage. Those children are flagged with SFGAO_STORAGE or SFGAO_STREAM.) Looks like whoever came up with this "solution" wanted to turn the root object into something that claimed to be browseable but force-validated by the shell, which would fail and cause it to not show up some places but will definitely cause crashing elsewhere in unsuspecting applications.Here's what concerns me most: 0x00400000 is SFGAO_STREAM (Indicates that the item has a stream associated with it. That stream can be accessed through a call to IShellFolder::BindToObject or IShellItem::BindToHandler with IID_IStream in the riid parameter.)
Suddenly this Shell extension is saying to other parts of the OS that it supports functionality that it didn't previously claim to expose. I'm not surprised that some people are experiencing crashes.Let's look at the second "fix". It changes only one bit, so I suspect it actually works better. 0x00100000 is added: SFGAO_NONENUMERATED (The items are nonenumerated items and should be hidden.)
That looks a LOT more promising. Unfortunately, that bit is already set on Favorites.
The fact that there are people with crashing issues suggests that one (or both) of these fixes is broken and therefore this whole mess should just be avoided - Libraries seems to use physical file system, so that change might be safe, whereas Favorites (ab)uses namespaces, so getting rid of it correctly is going to be a LOT harder than just changing its Attributes. As a software developer who has been working with Windows internals since Win95, applying either fix would likely reduce the stability of my environment.
The simplest solution is to just collapse both trees. They still occupy some space but at least will stay out of the way (and you won't run into the issue of crashing other apps).
- Edited by Tiger Litmus Tuesday, September 6, 2011 12:32 AM
I got these values by a) reading what others had done, and b) additional experimentation. I did not find the documentation you have shown. Are you sure it's pertinent for both of these fields?
Since my original post I have personally reinstated the original "Favorites" registry value, as that seemed to be the source of crashes under certain conditions. Putting it back (and as you say, keeping it collapsed) has proven a good compromise.
I can assure you the listed tweaks other than for Favorites are perfectly stable. Libraries and Homegroup: GOOD RIDDANCE!
-Noel
The reason you didn't find the documentation is because the documentation on Shell Folders is buried rather deep in MSDN Library and that particular link I dug up was obscured by layers of other documentation...and I was looking specifically for it.
Shell Folders Attributes is what the SFGAO_ constants are. SFGAO probably stands for "Shell Folders Get Attributes Of" or something like that and is directly related to the IShellFolder::GetAttributesOf() function. Also, the Registry path includes 'ShellFolders', which was the clue I needed to figure out the correct search terms to use. I was originally thinking they were some weird file attributes thing, then I remembered that the Windows Shell does things differently. Anyway, yes, I'm sure the MSDN Library article I mentioned is pertinent for both Attributes fields.
The Favorites fix is just dangerous - as you've discovered. The Libraries one is possibly safe but I'm not going to apply that one given that whoever originally came up with it also likely came up with the Favorites non-fix. I apparently already did the Homegroup fix - that one is definitely safe to remove as it is a standard set of Windows Services rather than a built-in Shell COM object - they are two completely different things..
Great idea; I did that even before you posted. :)
-Noel
From my self experiments I discovered below keys that control the Navigation Pane Per User in Open and Save As Dialog boxes on Windows server 2008 and Windows 7.
To Disable Navigation Pane:
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\CIDOpen\Modules\GlobalSettings\Sizer]
"PageSpaceControlSizer"=hex:00,00,00,00,00,00,00,00,00,00,00
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\CIDSave\Modules\GlobalSettings\Sizer]
"PageSpaceControlSizer"=hex:00,00,00,00,00,00,00,00,00,00,00,00
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Modules\GlobalSettings\Sizer]
"PageSpaceControlSizer"=hex:d0,00,00,00,00,00,00,00,00,00,00,00
To Enable Navigation Pane:
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\CIDOpen\Modules\GlobalSettings\Sizer]
"PageSpaceControlSizer"=hex:d0,00,00,00,01,00,00,00,00,00,00,00
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\CIDSave\Modules\GlobalSettings\Sizer]
"PageSpaceControlSizer"=hex:d0,00,00,00,01,00,00,00,00,00,00,00
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Modules\GlobalSettings\Sizer]
"PageSpaceControlSizer"=hex:d0,00,00,00,01,00,00,00,00,00,00,00
More Details: Registry Tweak to Hide/Remove/Disable Navigation Pane in Windows Explorer, File Open, File Save As Common Dialog Boxes of Windows server 2008 and Windows 7
You are the Knowledge You have…
MyWordPress; MyBlogSpot; MyMicrosoft; MyCitrix;
MyVMWare; MySymantec; MyLinkedIn; MyFaceBook; MyGReader;
- Edited by Govardhan Gunnala Monday, October 17, 2011 8:59 AM
Thanks for the tip. I started down the path of trying to figure out how to remove Libraries because it is always the first thing highlighted in the navigation pane when you bring up a Save As dialog. Unfortunately, now that I've removed Libraries, it just highlights Computer. I use Favorites all the time... any idea if there's a way to make the root of Favorites the default selection when you open that dialog?
Thanks in advance!
Enjoy your clean Navigation pane!
I am happy to report that the tweaks for removing Libraries and Homegroup work in Windows 8 and now there's the ability to remove Favorites by a sanctioned setting as well, so one can have an Explorer navigation pane that's not cluttered up with a bunch of root namespaces that don't really work right anyway.
-Noel
Detailed how-to in my new eBook: Configure The Windows 7 "To Work" Options
In my navigation pane I have
> What is the container "Jack Schmidt"? (It's a fake name btw.)
It sits outside the Libraries but seems to be a library.
It contains all the folders under C:\User\jschmidt plus a folder under the U:\ drive that I have designated as My Documents
My "My Documents" (or Windows 7 equivalent) is a network drive folder under U:\.
The system tells me I cannot add network locations that are not indexed.
Our network is not indexed, but a folder inside my My Documents is in a library.
Also, sometimes it's there, sometimes it isn't... WTF!
Yes, is there any way to hide the libraries from IE? We need to run IE as a RemoteApp but the server's local libraries still show up. We did get rid of the driver letters, so we're halfway there. What good is RDS if you can't hide the server's folders from users?
I'm also finding that the libraries show when windows explorer is first launched in the server console, but they go away when clicking on the computer link in the left panel. Weird.
Thanks,
Russell
Hello Russel,
Have u ever found a fix for this issue?
I am designing a image for a school network with both RDS and Win 7 x64 workstations and I have the same problem.
Curious if you know the fix.
Thanks in advance.
With kind regards, René de Meijer. MIEGroup. | https://social.technet.microsoft.com/Forums/windows/en-US/ac419c2b-4a38-44f0-b1f0-b0ed9fdcfdeb/remove-favorites-libraries-and-homegroup-from-navigation-pane?forum=w7itproui | CC-MAIN-2019-22 | refinedweb | 2,430 | 64.1 |
Introduction
Here I will explain how can we get username and userid of currently logged user in asp.net membership.
Description
I am working on application by using login control in asp.net at that time I got requirement like getting username and user id of logged in user for that I have written code like this to get logged in user username and userid in asp.net membership.
By using above code we can get currently logged user in login control
Here i written much code like from Membership i am getting currently logged in username no need to use MembershipUser event to get currently logged in user we have another simple way to get currently logged in username you just define like this in your page
Here i written much code like from Membership i am getting currently logged in username no need to use MembershipUser event to get currently logged in user we have another simple way to get currently logged in username you just define like this in your page
22 comments :
Hi,
Thanks Really Useful information.
This code helps me a lot- "Page.User.Identity.Name;"
It working perfect. Now i can display the current user name on Home page. But i got a small issue....It display whole user email id on home page like Welcome xyz@gmail.com BUt i want to display only XYZ on home page is it possible using this logic? If so, please reply....
Thank you...
-Ajay
@Ajay..
Split your username with @ operator like this
string strData = "suresh@gmail.com";
char[] separator = new char[] { '@' };
string[] strSplitArr = strData.Split(separator);
string username=strSplitArr[0]
After split your username as shown above you will get username
Thanks
Dear Suresh,
Your Blog is very helpful, God Bless.
I request you to please share how we can extract image file from database into crystal report.
Also write detail blog on crystal reports designing and functionality
Thank you
Imtiaz
if the user forget his password then after submitting his email address how the email address authenticated with database and after authentication sends the login details in the email
Do one thing for any user if forgot password ..i am givinu u code...first store the user name or password on database then if any user may forget password then the password will send him via email id.....this code definitely work try it....
...........................
C# code....
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Net.Mail;
using System.Net;
using System.Data.SqlClient;
using System.Configuration;
using System.Data;
public partial class _Default : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
}
protected void btnPass_Click(object sender, EventArgs e)
{
//Create Connection String And SQL Statement
//string strConnection = ConfigurationManager.ConnectionStrings["goodwillConnectionString"].ConnectionString;
string strSelect = "SELECT UserName,Password FROM Users WHERE Email = @Email";
SqlConnection connection = new SqlConnection("Data Source=ABHISHEK-PC\\SA;Integrated Security=true;Initial Catalog=goodwill");
connection.Open();
SqlCommand command = new SqlCommand();
command.Connection = connection;
command.CommandType = CommandType.Text;
command.CommandText = strSelect;
SqlParameter email = new SqlParameter("@Email", SqlDbType.VarChar, 50);
command.Parameters.Add(email);
//Create Dataset to store results and DataAdapter to fill Dataset
DataSet dsPwd = new DataSet();
SqlDataAdapter dAdapter = new SqlDataAdapter(command);
dAdapter.Fill(dsPwd);
connection.Close();
if(dsPwd.Tables[0].Rows.Count > 0 )
{
MailMessage loginInfo = new MailMessage();
loginInfo.To.Add(txtEmail.Text.ToString());
loginInfo.From = new MailAddress("id@gmail.com");
loginInfo.Subject = "Forgot Password Information";
loginInfo.Body = "Username: " + dsPwd.Tables[0].Rows[0]["UserName"] + "
Password: " + dsPwd.Tables[0].Rows[0]["Password"] + "
";
loginInfo.IsBodyHtml = true;
SmtpClient smtp = new SmtpClient();
smtp.Host = "smtp.gmail.com";
smtp.Port = 587;
smtp.EnableSsl = true;
smtp.Credentials = new System.Net.NetworkCredential("abc@gmail.com", "pwd");
smtp.Send(loginInfo);
lblMessage.Text = "Password is sent to you email id,you can now ";
}
else
{
lblMessage.Text = "Email Address Not Registered";
}
}
Hi, Can you please suggest me ASP-c# snippet to get list of users logged in currently?
Hai Mr.Suresh...
After Successfully Login,how to get the logined username into home page lable.
Thanx man u help me a lot in every a problem i face...!
Hi Mr. Suresh...
I am a newbie to asp.net techonology. I have a query that how can we diplay user picture to image control (image control is on master page) on which he had uploaded during registration on the users successful login???
Thank-u in advance!!!
Prevent Simultaneous Logins by a Single User ID ... How to do that
Hi Suresh annay..I love your articles very much ..those are helping me alot..I need a google login with userdetails in my website.so pls forward me as soon as possible .
Thank you soo much annay..
Hi Suresh,
Im not getting username from Membership.GetUser() . Kindly give complete source code.
Thanks for sharing your knowledge with us. Your complete series of articles on .net, c# are very helpful for me . For my company's any project I prefer to refer your articles when needed. Keep sharing
thank you so much sir | https://www.aspdotnet-suresh.com/2011/01/how-can-we-get-username-and-userid-of.html | CC-MAIN-2019-22 | refinedweb | 842 | 52.15 |
First Class: Graph.java
public class Graph {
public void generateGraph(int VertexNum, int numOfEdges) throws ZeroVerticesException, DisjointGraphException{
//Statements
}
//More methods
}
public class prims {
Graph g=new Graph();
g.generateGraph(10,20); // Error here
}
In Java, your code statements should be part of some method or a block
In your case, at least main method, if that is your main class.
The code can't just be inside class body, outside any block
Following code should probably help what you are trying to achieve
public class prims { public static void main(String args[]) { Graph g=new Graph(); g.generateGraph(10,20); // Error here } } | https://codedump.io/share/9h03TxDeO1oO/1/java-cannot-access-methods-from-a-different-class | CC-MAIN-2017-34 | refinedweb | 102 | 61.67 |
Learn an amazing programming language as you build a neural network from scratch.
This workshop inspired this YouTube video series on learning APL with neural networks.
The aim of this workshop is to introduce people to the APL programming language, with the first contact geared towards building a neural network from scratch. It helps if the audience has some programming knowledge (in no programming language in particular) and has heard of neural networks a bit, but that is not necessary.
There are two reasons why I use neural networks to introduce APL to newcomers in this workshop:
The objective of the workshop is to make incremental improvements to a namespace that eventually contains enough functionality to create a neural network that can be trained on the MNIST data (
mnistdata.rar) and classify handwritten digits.
That is, the neural network will receive input images like the ones below and should be able to identify the digit in the image.
For that matter, here is the standard order in which things get done in the workshop (this lines up almost perfectly with the order in which objects appear in
NeuralNets.apln):
The number of people attending the workshop, their previous knowledge of APL and neural networks and other related factors impacts how much we manage to accomplish.
If the list is exhausted within the time allotted for the workshop, here's a couple of follow-ups with little opportunity cost to start:
By the end of the workshop, attendees will have a (close to finished) neural network written in a programming language they probably never dealt with, APL.
Attendees will have dabbled for the first time with a purely array-oriented programming language and built a popular, modern-day machine learning model from scratch.
Finally, their own implementation of a neural network can be trained in less than 2 minutes to recognise handwritten digits with 89% accuracy (timed on my laptop). Here is an example of some drawn digits and the neural network's guesses.
@@ @@@ @@@ @@@ @@@@ @@@@@@@@ @@@@ @ @@@@@@@@@@ @@ @ @@ @@@ @@@@@@@@ @@@@@@@ @@@ @@@@@@@ @@ @@ @@@ @@@@@@@@@@@@@ @@@@@ @@@ @@@@@@@@ @@@ @@@ @@@@ @@@@@@@@@@@@@ @@@ @@@ @@@@ @@@ @@@ @@@ @@@ @@@@@@@@@@@ @@@ @@@ @@@ @@@ @@@ @@@ @@@@ @@@@ @@@ @@ @@@ @@ @@@ @@@@ @@@@@ @@@ @@@ @@@ @@@@ @@@ @@@ @@@@@@ @@@@@ @@@@ @@@ @@@@ @@@@ @@@ @@@ @@@@@@@ @@@@ @@@@ @@@@ @@@@@@@ @@@@@ @@@ @@@ @@@@ @@ @@@@@ @@@@@ @@@@@@@@@@@@@ @@@@@@@@@@ @@ @@@@ @@ @@@@ @@@@@@@@ @@@@@@@@ @@@ @@@@@@@@ @@@@@@ @@@ @@@@@ @@@@@@@@@@@@@ @@ @@@ @@@ @@@@@@ @@@@@ @@@@@ @@@@@@@@ @@@@@@ @@ @@@ @@@@@@@@@@ @@@@ @@@@@@@ @@@@ @@@ @@ @@@@@@@@ @@@@ @@@@@@@ @@ @@ @@@ @@@@ @@@@ @@@@@@ @@@ @@@ @@@@ @@@@ @@@ @@ @@@@ @@@ @@@ @@@ @@@@ @@@ @@@ @@@ @@@ @@ guessing 6 guessing 7 guessing 2 guessing 9 guessing 4
Here is what some people had to say about the contents of the workshop and the way I led it:
“It was amazing what we did in just 2 hours [...] In the end I was tired but satisfied with, and fascinated by what I had learned and built.” – João Afonso
“The best thing was to have this hands-on approach to learning a new programming language” – Carlos
“I really enjoyed the simple and accessible way in which it was taught” – Anonymous
Other than the code with the reference implementation (available in this GitHub repository), here are some links that might be useful: | https://mathspp.com/training/workshops/learn-apl-with-neural-networks | CC-MAIN-2022-05 | refinedweb | 445 | 52.23 |
p.s. This is not an end-to-end guide. I documented my journey and figured I would publish what I had time to write up instead of vaulting this knowledge in our private Knowledgebase. I also happened to put a tech talk together, so between the video and the content below I hope it helps you create your own Event Tracking and Analytics on AWS.
AWS SDK Initializer
Since we only need DynamoDB add to your Gemfile:
gem 'aws-sdk-dynamodb'
To make it easier to work with the SDK I have the following in an initializer:
RAILS_ROOT/config/initializers/aws.rb
You will notice I am aggressively setting credentials. The SDK is supposed to pick up these Environment Variables implicitly, but I found in practice it did not when I wrote this. Maybe you don't have to be as verbose as I am here.
creds = Aws::Credentials.new(
  ENV['AWS_ACCESS_KEY_ID'],
  ENV['AWS_SECRET_ACCESS_KEY']
)
Aws.config.update credentials: creds

module DynamoDB
  def self.resource
    @@dynamodb ||= Aws::DynamoDB::Resource.new({
      region: 'us-east-1',
      credentials: Aws::Credentials.new(
        ENV['AWS_ACCESS_KEY_ID'],
        ENV['AWS_SECRET_ACCESS_KEY']
      )})
    @@dynamodb
  end
end
Probably should be storing the region as an Environment Variable in Figaro
When we want to use DynamoDB all we have to do is the following:
DynamoDB.resource.client.put_item({
  # ...
})
Primary and Sort
Highly unique ids such as User IDs make good Primary keys since they distribute better across partitions.
Dates make very good Sort keys. When queried, your table is returned in ascending order of the Sort key. Explore the DynamoDB table explorer so you have an idea of the limitations of how you can filter.
Notice for Primary we only have the ability to do
= and for Sort we have many options.
There are more advanced filter options in the documentation if you can make sense of it.
Tracker
First I define how I want to use my tracker before writing a module.
So this is what writing to DynamoDB will look like.
Putting data:
Tracker::Put.event({
  user_id: user.id,
  event_type: 'login',
  user_agent: request.user_agent,
  ip_address: request.remote_ip
})
Getting Data
@recent_activity = Tracker::Get.recent_activity @model.id
For putting data, you probably want to wrap this in an ActiveJob, since having these event calls littered throughout your application can cause the code to block, resulting in latency for your users. I think DynamoDB blocks as it waits for a response even though we don't need one.
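A minimal fire-and-forget sketch of that idea using a plain Thread (an ActiveJob would be the more robust choice in Rails). `Tracker::Put` is stubbed out here so the sketch is self-contained:

```ruby
require 'json'

# Stand-in for the real Tracker::Put so this sketch runs on its own.
module Tracker
  class Put
    def self.event(attrs = {})
      # Imagine DynamoDB.resource.client.put_item(...) happening here.
      attrs
    end
  end
end

# Fire-and-forget: the request thread returns immediately while the put
# happens in the background. Errors are swallowed on purpose, since a
# dropped tracking event is acceptable but a slow request is not.
def track_async(attrs)
  Thread.new do
    begin
      Tracker::Put.event(attrs)
    rescue StandardError => e
      warn({ tracker_error: e.message }.to_json)
    end
  end
end

thread = track_async(user_id: 1, event_type: 'login')
thread.join # joined here only so the example finishes deterministically
```

In a real Rails app you would enqueue an ActiveJob instead, so retries and instrumentation come for free.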
I created a new module in my lib directory eg.
RAILS_ROOT/lib/tracker.rb
# This class is responsible for writing event data
# to various DynamoDB tables and fetching that data
# for display.
module Tracker
  class Entity
    include ActiveModel::Validations

    def initialize(opts={})
      opts.each { |k,v| instance_variable_set("@#{k}", v) }
    end

    attr_accessor :user_id, :event_type, :user_agent,
                  :ip_address, :event_at, :event_id

    validates :user_id    , presence: true, numericality: { only_integer: true }
    validates :event_type , presence: true, inclusion: { in: %w( login material-view )}
    validates :user_agent , presence: true
    validates :ip_address , presence: true
    validates :event_at   , presence: true

    def event_at
      @event_at || Time.now.iso8601
    end
  end

  class Put
    def self.event attrs={}
      entity = Tracker::Entity.new attrs
      unless entity.valid?
        raise ArgumentError, "Tracker::Entity attributes invalid"
      end
      DynamoDB.resource.client.put_item({
        item: {
          'user_id'    => entity.user_id,
          'ip_address' => entity.ip_address,
          'user_agent' => entity.user_agent,
          'event_id'   => entity.event_id,
          'event_type' => entity.event_type,
          'event_at'   => Time.now.iso8601 # sort key
        },
        # We don't care about returning consumed capacity.
        # We can handle losing event tracking data and
        # don't need to be alerted.
        return_consumed_capacity: 'NONE',
        table_name: 'exampro-events'
      })
    end
  end ## Put

  class Get
    def self.recent_activity user_id
      result = DynamoDB.resource.client.query({
        expression_attribute_values: { ":user_id" => user_id },
        key_condition_expression: "user_id = :user_id",
        projection_expression: 'ip_address,event_type,event_at,user_agent',
        scan_index_forward: false, # DESC instead of ASC
        limit: 50,
        table_name: 'exampro-events'
      })
      result.items
    end

    def self.logins user_id, event_type
      result = DynamoDB.resource.client.query({
        expression_attribute_values: {
          ":user_id"    => user_id,
          ":event_type" => event_type
        },
        key_condition_expression: "user_id = :user_id",
        filter_expression: "event_type = :event_type",
        projection_expression: 'ip_address,event_type,event_at,user_agent',
        scan_index_forward: false,
        limit: 10,
        table_name: 'exampro-events'
      })
      result.items
    end
  end ## Get
end
Tracker::Entity
I have this Entity class. Its purpose is to validate the format of arguments. I would probably enrich this further in the future with a metadata attribute.
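The mass-assignment trick in `initialize` (walking the options hash with `instance_variable_set`) is easier to see without the ActiveModel dependency. Here is a simplified stand-alone sketch of the same pattern, not the real class:

```ruby
require 'time'

class EventEntity
  EVENT_TYPES = %w(login material-view).freeze

  attr_accessor :user_id, :event_type, :user_agent, :ip_address, :event_at

  # Mass-assign: each key in opts becomes an instance variable.
  def initialize(opts = {})
    opts.each { |k, v| instance_variable_set("@#{k}", v) }
  end

  # Default the sort key to "now" in ISO 8601, like the real Entity.
  def event_at
    @event_at || Time.now.iso8601
  end

  # Hand-rolled stand-in for the ActiveModel validations.
  def valid?
    user_id.is_a?(Integer) &&
      EVENT_TYPES.include?(event_type) &&
      !user_agent.to_s.empty? &&
      !ip_address.to_s.empty?
  end
end
```

The real class gets richer error messages from ActiveModel::Validations; the sketch only answers valid-or-not.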
Tracker::Put
I have a class called
Put which is for writing to DynamoDB. Currently it only has one method, but I may add more in the future.
Tracker::Get
I have another class called
Get which queries data from
DynamoDB.
DateTime as String
Another thing to note is that I am converting the time to a string
Time.now.iso8601. DynamoDB does not have a DateTime datatype.
This StackOverflow answer does a good job explaining what to consider when choosing what format to use for your dates.
I care about readability so ISO 8601 is a good format.
I don't care about using TTL (Time to live) since I don't need to expire records from my DynamoDB to prune the DB.
You can have DynamoDB stream only TTL events, which is interesting.
What matters most is when filtering the date I can use the
BETWEEN operator to filter between two values.
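The reason ISO 8601 works here is that its strings compare lexicographically in chronological order (given a consistent zone and precision), so DynamoDB's plain string comparison, and therefore BETWEEN, behaves correctly. A quick plain-Ruby demonstration:

```ruby
require 'time'

timestamps = [
  '2019-06-30T11:17:05+00:00',
  '2019-01-02T09:00:00+00:00',
  '2019-06-30T11:17:04+00:00'
]

# Plain string sort equals chronological sort for ISO 8601 strings
# in the same zone and precision.
sorted = timestamps.sort

# BETWEEN on a string sort key is just a lexicographic range check.
def between?(value, from, to)
  value >= from && value <= to
end

june_events = sorted.select do |t|
  between?(t, '2019-06-01T00:00:00+00:00', '2019-06-30T23:59:59+00:00')
end
```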
scan_index_forward
We are using
scan_index_forward: false to change the sort to be DESC instead of ASC.
projection_expression
We only want specific attributes returned from the database, so that is the purpose of:
projection_expression: 'ip_address,event_type,event_at,user_agent'
return_consumed_capacity
We are using
return_consumed_capacity: 'NONE' because I don't care about getting a response back. If there was a capacity issue I have an alarm where I would take action. Since this is event data I don't care if some event tracking is dropped.
DeviceDetector
We are passing our user_agent through the DeviceDetector gem, e.g.
DeviceDetector.new(t['user_agent'])
This way, in our app's dashboard I can get human-readable values such as whether they are on a phone/desktop, Windows/Mac, or which web browser they're using.
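To get a feel for what DeviceDetector is doing, here is a deliberately naive regex-based stand-in. It is nowhere near as accurate as the gem, which ships a maintained device database:

```ruby
# Very rough stand-in for DeviceDetector. Real user-agent parsing needs
# a maintained database of devices, which is exactly why the gem exists.
def naive_device_info(user_agent)
  device = user_agent =~ /Mobile|iPhone|Android/ ? 'smartphone' : 'desktop'
  os =
    case user_agent
    when /Windows/  then 'Windows'
    when /Mac OS X/ then 'macOS'
    when /Linux/    then 'Linux'
    else 'unknown'
    end
  { device: device, os: os }
end
```

In the real dashboard you would keep using `DeviceDetector.new(ua).device_type`, `.os_name`, and `.name`; this sketch is only to show the shape of the output.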
DynamoDB
Enabling DynamoDB Streams
We are going to need to turn on DynamoDB streams.
To have streams trigger a lambda under the Triggers tab we will add an existing function. You may need to click more to find this Triggers tab.
When a record is inserted into DynamoDB, Streams will allow us to pass the puts in batches to a Lambda function.
We only want New Images. The "New Image" is the item as it appears after the write; an "Old Image" is how the item looked before it was modified.
We will leave it at a batch size of 100. This doesn't mean Streams will wait until it has 100 records to send; it just sends up to 100 at a time.
We can see our Lambda is attached. If an error occurs on this Lambda, it's smart to check here to find out at a glance whether the Lambda is failing.
Here we can see the records in our DynamoDB table
We need to create a policy which allows Lambda to accept data from a specific DynamoDB Stream.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetShardIterator",
        "dynamodb:DescribeStream",
        "dynamodb:GetRecords"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:ACCOUNT-ID:table/exampro-events/stream/2019-06-30T11:17:05.770"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "dynamodb:ListStreams",
      "Resource": "*"
    }
  ]
}
We need to allow our lambda function to stream data to our Kinesis Firehose
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": "arn:aws:firehose:us-east-1:ACCOUNT-ID:deliverystream/exampro-events"
    }
  ]
}
Then I attach these two new policies to a role which is then attached to my Lambda function.
Lambda that streams data from DynamoDB to Firehose
Since DynamoDB Streams can deliver data in batches we are going to use the put_record_batch
We need to supply the
delivery_stream_name. Probably should place this in Environment Variables instead of how I'm hardcoding here.
Even though we are never going to update DynamoDB records, we publish events to the stream only for INSERT.
require 'json'
require 'aws-sdk-firehose'

def lambda_handler(event:, context:)
  records = []
  event['Records'].each do |t|
    if t['eventName'] == 'INSERT'
      records.push({data: {
        user_id:    t['dynamodb']['NewImage']['user_id']['N'],
        event_at:   t['dynamodb']['NewImage']['event_at']['S'],
        event_id:   t['dynamodb']['NewImage']['event_id']['N'],
        event_type: t['dynamodb']['NewImage']['event_type']['S'],
        ip_address: t['dynamodb']['NewImage']['ip_address']['S'],
        user_agent: t['dynamodb']['NewImage']['user_agent']['S']
      }.to_json + "\n"})
    end
  end
  json = {records_size: records.size}.to_json
  puts json
  unless records.size.zero?
    firehose = Aws::Firehose::Resource.new
    resp = firehose.client.put_record_batch({
      delivery_stream_name: "exampro-events", # required
      records: records
    })
    json = {failed_put_count: resp.failed_put_count}.to_json
    puts json
  end
  return true
end
Json records on newline
You will notice I am adding a new line at the end of our json string.
.to_json + "\n"
This is very important because when Athena reads our json files it expects each json record to be on its own line. If they are all on one line it will read only one record.
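Here is the difference with the same three records serialized both ways; Athena's JSON SerDe reads one object per line, so only the newline-delimited version yields three rows:

```ruby
require 'json'

events = [
  { user_id: 1, event_type: 'login' },
  { user_id: 2, event_type: 'material-view' },
  { user_id: 3, event_type: 'login' }
]

# Wrong for Athena: one JSON array on a single line, read as one record.
single_line = events.to_json

# Right for Athena: newline-delimited JSON, one object per line.
ndjson = events.map { |e| e.to_json + "\n" }.join
```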
Json Log Events
Notice that I am converting my hash to json and then using
puts to log it. This is how you log JSON events so we can use a Metric Filter later. You cannot just puts a hash; you have to convert it to JSON.
json = {records_size: records.size}.to_json
puts json
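A quick illustration of why the `.to_json` matters: `puts` on a raw hash emits Ruby's inspect format, which a JSON-based metric filter cannot parse:

```ruby
require 'json'

log_event = { records_size: 42 }

ruby_inspect = log_event.to_s    # Ruby's inspect format, not valid JSON
json_line    = log_event.to_json # what the CloudWatch metric filter needs
```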
SDK vs KPL
If you're wondering why I'm not using KPL (Kinesis Producer Library): I could have, but I would have had to use a Java Lambda and its configuration is more complicated. KPL is more efficient, but for our use-case we don't need it. You can read more about KPL in the documentation.
Metric Filter
Based on the Filter and Pattern Syntax under Publishing Numerical Values Found in Log Entries we can select an attribute of a JSON Log Event and then log it.
So for the metric filter, we want to filter json log events with an attribute of records_size greater than 0
{ $.records_size > 0 }
For the metric value, we will supply the attribute we want it to then collect
$.records_size
Define Metric Filter
View created metric filter
You cannot add a Metric Filter to your Cloudwatch Dashboard until data has been published to it.
How to find metric filter after its created
If you ever need to find this metric filter, it shows up under Logs as a column in the logs table.
Kinesis Firehose
Kinesis Firehose is incredibly affordable at $0.029/GB, so 500 GB ≈ $14.50 USD. Other Kinesis services can have a very expensive base cost.
But what about Kinesis Data Analytics?
You will see there is another AWS Kinesis service called Kinesis Data Analytics, and you may think you need this expensive service based on its name.
Kinesis Data Analytics lets you run queries (SQL) on incoming streaming data. I am thinking that Kinesis Data Analytics might be faster at proactively producing real-time analytics because it crunches data as it comes in.
Using Firehose we just dump our data to S3. When someone needs to see an up-to-date dashboard we can just query Athena with a Lambda function, dump the results back into DynamoDB or maybe a json file, and then display that to the user. We can decide to generate new analytics only if the last compiled version is out of date by, say, 5 minutes.
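The "regenerate only when stale" idea can be sketched in plain Ruby. The Athena query itself is stubbed out, since that part depends on your infrastructure:

```ruby
require 'time'

STALE_AFTER = 5 * 60 # seconds

# Stand-in for the expensive Athena query plus result caching.
def run_athena_query
  { generated_at: Time.now, rows: [] }
end

# Serve the cached dashboard data if it is fresh enough,
# otherwise recompute it.
def fresh_analytics(cache, now: Time.now)
  if cache.nil? || (now - cache[:generated_at]) > STALE_AFTER
    run_athena_query # stale or missing: recompute and re-cache
  else
    cache # fresh enough: serve as-is
  end
end
```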
Creating Firehose
The dashboard is a bit confusing, so here is where I created my Firehose stream.
We could transform our data via Kinesis, but for us this is not necessary since we already apply our transformation in the preceding Lambda. If you have data coming from multiple sources you may want this lambda to normalize the data to guarantee it's consistent. Since we only ingest data from one Lambda function, this is a minimal risk for us.
I have this option set to disabled, but I just wanted to show you that the data can be transformed by Glue into Parquet files, which are much more performant when using Athena. This is not a pain point for us currently, so we are going to leave the data as is, which is JSON. Also, I didn't feel like calculating the cost of Glue here at scale.
I had read somewhere in the docs that compression was needed for encryption in a specific use-case. When I used Glue's create table via a crawler on snappy-compressed data, it produced a bizarre schema, so I rolled back on this and just encrypted using KMS.
Since I am storing IP addresses I consider this sensitive data. We run Macie, so uncertain if it would alert on this if unencrypted.
The reason we collect IP Addresses for our click event data is to detect abnormal behaviour of a user, such as account sharing, scraping, etc.
Athena
We need a database and table.
Database and Table via Glue Catalog and Glue Crawler
This is one way for you to create your Database and table.
So create a database. I am not going to recommend this way, but I'm showing you it can be done.
We will also need a table. We could easily define the columns manually, but if we already have data in our S3 bucket we can just use a crawler once to determine that schema for us. You choose your datastore, the S3 bucket, and it does the rest.
If we check our table, it should have determined our schema.
Database and Athena via SQL
When using Glue via the automatic crawler, it guessed the wrong column types and did not partition based on date. We can just create what we need directly in Athena.
Create our database
CREATE DATABASE exampro_events LOCATION 's3://exampro-events/';
And now create the table
CREATE EXTERNAL TABLE exampro_events.events (
  user_id    INT,
  event_at   STRING,
  event_id   INT,
  event_type STRING,
  user_agent STRING,
  ip_address STRING
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES ('paths' = 'user_id,event_at,event_id,event_type,user_agent,ip_address')
LOCATION 's3://exampro-events/';
Ensure the location ends with a forward slash or you'll get an error about the path.
ROW FORMAT SERDE tells it the data will be in JSON format.
A SerDe (Serializer/Deserializer) is a way in which Athena interacts with data in various formats.
Notice that for
event_at I set it as STRING instead of TIMESTAMP. iso8601 is not the correct format for date, and we could change all our code to comply though since Athena has this sql function
from_iso8601_timestamp I'm not concerned unless I run into a performance or limitations on the ability to query.
Athena expects this format:
2008-09-15 03:04:05.324
Partitions
You can partition your tables on things such as date eg. Year 2020. This might be something I want to do in the future but for the time being, I am ignoring partitions.
Querying in Athena
To get started click on the ellipses beside the table and Preview Table. It will create the query and show you some data so you can save yourself the trouble to type all this yourself.
Writing Athena queries can be a painful experience even with prior SQL knowledge. Read the docs to help you learn the SQL syntax
CloudWatch Dashboard
If something goes wrong we want to have a CloudWatch Dashboard to gain some insight.
We are going to add a widget
Here we can see our custom Metric. If you don't see it here its because data has yet to ever be collected so ensure data is being logged and your metric filter is correctly filtered.
So there is our record-size. The other filter is just an old test one.
So here is my line graph. I don't know how useful it is but just getting something in here. Remember to Save dashboard !!!!
In DynamoDB there is the metric which could be useful to compare against the records which could be filtered in our Lambda.
Added a few more widgets.
We can see how many records are streaming, how many records the lambda passes to Firehose, how many incoming records were received, and how many were delivered to S3. Still missing Athena. We will get there.
Fake Data via Rake Command
I wanted some login data for the past 7 days so I can compose my Athena query to group logins per day for the week.
Rake commands are great for this. Also, I suppose you could test your read/write capacity using this method.
require 'faker' namespace :track do namespace :put do task :login => :environment do 50.times.each do |t| ip_address = Faker::Internet.public_ip_v4_address user_agent = Faker::Internet.user_agent event_at = rand(1..7).days.ago.iso8601 v = [0..4].sample Tracker::Put.event({ user_id: 1, event_type: 'login', user_agent: user_agent, ip_address: ip_address, event_at: event_at }) puts "#{ip_address} - #{user_agent} - #{event_at}" sleep 0.2 # sleep 1/5th of a second end # x.times end #login end #put namespace :get do task :logins => :environment do results = Tracker::Get.logins 1 puts results end #logins end #get end # track
So here I am running my rake command to create logins:
~/Sites/exampro-projects/exampro[master]: rake track:put:login 11.174.250.238 - Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/7046A194A - 2019-06-29T16:46:23Z 143.251.23.90 - Mozilla/5.0 (compatible; MSIE 9.0; AOL 9.7; AOLBuild 4343.19; Windows NT 6.1; WOW64; Trident/5.0; FunWebProducts) - 2019-06-23T16:46:24Z 57.161.250.74 - Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; AS; rv:11.0) like Gecko - 2019-06-29T16:46:24Z 170.128.151.22 - Mozilla/5.0 (compatible; MSIE 9.0; AOL 9.7; AOLBuild 4343.19; Windows NT 6.1; WOW64; Trident/5.0; FunWebProducts) - 2019-06-29T16:46:24Z 65.166.116.179 - Mozilla/5.0 (Windows NT x.y; Win64; x64; rv:10.0) Gecko/20100101 Firefox/10.0 - 2019-06-29T16:46:25Z 54.85.94.162 - Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; AS; rv:11.0) like Gecko - 2019-06-24T16:46:25Z 56.33.98.190 - Mozilla/5.0 (compatible; MSIE 9.0; AOL 9.7; AOLBuild 4343.19; Windows NT 6.1; WOW64; Trident/5.0; FunWebProducts) - 2019-06-23T16:46:25Z 139.173.42.58 - Mozilla/5.0 (Windows NT x.y; Win64; x64; rv:10.0) Gecko/20100101 Firefox/10.0 - 2019-06-29T16:46:25Z 107.234.132.121 - Mozilla/5.0 (compatible; MSIE 9.0; AOL 9.7; AOLBuild 4343.19; Windows NT 6.1; WOW64; Trident/5.0; FunWebProducts
Posted on by:
| https://dev.to/andrewbrown/event-tracking-and-analytics-via-ruby-on-rails-dynamodb-with-streams-kinesis-firehose-and-athena-and-cloudwatch-dashboard-1lma | CC-MAIN-2020-40 | refinedweb | 3,049 | 58.79 |
The picture taken on March 14, 2006,
shows containers are loaded onto a large foreign container ship in the
busy port of Tianjin, a municipality near Beijing. (Xinhua
Photo)
BEIJING, May 8 -- China's port and shipping
facilities are to be upgraded to include two major new regions, the Ministry of
Communications has announced.
Five port "clusters," rather than the existing three
surrounding Shanghai, Shenzhen and Tianjin, will become the new priorities as
part of a new port development plan.
The outline of the plan to revise Chinese port
facilities was made by Communications Minister Li Shenglin.
The minister said the two additional port groups are
located on the mainland side of the Taiwan Straits in southern Fujian; and in
Hainan and southern Guangdong.
The plan is part of an effort to match the national
2006-10 social and economic development programme, Li said.
Li said China's sea ports and their relative easy
access to containers and industrial materials had been a major factor in
transforming the nation's economy.
Related
stories:
Shanghai, located at the mouth of the Yangtze River,
is the largest business city of China serving the versatile urban economic
network across the Yangtze River Delta.
Transport trucks are loaded with
containers at Shekou Port in Shenzhen. Shenzhen handled a record number of
containers in 2005, keeping its ranking as the world's fourth-busiest
port. (Photo: China
Daily)
Tianjin, close to Beijing and a key link of all the
sea ports around the Bohai Bay, is vital to the economy of northern China.
However, Li said the two newly planned port clusters
would be of no less importance.
The southeastern port cluster would be built around
its centre of Xiamen, a business centre of southern Fujian joined by Fuzhou,
Quanzhou, Putian and Zhangzhou.
Zhangzhou will serve as a destination for China's
import of crude oil and natural gas, and all others will be mainly handling
containers.
The Fujian port blueprint is part of the central
government's scheme of the Western Shore Economic Zone of the Taiwan Straits.
It was designed to help develop economic ties between
the Chinese mainland and Taiwan.
Li said this would anticipate the "mainland-Taiwan
free trade relations" that, although there had been little progress so far,
would benefit business communities on both sides of the Straits.
Xiamen is already a large port. Zhang Changping, the
mayor Xiamen, recently expressed the municipal government's will to boost its
annual throughput from 70 million tons to 100 million tons.
In southwest China, Zhanjiang, Fangcheng and Haikou
will form a system of container transportation. Zhanjiang, Haikou and other
ports will also serve as places to download and reserve imported crude oil and
natural gas. And Zhanjiang, Fangcheng and Basuo have been designed to become
ports to import mineral resources.
Passenger transport infrastructures will be improved
in Zhanjiang, Haikou and Sanya in the coming five years, according to the
national programme.
Li said the newly drafted port development plan was
aimed at "expanding the transportation capacity of the Chinese coast to match
the economy's fast growth."
He forecast that China's ocean cargo handling
capacity will rise from 3.8 billion tons in 2005 to 5 billion tons in 2010, and
its coastal throughput of containers, as measured in TEU (twenty-foot equivalent
unit), will grow from 74.41 million in 2005 to 130 million in 2010.
Chai Haitao, a researcher with the International
Trade Research Academy of the Ministry of Commerce, said the plan for the new
round of port expansions had been undertaken because of the future economic
development in China and the world.
Chai predicted that China's foreign trade would grow
at an annual rate of 15 per cent between 2006-10, so that almost all of China's
major sea ports would undergo expansions in the next few years.
Meanwhile, the International Monetary Fund (IMF) has
predicted that the world's economy would grow at an annual rate of 4.2 per cent
during 2006-09, relatively higher than that during the 2001-05 period. And in
the coming five years, China will continue to be the world's economic engine
with annual growth of no less than 8 per cent.
China has been the world's biggest cargo producer
since 2004, with Shanghai being the world's largest port in handling tonnage.
Ten out of the world's 25 largest sea ports are already in China.
Li announced the plans in a special interview with
China Daily at a recent sea transport forum in the port city of Tianjin.
(Source: China Daily) | http://news.xinhuanet.com/english/2006-05/08/content_4519593.htm | crawl-002 | refinedweb | 767 | 59.03 |
This document describes the No-OS software used to control the AD717X family parts and the AD4111 device. It also includes an example of how to initialize a AD7176 part.
The AD717x family of products are fast settling, high resolution, highly accurate, multiplexed Σ-Δ analog-to-digital converters (ADC) for low bandwidth input signals, with resolution options of both 32bit and 24bit available. The products are available in both TSSOP of LFCSP depending on the product selected. AD717x family of devices include:
The inputs to the ADC can be configured as fully differential or pseudo differential inputs depending on the product chosen, this can be done via the integrated crosspoint multiplexer. An integrated precision, 2.5 V, low drift (2ppm/°C), band gap internal reference (with an output reference buffer) adds functionality and reduces the external component count. A maximum channel scan data rate is 50 kSPS (with a settling time of 20 μs), resulting in fully settled data of 17 noise free bits. User-selectable output data rates range from 5 SPS to 250 kSPS can be obtained from AD7175, AD7176 and AD7177 devices. A maximum channel scan data rate is 6.1 kSPS (with a settling time of 161 μs). User-selectable output data rates range from 1.25 SPS to 31.25 kSPS can be obtained from AD7172, AD7173 devices. The AD717x devices offers three key digital filters. The fast settling filter maximizes the channel scan rate. The Sinc3 filter maximizes the resolution for single-channel, low speed applications. For 50 Hz and 60 Hz environments, the AD717x type of filter and output data rate used for each channel. All switching of the crosspoint multiplexer is controlled by the ADC and can be configured to automatically control an external multiplexer via the GPIO pins. The specified operating temperature range is −40°C to +105°C.
The AD4111 and AD4112 are low power, low noise, 24-bit, sigma delta (Σ-Δ) analog-to-digital converters (ADC) that integrate analog front ends (AFE) for fully differential or single-ended rail-to-rail, buffered bipolar, ±10 V voltage inputs, and 0 mA to 20 mA current inputs. They also integrate key analog and digital signal conditioning blocks to configure eight individual setups for each analog input channel in use. The AD4111 and AD4112 feature a maximum channel scan rate of 6.2 kSPS (161 μs) for fully settled data.
The AD4111 also has the unique feature of open wire detection on the voltage inputs (patent pending) for system level diagnostics using a single 5 V or 3.3 V power supply. It is housed in a 40-lead, 6 mm × 6 mm LFCSP package.
Entire AD717X family and AD411X family.
The driver contains two parts:
The Generic Platform Driver is where the specific communication functions for the desired type of processor and communication protocol have to be implemented. This driver implements the communication with the device and hides the actual details of the communication protocol to the specific device or family driver.
The Generic Platform Driver has a standard interface, so the device driver can be used exactly as it is provided. This standard interface includes the I2C and SPI communications, functions for managing GPIOs and a miliseconds delay function.
The milisecond delay functions is:
The I2C interface has the following functions:
The following structs and enums are used for the I2C interface:
typedef enum i2c_type { GENERIC_I2C } i2c_type; typedef struct i2c_init_param { enum i2c_type type; uint32_t id; uint32_t max_speed_hz; uint8_t slave_address; } i2c_init_param; typedef struct i2c_desc { enum i2c_type type; uint32_t id; uint32_t max_speed_hz; uint8_t slave_address; } i2c_desc;
The SPI interface has the following functions:
The following structs and enums are used for the SPI interface:
typedef enum spi_type { GENERIC_SPI } spi_type; typedef enum spi_mode { SPI_MODE_0 = (0 | 0), SPI_MODE_1 = (0 | SPI_CPHA), SPI_MODE_2 = (SPI_CPOL | 0), SPI_MODE_3 = (SPI_CPOL | SPI_CPHA) } spi_mode; typedef struct spi_init_param { enum spi_type type; uint32_t id; uint32_t max_speed_hz; enum spi_mode mode; uint8_t chip_select; } spi_init_param; typedef struct spi_desc { enum spi_type type; uint32_t id; uint32_t max_speed_hz; enum spi_mode mode; uint8_t chip_select; } spi_desc;
The GPIO interface has the following functions:
The following structs and enums are used for the GPIO interface:
typedef enum gpio_type { GENERIC_GPIO } gpio_type; typedef struct gpio_desc { enum gpio_type type; uint32_t id; uint8_t number; } gpio_desc;
The AD717X driver contains the following:
The driver works with any of the following headers by including them in your main project just as you include the ad717x.h header:
Each header declares an array of all register of the device that the header describes.
The following functions are implemented in this version of AD717X17X_Init() which has the following parameters:
ad717x_device *my_ad7176_2;
ad717x_init_param ad7176_2_init_param;
ad717x_init_param has the following definition:
typedef struct { /* SPI */ spi_init_param spi_init; /* Device Settings */ ad717x_st_reg *regs; uint8_t num_regs; } ad717x_init_param;
Where:
#include "ad7176_2_regs.h"
#define AD7176_2_INIT
Alternatively, a new register array can be defined by user which must be properly initialized before calling AD717X_Setup() function.
ad717x_st_reg my_ad7176_2_regs[] = { { AD717X_STATUS_REG, 0x00, 1 }, { AD717X_ADCMODE_REG, 0x0000, 2 }, ... }
A AD717X_Init() call will also reset the part then use all register values stored in the array pointed by the regs parameter to configure the part (the registers that are “Read-only” will not be written during this call).
The following code snipped provides an example of driver usage:
#include "ad7176_2_regs.h" #define AD7176_2_INIT /* Create a new driver instance */ ad717x_device *my_ad7176_2; ad717x_init_param ad7176_2_init; ad7176_2_init.spi_init.chip_select = 0x01; ad7176_2_init.spi_init.id = 0; ad7176_2_init.spi_init.max_speed_hz = 1000000; ad7176_2_init.spi_init.mode = SPI_MODE_3; ad7176_2_init.spi_init.type = GENERIC_SPI; ad7176_2_init.regs = ad7176_2_regs; ad7176_2_init.num_regs = sizeof(ad7176_2_regs) / sizeof(ad7176_2_regs[0]); /* Other variables */ long timeout = 1000; long ret; long sample; . . . /* Initialize the driver instance and let's use the ad7176_2_regs array defined in ad7176_2_regs.h */ ret = AD717X_Init(&my_ad7176_2, ad7176_2_init); if (ret < 0) /* Something went wrong, check the value of ret! */ /* Read data from the ADC */ ret = AD717X_WaitForReady(my_ad7176_2, timeout); if (ret < 0) /* Something went wrong, check the value of ret! */ ret = AD717X_ReadData(my_ad7176_2, &sample); if (ret < 0) /* Something went wrong, check the value of ret! */ | https://wiki.analog.com/resources/tools-software/uc-drivers/ad717x | CC-MAIN-2019-09 | refinedweb | 981 | 52.9 |
How to clump spheres and facet obtained from stl format
Hi everyone,
I am trying to create a surface obtained from an stl file, fill the surface with spheres and clump the surface (facet) and spheres as one particle.
This is very similar to this question: https:/
From yade doc, triangulated surfaces can be imported from gts and stl formats.
I have imported the surface using the 'ymport.stl' function.
However, for generating spheres within the surface, there are no commands for stl format. The example below is for gts format.
Appreciate if someone could help me with:
[1] similar commands for generating spheres in a surface (stl format) or any conversion method for stl to gts format.
[2] command for generating spheres with multiple sizes to fill the surface. This is to ensure the sphere clump is of the same volume as the surface.
#example for sphere packing in surface
if surf.is_closed():
O1=O.bodies.
sp=pack.
Otemp=
O3=O1+Otemp
#O3=Otemp
Looking forward to any suggestion.
Irfaan
Question information
- Language:
- English Edit question
- Status:
- Expired
- For:
- Yade Edit question
- Assignee:
- No assignee Edit question
- Last query:
- 2019-09-02
- Last reply:
- 2019-09-18
Hello,
> Is the pack.randomDens
if you do not need precise particle size distribution, then randomDensePack is one option
> Any rule of thumb in deciding the sphere radius?
no, depends on your needs
> Does the function include overlapping of spheres?
no. But you can then enlarge the spheres to create overlaps
> By adding memoizeDb=
yes. The next run, this packing is read and is not generated again
[3] Is there any function to obtain the clump volume as comparison to the imported gts surface.
> When running the script, I received the following error along with a blank view (no sphere or surface):
> WARN /build/
It is not error, but warning. Not inportant for now
To help you more, we would need a compete script and all data (i.e. sample.gts file). For testing purposes, a simple tetrahedron should be enough. So please provide the data to help you more.
cheers
Jan
Hi Jan,
Thank you for your reply.
Kindly find script and surface.gts from the link below:
https:/
[1] When running the simulation, it still gives me a blank screen (no surface or spheres can be seen). Appreciate if you could guide me in solving this.
[2] No, I do not require a specific PSD. I am trying to reproduce a realistic scanned particle using the gts surface. Since interaction between facet to facet cannot occur, I will need to create a clump representing the d surface. Is there a way to obtain the volume of the clump? This is to check if the clump is comparable to the scanned surface.
[3] Is there any other options to fill the surface with sphere clumps?
Appreciate your support.
Thanks and kind regards,
Irfaan
Hello,
> When running the simulation, it still gives me a blank screen (no surface or spheres can be seen).
>
> if surf.is_closed():
> O1=O.bodies.
> sp=pack.
> Otemp=sp.
surf.is_closed() returns False. Then no facets or spheres are created..
cheers
Jan
Hi Jan,
Thanks again for your prompt reply.
I have removed the "if surf.is_closed():" as follows:
#if surf.is_closed():
O1=O.bodies.
sp=pack.
Otemp=sp.
#clump spheres and surface together
idClump=
Once I load the script, I receive the following error that the surface is not closed:
Traceback (most recent call last):
File "/usr/bin/yade", line 182, in runScript
execfile(
File "/home/
sp=
ValueError: Surface is not closed.
And when I start the simulation, I received another error:
WARN /build/
WARN /build/
As you said the timestep error is not to worry.
The surface is visible when the script is loaded but once I start the simulation, it disappears at iter #20 (within few seconds).
There is still no spheres, even if I let it run for more than 10 minutes.
I tried using bigger sphere radius, still no difference.
Am I missing anything important here?
Sorry for not being precise enough. The problem is that the surface is not closed.
Either you have sphere generation inside "if surf.is_closed()" (which is not evaluated and no spheres are generated), or you delete/comment the condition which ends with error and no spheres are generated, too.
Could you provide the original stl?
Jan
Hi Jan,
I have uploaded the stl file, link below:
https:/
If the surface is not closed and spheres cannot be generated, is there a way to produce a closed surface?
Note that I converted the stl to gts using the command: stl2gts -r < filename.stl > filename.gts
Kind regards,
Irfaan
This question was expired because it remained in the 'Open' state without activity for the last 15 days.
What about the stl? is it closed and the problem comes from the format conversion?
If yes, there should be some solution.
If no, then the surface should be closed, but I don't know about very easy methods..
Also note the gts.Surface.cleanup method, which should merge close vertices (and possibly close the surface, but I did not test it).
cheers
Jan
Just an update:
I have managed to convert the .stl file to .gts using the following command: stl2gts -r < filename.stl > filename.gts
gmsh and libgts-bin need to be installed to do so.
So now I have a surface which was imported in .gts format using the command: surf=gts.
read(open( 'filename. gts'))
My next step is to fill the surface with spheres of various sizes and clump them together to produce a body of the same volume and geometry as the surface. This would require overlapping of spheres.
I used the pack.randomDens
ePack function with sphere radius=0.00005 and realfuzz=0.000005. The average range diameter of the surface 2.36mm to 1.18mm. Hence, my questions are:
[1] Is the pack.randomDens
ePack function appropriate for this situation? Any rule of thumb in deciding the sphere radius? Does the function include overlapping of spheres?
[2] By adding memoizeDb=
'/tmp/gts- packings. sqlite' , does it mean that a temporary sqlite file will be created to store the packing that will be generated?
[3] Is there any function to obtain the clump volume as comparison to the imported gts surface.
When running the script, I received the following error along with a blank view (no sphere or surface):
yade-fDuCoe/ yade-2018. 02b/pkg/ common/ InsertionSortCo llider. cpp:242 action: verletDist is set to 0 because no spheres were found. It will result in suboptimal performances, consider setting a positive verletDist in your script.
WARN /build/
Appreciate if anyone could guide me in solving this. The script is as follows:
# ========= Script ===================
from yade import pack,ymport
import gts
#Add material for spheres
m=FrictMat(young = 1E8, poisson = 0.25, frictionAngle = 0.0, density = 2650)
# import surface
read(open( '/home/ Desktop/ SphereClumping/ sample. gts'))
surf=gts.
# parameters for radom packing in imported surface
'/tmp/gts- packings. sqlite'
memoizeDb=
sp=SpherePack()
# generate lists of spheres and outer surface
append( pack.gtsSurface 2Facets( surf,fixed= False,noBound= True,material= m)) randomDensePack (pack.inGtsSurf ace(surf) ,radius= 0.00005, rRelFuzz= 0.000005, memoizeDb= memoizeDb, returnSpherePac k=True) sp.toSimulation ()
if surf.is_closed():
O1=O.bodies.
sp=pack.
Otemp=
#clump spheres and surface together
O.bodies. clump(Otemp)
idClump=
O.engines=[
tCollider( [Bo1_Sphere_ Aabb(), Bo1_Facet_ Aabb()] ), Ig2_Sphere_ Sphere_ ScGeom( ),Ig2_Facet_ Sphere_ ScGeom( )], Ip2_FrictMat_ FrictMat_ FrictPhys( )], Law2_ScGeom_ FrictPhys_ CundallStrack( )] or(gravity= (0,0,-9. 81),damping= 0.4),
ForceResetter(),
InsertionSor
InteractionLoop(
# handle sphere+sphere and facet+sphere collisions
[
[
[
),
NewtonIntegrat
]
# set timestep to a fraction of the critical timestep
5*PWaveTimeStep ()
O.dt=0.
# save the simulation, so that it can be reloaded later, for experimentation
O.saveTmp()
from yade import qt
qt.View() | https://answers.launchpad.net/yade/+question/683388 | CC-MAIN-2019-47 | refinedweb | 1,303 | 67.25 |
Used by an entity to notify the HAM of a state transition
#include <ha/ham.h> int ham_entity_condition_state( ham_entity_t *ehdl, unsigned tostate, unsigned flags );
libham
This function enables an entity to report a transition to the HAM; the value tostate indicates the transitional state. The HAM in turn triggers a condition state event for this entity, and will search for matching subscribers for this event and execute all associated actions. For more details of the matching mechanisms refer to the API documentation for ham_condition_state().
The connection to the HAM is invalid. This happens when the process that opened the connection (using ham_connect()) and the process that's calling this function are not the same.
In addition to the above errors, the HAM returns any error it encounters while servicing this request. | http://www.qnx.com/developers/docs/7.0.0/com.qnx.doc.ham/topic/hamapi/ham_entity_condition_state.html | CC-MAIN-2017-43 | refinedweb | 131 | 52.9 |
High level modeling¶
Introduction¶
To write synthesizable models in MyHDL, you should stick to the RTL templates shown in RTL modeling. However, modeling in MyHDL is much more powerful than that. Conceptually, MyHDL is a library for general event-driven modeling and simulation of hardware systems.
There are many reasons why it can be useful to model at a higher abstraction level.
Modeling with bus-functional procedures
A bus-functional procedure is a reusable encapsulation of the low-level operations needed to implement some abstract transaction on a physical interface. Bus-functional procedures are typically used in flexible verification environments.
Once again, MyHDL uses generator functions to support bus-functional procedures. In MyHDL, the difference between instances and bus-functional procedure calls comes from the way in which a generator function is used.
As an example, we will design a bus-functional procedure of a simplified UART transmitter. We assume 8 data bits, no parity bit, and a single stop bit, and we add print statements to follow the simulation behavior:
T_9600 = int(1e9 / 9600)

def rs232_tx(tx, data, duration=T_9600):

    """ Simple rs232 transmitter procedure.

    tx -- serial output data
    data -- input data byte to be transmitted
    duration -- transmit bit duration

    """

    print "-- Transmitting %s --" % hex(data)
    print "TX: start bit"
    tx.next = 0
    yield delay(duration)

    for i in range(8):
        print "TX: %s" % data[i]
        tx.next = data[i]
        yield delay(duration)

    print "TX: stop bit"
    tx.next = 1
    yield delay(duration)
This looks exactly like the generator functions in previous sections. It becomes a bus-functional procedure when we use it differently. Suppose that in a test bench, we want to generate a number of data bytes to be transmitted. This can be modeled as follows:
testvals = (0xc5, 0x3a, 0x4b) def stimulus(): tx = Signal(1) for val in testvals: txData = intbv(val) yield rs232_tx(tx, txData)
We use the bus-functional procedure call as a clause in a
yield statement.
This introduces a fourth form of the
yield statement: using a generator as a
clause. Although this is a more dynamic usage than in the previous cases, the
meaning is actually very similar: at that point, the original generator should
wait for the completion of a generator. In this case, the original generator
resumes when the
rs232_tx(tx, txData) generator returns.
When simulating this, we get:
-- Transmitting 0xc5 --
TX: start bit
TX: 1
TX: 0
TX: 1
TX: 0
TX: 0
TX: 0
TX: 1
TX: 1
TX: stop bit
-- Transmitting 0x3a --
TX: start bit
TX: 0
TX: 1
TX: 0
TX: 1
...
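The control flow behind yielding a generator can be mimicked in plain Python: when a generator yields another generator, the caller is suspended until the nested one is exhausted. The following toy scheduler illustrates this; `run`, `tx_byte`, `stimulus` and the `trace` list are our own illustrative names, not MyHDL API, and real MyHDL additionally schedules the delays in simulated time, which this sketch ignores.

```python
trace = []

def run(gen):
    """Drive a generator; descend into any generator it yields."""
    for item in gen:
        if hasattr(item, "__next__"):   # a nested generator was yielded
            run(item)                   # caller resumes when it returns
        else:
            trace.append(item)          # treat anything else as an "event"

def tx_byte(val):
    trace.append("start")
    for i in range(8):
        yield (val >> i) & 1            # LSB first, as in the UART example
    trace.append("stop")

def stimulus():
    for val in (0xC5, 0x3A):
        yield tx_byte(val)             # suspend until tx_byte finishes

run(stimulus())
# trace now holds start, the 8 bits of 0xC5, stop, then the same for 0x3A
```

The key point is in `run`: the recursive call does not return until the nested generator raises `StopIteration`, which is exactly when the caller resumes.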
We will continue with this example by designing the corresponding UART receiver
bus-functional procedure. This will allow us to introduce further capabilities
of MyHDL and its use of the
yield statement.
Until now, the
yield statements had a single clause. However, they can have
multiple clauses as well. In that case, the generator resumes as soon as the
wait condition specified by one of the clauses is satisfied. This corresponds to
the functionality of sensitivity lists in Verilog and VHDL.
For example, suppose we want to design an UART receive procedure with a timeout. We can specify the timeout condition while waiting for the start bit, as in the following generator function:
def rs232_rx(rx, data, duration=T_9600, timeout=MAX_TIMEOUT):

    """ Simple rs232 receiver procedure.

    rx -- serial input data
    data -- data received
    duration -- receive bit duration

    """

    # wait on start bit until timeout
    yield rx.negedge, delay(timeout)
    if rx == 1:
        raise StopSimulation, "RX time out error"

    # sample in the middle of the bit duration
    yield delay(duration // 2)
    print "RX: start bit"

    for i in range(8):
        yield delay(duration)
        print "RX: %s" % rx
        data[i] = rx

    yield delay(duration)
    print "RX: stop bit"
    print "-- Received %s --" % hex(data)
If the timeout condition is triggered, the receive bit
rx will still be
1. In that case, we raise an exception to stop the simulation. The
StopSimulation exception is predefined in MyHDL for such purposes. In the
other case, we proceed by positioning the sample point in the middle of the bit
duration, and sampling the received data bits.
When a
yield statement has multiple clauses, they can be of any type that is
supported as a single clause, including generators. For example, we can verify
the transmitter and receiver generator against each other by yielding them
together, as follows:
def test():
    tx = Signal(1)
    rx = tx
    rxData = intbv(0)
    for val in testvals:
        txData = intbv(val)
        yield rs232_rx(rx, rxData), rs232_tx(tx, txData)
Both forked generators will run concurrently, and the original generator will resume as soon as one of them finishes (which will be the transmitter in this case). The simulation output shows how the UART procedures run in lockstep:
-- Transmitting 0xc5 --
TX: start bit
RX: start bit
TX: 1
RX: 1
TX: 0
RX: 0
TX: 1
RX: 1
TX: 0
RX: 0
TX: 0
RX: 0
TX: 0
RX: 0
TX: 1
RX: 1
TX: 1
RX: 1
TX: stop bit
RX: stop bit
-- Received 0xc5 --
-- Transmitting 0x3a --
TX: start bit
RX: start bit
TX: 0
RX: 0
...
For completeness, we will verify the timeout behavior with a test bench that
disconnects the
rx from the
tx signal, and we specify a small timeout
for the receive procedure:
def testTimeout():
    tx = Signal(1)
    rx = Signal(1)
    rxData = intbv(0)
    for val in testvals:
        txData = intbv(val)
        yield rs232_rx(rx, rxData, timeout=4*T_9600-1), rs232_tx(tx, txData)
The simulation now stops with a timeout exception after a few transmit cycles:
-- Transmitting 0xc5 --
TX: start bit
TX: 1
TX: 0
TX: 1
StopSimulation: RX time out error
Recall that the original generator resumes as soon as one of the forked generators returns. In the previous cases, this is just fine, as the transmitter and receiver generators run in lockstep. However, it may be desirable to resume the caller only when all of the forked generators have finished. For example, suppose that we want to characterize the robustness of the transmitter and receiver design to bit duration differences. We can adapt our test bench as follows, to run the transmitter at a faster rate:
T_10200 = int(1e9 / 10200)

def testNoJoin():
    tx = Signal(1)
    rx = tx
    rxData = intbv(0)
    for val in testvals:
        txData = intbv(val)
        yield rs232_rx(rx, rxData), rs232_tx(tx, txData, duration=T_10200)
Simulating this shows how the transmission of the new byte starts before the previous one is received, potentially creating additional transmission errors:
-- Transmitting 0xc5 --
TX: start bit
RX: start bit
...
TX: 1
RX: 1
TX: 1
TX: stop bit
RX: 1
-- Transmitting 0x3a --
TX: start bit
RX: stop bit
-- Received 0xc5 --
RX: start bit
TX: 0
It is more likely that we want to characterize the design on a byte by byte
basis, and align the two generators before transmitting each byte. In MyHDL,
this is done with the
join function. By joining clauses together in a
yield statement, we create a new clause that triggers only when all of its
clause arguments have triggered. For example, we can adapt the test bench as
follows:
def testJoin():
    tx = Signal(1)
    rx = tx
    rxData = intbv(0)
    for val in testvals:
        txData = intbv(val)
        yield join(rs232_rx(rx, rxData), rs232_tx(tx, txData, duration=T_10200))
Now, transmission of a new byte only starts when the previous one is received:
-- Transmitting 0xc5 -- TX: start bit RX: start bit ... TX: 1 RX: 1 TX: 1 TX: stop bit RX: 1 RX: stop bit -- Received 0xc5 -- -- Transmitting 0x3a -- TX: start bit RX: start bit TX: 0 RX: 0
Modeling memories with built-in types¶
Python has powerful built-in data types that can be useful to model hardware memories. This can be merely a matter of putting an interface around some data type operations.
For example, a dictionary comes in handy to model sparse memory structures. (In other languages, this data type is called associative array, or hash table.) A sparse memory is one in which only a small part of the addresses is used in a particular application or simulation. Instead of statically allocating the full address space, which can be large, it is better to dynamically allocate the needed storage space. This is exactly what a dictionary provides. The following is an example of a sparse memory model:
def sparseMemory(dout, din, addr, we, en, clk): """ Sparse memory model based on a dictionary. Ports: dout -- data out din -- data in addr -- address bus we -- write enable: write if 1, read otherwise en -- interface enable: enabled if 1 clk -- clock input """ memory = {} @always(clk.posedge) def access(): if en: if we: memory[addr.val] = din.val else: dout.next = memory[addr.val] return access
Note how we use the
val attribute of the
din signal, as we don’t want to
store the signal object itself, but its current value. Similarly, we use the
val attribute of the
addr signal as the dictionary key.
In many cases, MyHDL code uses a signal’s current value automatically when there
is no ambiguity: for example, when a signal is used in an expression. However,
in other cases, such as in this example, you have to refer to the value
explicitly: for example, when the Signal is used as a dictionary key, or when it is not
used in an expression. One option is to use the
val attribute, as in this
example. Another possibility is to use the
int() or
bool() functions to
typecast the Signal to an integer or a boolean value. These functions are also
useful with
intbv objects.
As a second example, we will demonstrate how to use a list to model a synchronous fifo:
def fifo(dout, din, re, we, empty, full, clk, maxFilling=sys.maxint): """ Synchronous fifo model based on a list. Ports: dout -- data out din -- data in re -- read enable we -- write enable empty -- empty indication flag full -- full indication flag clk -- clock input Optional parameter: maxFilling -- maximum fifo filling, "infinite" by default """ memory = [] @always(clk.posedge) def access(): if we: memory.insert(0, din.val) if re: dout.next = memory.pop() filling = len(memory) empty.next = (filling == 0) full.next = (filling == maxFilling) return access
Again, the model is merely a MyHDL interface around some operations on a list:
insert to insert entries,
pop to retrieve them, and
len
to get the size of a Python object.
Modeling errors using exceptions¶
In the previous section, we used Python data types for modeling. If such a type
is used inappropriately, Python’s run time error system will come into play. For
example, if we access an address in the
sparseMemory model that was not
initialized before, we will get a traceback similar to the following (some lines
omitted for clarity):
Traceback (most recent call last): ... File "sparseMemory.py", line 31, in access dout.next = memory[addr.val] KeyError: Signal(51)
Similarly, if the
fifo is empty, and we attempt to read from it, we get:
Traceback (most recent call last): ... File "fifo.py", line 34, in fifo dout.next = memory.pop() IndexError: pop from empty list
Instead of these low level errors, it may be preferable to define errors at the
functional level. In Python, this is typically done by defining a custom
Error exception, by subclassing the standard
Exception class. This
exception is then raised explicitly when an error condition occurs.
For example, we can change the
sparseMemory function as follows (with
the doc string is omitted for brevity):
class Error(Exception): pass def sparseMemory2(dout, din, addr, we, en, clk): memory = {} @always(clk.posedge) def access(): if en: if we: memory[addr.val] = din.val else: try: dout.next = memory[addr.val] except KeyError: raise Error, "Uninitialized address %s" % hex(addr) return access
This works by catching the low level data type exception, and raising the custom
exception with an appropriate error message instead. If the
sparseMemory function is defined in a module with the same name, an
access error is now reported as follows:
Traceback (most recent call last): ... File "sparseMemory.py", line 61, in access raise Error, "Uninitialized address %s" % hex(addr) Error: Uninitialized address 0x33
Likewise, the
fifo function can be adapted as follows, to report
underflow and overflow errors:
class Error(Exception): pass def fifo2(dout, din, re, we, empty, full, clk, maxFilling=sys.maxint): memory = [] @always(clk.posedge) def access(): if we: memory.insert(0, din.val) if re: try: dout.next = memory.pop() except IndexError: raise Error, "Underflow -- Read from empty fifo" filling = len(memory) empty.next = (filling == 0) full.next = (filling == maxFilling) if filling > maxFilling: raise Error, "Overflow -- Max filling %s exceeded" % maxFilling return access
In this case, the underflow error is detected as before, by catching a low level exception on the list data type. On the other hand, the overflow error is detected by a regular check on the length of the list.
Object oriented modeling¶
The models in the previous sections used high-level built-in data types internally. However, they had a conventional RTL-style interface. Communication with such a module is done through signals that are attached to it during instantiation.
A more advanced approach is to model hardware blocks as objects. Communication with objects is done through method calls. A method encapsulates all details of a certain task performed by the object. As an object has a method interface instead of an RTL-style hardware interface, this is a much higher level approach.
As an example, we will design a synchronized queue object. Such an object can
be filled by producer, and independently read by a consumer. When the queue is
empty, the consumer should wait until an item is available. The queue can be
modeled as an object with a
put(item) and a
get method, as
follows:
from myhdl import * def trigger(event): event.next = not event class queue: def __init__(self): self.l = [] self.sync = Signal(0) self.item = None def put(self,item): # non time-consuming method self.l.append(item) trigger(self.sync) def get(self): # time-consuming method if not self.l: yield self.sync self.item = self.l.pop(0)
The
queue object constructor initializes an internal list to hold
items, and a sync signal to synchronize the operation between the methods.
Whenever
put puts an item in the queue, the signal is triggered. When
the
get method sees that the list is empty, it waits on the trigger
first.
get is a generator method because it may consume time. As the
yield statement is used in MyHDL for timing control, the method cannot
“yield” the item. Instead, it makes it available in the item instance
variable.
To test the queue operation, we will model a producer and a consumer in the test bench. As a waiting consumer should not block a whole system, it should run in a concurrent “thread”. As always in MyHDL, concurrency is modeled by Python generators. Producer and consumer will thus run independently, and we will monitor their operation through some print statements:
q = queue() def Producer(q): yield delay(120) for i in range(5): print "%s: PUT item %s" % (now(), i) q.put(i) yield delay(max(5, 45 - 10*i)) def Consumer(q): yield delay(100) while 1: print "%s: TRY to get item" % now() yield q.get() print "%s: GOT item %s" % (now(), q.item) yield delay(30) def main(): P = Producer(q) C = Consumer(q) return P, C sim = Simulation(main()) sim.run()
Note that the generator method
get is called in a
yield statement in
the
Consumer function. The new generator will take over from
Consumer, until it is done. Running this test bench produces the
following output:
% python queue.py 100: TRY to get item 120: PUT item 0 120: GOT item 0 150: TRY to get item 165: PUT item 1 165: GOT item 1 195: TRY to get item 200: PUT item 2 200: GOT item 2 225: PUT item 3 230: TRY to get item 230: GOT item 3 240: PUT item 4 260: TRY to get item 260: GOT item 4 290: TRY to get item StopSimulation: No more events | http://docs.myhdl.org/en/stable/manual/highlevel.html | CC-MAIN-2021-39 | refinedweb | 2,676 | 51.48 |
I am working on my college project which needs to store data in EEPROM of AtMega32.
I am able to write and read data at any particular location of memory. But when I try to write data sequentially form address 0 to 1023 I am getting wrong values.
Here are the functions I have written.
Function definition to read and write data
#include "eeprom.h"
uint8_t EEPROMRead(uint16;
}
void EEPROMWrite(uint16_t uiAddress, uint8_t);
}
static int epadr=0;
epread=EEPROMRead(epadr); //reading from address stored in epadr
printf("%d",epread); //printing values
if(epadr<=1023)
{
EEPROMWrite(epadr,high); //writing at address stored in epadr
epadr++; //increment address
}
}
if(epadr>1023)
printf("Memory Full\n");
No need to define your own function for reading and writing data in internal EEPROM. AVR provide library for this purpose. Here is the sample code:-
#define F_CPU 16000000UL #include <avr/io.h> #include <util/delay.h> #include <avr/eeprom.h> int main(void) { char read[5]; eeprom_write_byte (0, '0'); eeprom_write_byte (1, '1'); eeprom_write_byte (2, '2'); eeprom_write_byte (3, '3'); eeprom_write_byte (4, '4'); for (int count=0;count<5;count++) { read[count]=eeprom_read_byte((const uint8_t *)(count)); } while (1); } | https://codedump.io/share/e0e34FE1JY0B/1/writing-and-reading-data-to-eeprom-sequentially | CC-MAIN-2018-26 | refinedweb | 190 | 56.76 |
There have been changes in the files <linux/etherdevice.h> and <linux/ethtool.h>, which stopped the ASUS/Atheros distributed atl2-1.0.40.4.tar.gz compiling and installing correctly.
After some digging around with a kernel cross-reference tool (I only just found out these cool tools)... Looks much better now. I've uploaded the kernel module atl2.ko compiled for 2.6.23.1-49.fc8.
Insert the following into the file kcompat.h just before the final #endif :
#if ( LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,23) )
// this used to be in <linux/etherdevice.h> - disappeared in 2.6.23
// See :;diffval=2.6.23;diffvar=v
static inline void eth_copy_and_sum (struct sk_buff *dest,
const unsigned char *src,
int len, int base)
{
memcpy (dest->data, src, len);
}
// this used to be in <linux/ethtool.h> - disappeared in 2.6.23
// static const struct ethtool_ops.get_perm_addr = ethtool_op_get_perm_addr,
// See :;diffval=2.6.23;diffvar=v
#undef ETHTOOL_GPERMADDR
#endif /* >= 2.6.23 */
As root :
mkdir /lib/modules/`uname -r`/kernel/drivers/net/atl2
cp atl2.ko /lib/modules/`uname -r`/kernel/drivers/net/atl2/
/sbin/depmod -a
# Do this as a force since this was compiled against 2.6.23.1-49.fc8,
# and your kernel may be different (should be no big deal, as long as it's Fedora 8)
/sbin/modprobe -f atl2
i have the same problem, but im using fedora 7..can you help me??:D
i have the same problem, but im using fedora 7..can you help me??:D | http://code.google.com/p/eeedora/wiki/WiredInternet | crawl-002 | refinedweb | 253 | 63.36 |
You can click on the Google or Yahoo buttons to sign-in with these identity providers,
or you just type your identity uri and click on the little login button.
Idea proposed to Vincent Legoll, since we know pylint/astng won't ever be able to read dynamic code such as :
class struct(object):
"""Similar to a dictionary except that members may be accessed as
s.member.
Usage:
s = struct(a=10, b=20, d={"cat":"dog"} )
print s.a + s.b
"""
def __init__(self, **args):
self.__dict__.update(args)
def __repr__(self):
r = ["<"]
for i in self.__dict__.keys():
r.append("%s=%s" % (i, getattr(self,i)))
r.append(">\n")
return " ".join(r)
Ticket #8769 - latest update on 2009/09/01, created on 2009/03/27 by Sylvain Thenault | https://www.logilab.org/ticket/8769 | CC-MAIN-2018-26 | refinedweb | 130 | 65.62 |
Retrieve lists of free HTTP proxies from online sites.
Project description
Package Description
GetProx is a library for retrieving lists of free HTTP proxies from various online sites.
Installation
The package may be installed as follows:
pip install getprox
Usage Examples
To retrieve proxies from all available sources, invoke the package as follows:
import getprox proxy_uri_list = getprox.proxy_get()
Proxies are returned in format. By default, the proxies will be tested using a simple timeout test to determine whether they are alive. A list of supported proxy sources can be obtained via
proxy_src_list = getprox.sources()
Proxies may also be obtained from a specific source or sources. For example:
proxy_uri_list = getprox.proxy_get('letushide')
Internally, proxy retrieval and testing is performed asynchronously; one can also access the asynchronous mechanism as follows:
p = getprox.ProxyGet() # .. wait for a while .. proxy_src_list = p.get()
Instantiation of the ProxyGet class will launch threads that perform retrieval and testing. If the threads finish running, the get() method will return the retrieved proxy URIs; if not, the method will return an empty list.
To Do
- Add support for more proxy sources.
- Expose proxy selection options for specific sources.
- Provide more robust proxy checking algorithm.
License
This software is licensed under the BSD License.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/getprox/ | CC-MAIN-2018-22 | refinedweb | 231 | 58.89 |
Search:
Forum
General C++ Programming
Complete the code using questions 1-6
Complete the code using questions 1-6
Nov 11, 2012 at 6:07pm UTC
alex00
(1)
1. Define data structure:
The structure for each company data block has the following members in the exact sequence (line 12):
char name [STRLONG];
char street [STRLONG];
char city [STRMED];
char state [STRMED];
char zip [STRSHORT];
char phone [STRSHORT];
char cell [STRSHORT];
int rating;
char contact [STRLONG];
2. Function Prototype:
One user defined function is in the program has to have a prototype (line 15) as follows:
void printInfo(company* info, FILE* fp);
3. Declare file pointers:
Inside the program main function, two file pointers need to be declared: Input database file pointer line 19 and Output text file pointer line 20.
4. Declare data structure:
Declare an instance of the generic company data structure line 25 used to hold read file contents. Use the 'new' operator creating a block of memory returning a pointer to the block. The pointer is named and is used throughout the program to reference the structure.
company* info= new company;
5. Open input database file:
Lines 27-29 prompt the user for the input file name. A char array for the input file name needs to be declared before it is used. The array can be declared with a long string constant [STRLONG]. The declaration should be placed at line 22. That input file name is then used on line 32 where the file is to be opened.
The 'if' epression should look like the following and specify read binary :
if ((file pointer = fopen (calling arguments)) == NULL)Line 32 uses the fopen function
.
Fill in the appropriate values used in the 'if' condition expression. Since the pipes database file is a binary database, the fopen specification should be to 'read binary' mode.
6. Open output text file:
Lines 38-40 prompt the user for the output text file name. A char array for the output file name needs to be declared before it is used. The array can be declared with a long string constant [STRLONG]. The declaration should be placed at line 23. That output file name is then used on line 43 where the file is to be opened.
The 'if' epression should again look like the following except now specify write text:
if ((file pointer = fopen (calling arguments)) == NULL)
Use the fopen function referenced above to open the output text file. Fill in the appropriate values used in the 'if' condition expression. Since the output text file is composed of strings, the fopen specification should be set to 'write text' mode.
Line 48 sets up a loop to read each company information block one-at-a-time from the beginning of the file to the end. Each time a block is read, the contents are printed to the output text file.
1 #include <stdio.h>
2 #include <stdlib.h>
3 #include <string.h>
4 // set string lengths
5 #define STRSHORT = 15 chars
6 #define STRMED = 30 chars
7 #define STRLONG = 80 chars
8 using namespace std;
9
10 struct company
11 {
12 ?? add member elements for structure
13 };
14
15 void printInfo(company* info, FILE* fp);
16
17 int main()
18 {
19 declare input file pointer ??
20 declare output file pointer ??
21 int BlockNumber;
22 char ??[STRLONG]; //input file name
23 char ??[STRLONG]; // output file name
24
25 create instance of company information structure
26
27 printf("Enter input file name :");
28 scanf("%s", input file name );
29 fflush(stdin); /* flush keyboard buffer */
30
31 // open binary data file
32 if ( fill in the fopen expression to open the database file )
33 {
34 fprintf(stderr, "Error opening input file.");
35 exit(1);
36 }
37
38 printf("Enter output file name :");
39 scanf("%s", output file name );
40 fflush(stdin); /* flush keyboard buffer */
41
42 // open output text file
43 if ( fill in the fopen expression to open the output text file )
44 {
45 fprintf(stderr, "Error opening output file.");
46 exit(1);
47 }
48 for (BlockNumber=0; BlockNumber<10; BlockNumber++)
49 {
50 /* Move the position indicator to the specified element. */
51 if ( fill in the fseek expression to position to BlockNumber )
52 {
53 fprintf(stderr, "\nError using fseek().");
54 exit(1);
55 }
56 /* Read in a single info. */
57 fread( fill in the fread expression to read data );
58
59 fprintf(op,"\n\n\n Block %d\n\n",BlockNumber); /add header text
60 printInfo (fill in correct calling arguments);
61 }
62 return 0;
63 }
Topic archived. No new replies allowed.
C++
Information
Tutorials
Reference
Articles
Forum
Forum
Beginners
Windows Programming
UNIX/Linux Programming
General C++ Programming
Lounge
Jobs
|
v3.1
Spotted an error? contact us | http://www.cplusplus.com/forum/general/84650/ | CC-MAIN-2014-42 | refinedweb | 783 | 62.38 |
Specifies an address range in the VRAM. More...
#include <VDPVRAM.hh>
Specifies an address range in the VRAM.
A VDP subsystem can use this to put a claim on a certain area. For example, the owner of a read window will be notified before writes to the corresponding area are commited. The address range is specified by a mask and is not necessarily continuous. See "doc/vram-addressing.txt" for details. TODO: Rename to "Table"? That's the term the VDP data book uses. Maybe have two classes: "Table" for tables, using a mask, and "Window" for the command engine, using an interval.
Definition at line 135 of file VDPVRAM.hh.
Disable this window: no address will be considered inside.
Definition at line 182 of file VDPVRAM.hh.
References openmsx::VRAMObserver::updateWindow().
Gets the mask for this window.
Should only be called if the window is enabled. TODO: Only used by dirty checking. Maybe a new dirty checking approach can obsolete this method?
Definition at line 146 of file VDPVRAM.hh.
Gets a pointer to a contiguous part of the VRAM.
The region is [index, index + size) inside the current window.
Definition at line 219 of file VDPVRAM.hh.
References isContinuous(), and utf8::unchecked::size().
Similar to getReadArea(), but now with planar addressing mode.
This means the region is split in two: one region for the even bytes (ptr0) and another for the odd bytes (ptr1).
Definition at line 233 of file VDPVRAM.hh.
References Math::floodRight(), and utf8::unchecked::size().
Is there an observer registered for this window?
Definition at line 270 of file VDPVRAM.hh.
Is the given index range continuous in VRAM (iow there's no mirroring) Only if the range is continuous it's allowed to call getReadArea().
Definition at line 190 of file VDPVRAM.hh.
References Math::floodRight(), and utf8::unchecked::size().
Referenced by getReadArea().
Alternative version to check whether a region is continuous in VRAM.
This tests whether all addresses in the range 0bCCCCCCCCCCXXXXXXXX (with C constant and X varying) are continuous. The input must be a value 1-less-than-a-power-of-2 (so a binary value containing zeros on the left ones on the right) 1-bits in the parameter correspond with 'X' in the pattern above. Or IOW it tests an aligned-power-of-2-sized region.
Definition at line 207 of file VDPVRAM.hh.
References openmsx::mask.
Test whether an address is inside this window.
"Inside" is defined as: there is at least one index in this window, which is mapped to the given address. TODO: Might be replaced by notify().
Definition at line 296 of file VDPVRAM.hh.
Referenced by openmsx::VDPVRAM::cpuRead(), openmsx::VDPVRAM::cpuWrite(), and notify().
Notifies the observer of this window of a VRAM change, if the changes address is inside this window.
Definition at line 305 of file VDPVRAM.hh.
References isInside(), and openmsx::VRAMObserver::updateVRAM().
Reads a byte from VRAM in its current state.
Definition at line 253 of file VDPVRAM.hh.
Referenced by openmsx::Graphic4Mode::point(), openmsx::Graphic5Mode::point(), openmsx::Graphic6Mode::point(), openmsx::Graphic7Mode::point(), and openmsx::NonBitmapMode::point().
Similar to readNP, but now with planar addressing.
Definition at line 261 of file VDPVRAM.hh.
Unregister the observer of this VRAM window.
Definition at line 285 of file VDPVRAM.hh.
Referenced by openmsx::VDPVRAM::setRenderer().
Definition at line 310 of file VDPVRAM.cc.
Sets the mask and enables this window.
Definition at line 163 of file VDPVRAM.hh.
References openmsx::VRAMObserver::updateWindow().
Referenced by openmsx::VDPVRAM::setRenderer(), setSizeMask(), and openmsx::VDPVRAM::VDPVRAM().
Register an observer on this VRAM window.
It will be called when changes occur within the window. There can be only one observer per window at any given time.
Definition at line 279 of file VDPVRAM.hh.
Referenced by openmsx::VDPVRAM::setRenderer(), and openmsx::SpriteChecker::SpriteChecker().
Inform VRAMWindow of changed sizeMask.
For the moment this only happens when switching the VR bit in VDP register 8 (in VR=0 mode only 32kB VRAM is addressable).
Definition at line 315 of file VDPVRAM.hh.
Only VDPVRAM may construct VRAMWindow objects.
Definition at line 332 of file VDPVRAM.hh. | http://openmsx.org/doxygen/classopenmsx_1_1VRAMWindow.html | CC-MAIN-2020-40 | refinedweb | 687 | 53.58 |
Static Site Generators Reviewed: Jekyll, Middleman, Roots, Hugo
In my previous article, I looked at why static website generation is growing in popularity, and I gave a high-level overview of all of the components of a modern generator.
In this article, we’ll look at four popular static website generators — Jekyll, Middleman, Roots, Hugo — in far more detail. This should give you a great starting point for finding the right one for your project. A lot of other ones are out there, and many of them could have made this list. The ones I chose for this article represent the different trends that dominate the landscape today.
Each generator takes a repository with plain-text files, runs one or more compilation phases, and spits out a folder with a static website that can be hosted anywhere. No PHP or database needed.
Jekyll
No article today about static website generation can get by without mentioning Jekyll.
We might not be talking about the resurgence of static websites if GitHub’s founder, Tom Preston-Werner, hadn’t sat down in his San Francisco apartment in October 2008 with a glass of apple cider and the urge to write his own blogging engine.
The result was Jekyll, “a simple, blog-aware, static site generator.”
One of the brilliant ideas behind Jekyll is that it lets any normal static website be a valid Jekyll project. This makes it one of the easiest generators to get started with:
- Take a plain HTML mockup of a blog.
- Get rid of repeating headers, menus, footers and so on by working with layouts and includes.
- Turn pages and blog posts into Markdown, and pull the content into the templates.
Along the way, Jekyll can act as a local web server and keep watch over any files in your project, generating all of the HTML, CSS and JavaScript files from templates, Markdown, Sass or CoffeeScript files.
Jekyll was the first static generator to introduce the concept of “front matter,” a way of annotating templates or Markdown files with meta data. Front matter is a bit of YAML at the top of any text file, indicated by three leading and following hyphens (---):
---
title: A blog post
date: 2014-09-01
tags: ["meta", "yaml"]
---

# Blogpost with meta data

This is a short example of a Markdown document with meta data as front matter.
Templating Engine
Jekyll is built on Liquid, a templating engine that originated with Shopify. This is both a blessing and a curse. Liquid is a safe templating engine made to run untrusted templates for Shopify’s hosted platform. That means there is no custom code in the templates. Ever.
On the one hand, this can make templates simpler and more declarative, and Liquid has a good set of filters and helpers built in out of the box.
On the other hand, it does mean you have to start creating your own Liquid helpers from Jekyll plugins if you want to do anything that’s not baked in.
Liquid lets you use variables in your templates like this:
{{ some_variable }}. Blocks of logic look like this:
{% if some_variable %}Show this{% endif %}. Jekyll adds a few tags to handle includes and links, plus some helpers for sorting, filtering and escaping content. One glaring omission is a simple way to handle default values for variables. At some point in the future,
{{ some_variable | default: ‘Default Value’ }} should start working, but it’s not there yet. So, right now you’ll find a lot of clunky
if and
else statements in Jekyll websites that work around this.
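The workaround usually looks something like this (the variable name is invented for illustration):

```liquid
{% if page.author %}
  <span>By {{ page.author }}</span>
{% else %}
  <span>By Anonymous</span>
{% endif %}
```

With a working default filter, the same thing would collapse to a single {{ page.author | default: "Anonymous" }} expression.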
Content Model
Jekyll’s content model has evolved a lot since the tool was conceived as a simple blogging engine.
Today, content can be stored in several different forms and take on different behavior.
The simplest form is an individual document in either Markdown or HTML. This file gets converted into a corresponding HTML page when the page is built. The document can specify a layout that will be used when it is turned into an HTML page, as well as specify various meta data, which you can access from templates via the
{{page}} variable.
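As a rough sketch (file names are hypothetical), a layout pulls that meta data back in through {{ page }} and drops the rendered document into {{ content }}:

```html
<!-- _layouts/default.html -->
<html>
  <head>
    <title>{{ page.title }}</title>
  </head>
  <body>
    {% include header.html %}
    {{ content }}
  </body>
</html>
```

A document that sets layout: default in its front matter is converted from Markdown to HTML and then injected where {{ content }} appears.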
Jekyll has special support for a folder named
_posts that contains Markdown files with a naming scheme of
yyyy-mm-dd-title-of-the-post.md. Posts behave like you would typically expect from entries on a blog.
Since version 2.0, Jekyll supports collections. A collection is a folder with Markdown documents. You can access collections in templates through the special
{{site.collections}} variable, and you can configure each document in the collection to have its own permalink. One big upcoming change in Jekyll 3.0 is that the distinction between
_posts and
collections will be eliminated.
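Configuring a collection is a matter of a few lines in _config.yml; here is a sketch with a hypothetical recipes collection:

```yaml
# _config.yml
collections:
  recipes:
    output: true                  # render each document to its own page
    permalink: /recipes/:title/   # permalink pattern per document
```

Markdown files placed in a _recipes folder then become available to templates as site.recipes.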
The last form of content is data files. These are stored in a special
_data folder, and they can be YAML, JSON or CSV files. From there, you can pull the data into any template file through the
{{site.data}} variable.
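For example, a hypothetical _data/authors.yml file holding a list of name and twitter pairs could be rendered from any template like this:

```liquid
<ul>
  {% for author in site.data.authors %}
    <li>{{ author.name }} (@{{ author.twitter }})</li>
  {% endfor %}
</ul>
```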
Asset Pipeline
Jekyll’s asset pipeline is extremely simple. Just as with the logic-less Liquid, this is both good and bad. There’s no built-in support for live reloading, minification or asset bundling; however, Sass and CoffeeScript are pretty straightforward to handle. Any
.sass,
.scss or
.coffee file that starts with YAML front matter will be processed by Jekyll and turned into a corresponding
.css or
.js file in the final output for the static website.
This means a CoffeeScript file would have to look like this in order to get processed by Jekyll:
---
# Empty YAML front matter
---

alert "Hello from CoffeeScript"
Your editor’s syntax highlighter might not be super-excited about the leading hyphens.
If you look at the code for large Jekyll websites out there in the wild, you’ll see that many of them drop Jekyll’s built-in asset pipeline in favor of a combination of Grunt or Gulp with Jekyll to run their builds. For a large project, this is typically the way to go because you can take advantage of the large infrastructure around these projects and get BrowserSync or LiveReload to work.
Putting It All Together
Let’s look at an actual Jekyll website to see how all of these parts fit together. What better source to learn from than the official Jekyll website. We’ll be looking at the documentation section here. You can follow along in the GitHub repository.
Here, I’ve marked roughly where all of the parts of the page come from.
The main structure of the website is defined in
_layouts/default.html. There are two includes,
_includes/header.html and
_includes/footer.html. The header has a fairly hardcoded navigation menu. Jekyll doesn’t have a way to pull in a specific set of pages and iterate over them; so, typically, the main navigation of a website will end up being coded by hand.
Each page in the documentation section is a Markdown file in the
_docs/ folder. The pages use a handy feature of Jekyll called nested layouts. The layout for the document section is set to
_layouts/docs.html, and that layout in turn uses
_layouts/default.html.
The navigation sidebar is generated from data in
_data/docs.yml, which arranges the different files in the documentation into different groups, each with its title.
Extending Jekyll
Jekyll is quite simple to extend, and the ecosystem of plugins is fairly large. The simplest way to extend Jekyll is to add a Ruby file in the
_plugins folder. There are five types of plugins: generators, converters, commands, tags and filters.
As mentioned earlier, the Liquid templating engine is strict about not allowing any code in the templates. Tags and filters work around this by enabling you to add your own tags and filters.
Generators and converters let you hook into the build process of Jekyll and generate extra pages or support new file formats.
Commands let you add new features to Jekyll’s command-line interface.
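As a sketch of the filter variety, here is a hypothetical reading-time filter. Dropped into the _plugins folder, Jekyll would pick it up automatically; the module name, filter name and words-per-minute figure are all made up for illustration:

```ruby
# _plugins/reading_time.rb (hypothetical example)
module ReadingTimeFilter
  WORDS_PER_MINUTE = 200

  # Turns a block of text into an estimate such as "3 min read".
  def reading_time(input)
    words = input.to_s.split.size
    minutes = (words / WORDS_PER_MINUTE.to_f).ceil
    minutes = 1 if minutes < 1
    "#{minutes} min read"
  end
end

# Inside Jekyll, the module would be registered with Liquid:
#   Liquid::Template.register_filter(ReadingTimeFilter)
# after which templates can write {{ content | reading_time }}.
```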
In Summary
Jekyll is widely used, and since version 2, the content model has grown rich enough to support websites with way more complexity than that of a simple blog. For a large project, you’ll probably quickly outgrow the limited asset pipeline and start relying on Gulp or Grunt. Liquid is battle-tested and a solid templating engine, but it can feel limiting. And as soon as you want to do complex filtering or querying on your content, you’ll need to write your own plugins.
Middleman
Middleman is about the same age as Jekyll. While it never got the widespread adoption that Jekyll achieved from the latter’s default integration with GitHub pages, it has quietly become the backbone of websites for some of the web’s most design-savvy companies: The websites for MailChimp, Nest and Simple are all built with Middleman.
It’s a thriving open-source project with more than a hundred contributors. The muscle behind it is Thomas Reynolds, technical director of Portland-based Instrument.
Whereas Jekyll was born of the desire for a simple, static blogging engine, Middleman was built as a framework for more advanced marketing and documentation websites. It’s a powerful tool and fast to pick up if you’re coming from the world of Rails.
Templating Engine
A stated goal of its author is to make Middleman feel like Ruby on Rails for static websites. Just as in Rails, the default templating engine is Ruby’s standard embedded Ruby (ERB) templates, but swapping these out for Haml or Liquid is straightforward.
ERB is a straightforward templating engine that lets you use free-form Ruby in your templates, with no restrictions. This gives the engine a lot more power than Liquid, but obviously it also requires more discipline on your part because you can write as much code as you want right in your templates.
Asset Pipeline
For years, the stable version of Middleman has been version 3. Now, version 4 is in beta and will bring some big changes to both the content model and the asset pipeline.
The core of Middleman’s content model is the
sitemap. The site map is a list of all of the files that makes up your Middleman file, called “resources” in Middleman terminology.
Each resource has a
source_file and a
destination_file and gets fed through Middleman’s asset pipeline when the website is built.
A simple
source/about/index.html source file would end up as a
build/about/index.html destination file without any transformations. Just as when you use Rails’ asset pipeline, you can string together file extensions to specify what transformations to apply to your files.
A file named
source/about/index.html.erb will get passed through the ERB templating engine and get transformed into
build/about/index.html. A file named
source/js/app.js.coffee would be compiled as CoffeeScript and end up as
build/js/app.js.
The asset pipeline is built on Sprockets, just like Rails, which means you can use “magic” comments in your JavaScript and CSS files to include dependencies:
//= require 'includes/test.js'
This makes it easy to split your front end into small modules and have Middleman resolve the dependencies at build time.
The upcoming version 4 introduces the concept of external pipelines. These let Middleman control external tools such as Ember CLI and Webpack during the build process and makes its asset pipeline even more powerful. This is really handy for getting Middleman to spin up a separate process running Ember CLI and then proxy the right requests through to the Ember.js server.
Content Model
In your templates, you can access the site map and use Ruby to easily access, filter and sort the content.
The current stable version of Middleman (3) comes with a query interface in the site map that mimics Active Record (the object-relational mapping that powers Rails). That’s been stripped in the upcoming version 4, in favor of simple Ruby methods to query the data.
Let’s say you have a folder named
source/faq. The FAQ entries are stored in Markdown files with a bit of front matter, and one looks something like this:
--- title: What is Middleman? position: 1 --- Middleman is a [static website generator]() with all of the shortcuts and tools of modern web development.
Suppose you want to pull all of these entries into an ERB template and order them by position. Our imaginary
source/faq.html.erb would look something like this:
<h1>FAQ</h1> <% sitemap.resources .select { |resource| resource.path =~ /^faq\// } .sort_by { |resource| resource.data.position } .each do |resource| %> <h2 class="question"><%= resource.data.title %></h2> <div class="answer"><%= resource.render(:layout => false) %></div> <% end %>
Version 4 introduces the concept of collections, which let you turn those kinds of filters in the site map into a collection. When running in LiveReload mode, Middleman watches your file system and automatically update the collections (and rebuilds any file that depends on it) when something changes.
Once you get the hang of this, constructing any content architecture you might need for your website is pretty straightforward. If something can be built statically, Middleman can be made to build it for you.
There’s also support for data files (YAML and JSON files stored in a
data/ folder), just like in recent versions of Jekyll.
Extending Middleman
Middleman allows extension authors to hook into different points through a powerful API. This is not needed nearly as often as it is in Jekyll, though, because the freedom to create ad-hoc helper functions, filtered collections and new pages, along with a templating engine that allows ERB, means you can do a lot out of the box that would require an extension in many other generators.
Authoring Middleman extensions is not very straightforward or well documented. To really get going, dig into some existing extensions and figure out how it’s all done.
Once you get going, however, Middleman offers hooks into both the CLI and the content model. The official directory of extensions lists a wide selection.
Roots
Like Middleman, Roots comes from an agency that needed a static website generator for its client work. Carrot, based in New York and now part of the Vice media group, sponsors the development of Roots. Jeff Escalante of Carrot is the mastermind behind it.
Whereas Middleman is like a static version of Ruby on Rails, Roots clearly comes from the world of Node.js-based front-end tools.
Roots is a lot more opinionated than Middleman, and it has obviously been tailored to make building websites with Carrot’s standard toolchain highly efficient.
Templating Engine
Roots comes with support for the Jade templating engine out of the box. Jade heavily abbreviates HTML’s syntax, cutting all of the cruft from HTML, and it makes embedding JavaScript snippets in templates clean and simple. It looks quite different from normal HTML, so copying and pasting HTML snippets from elsewhere is harder because they’ll need to be rewritten first.
You can switch the templating engine to EJS, and supporting other options wouldn’t be hard, but because Carrot has settled on Jade for its internal toolchain, all guides, examples and so on assume that you’re using Jade.
Asset Pipeline
Roots comes with a built-in asset pipeline tuned for CoffeeScript and Stylus. As with Jade for templates, you can make Roots handle other formats, but these two are what Carrot has built the workflow around, and if you go with Roots, you’ll probably have an easier time adopting the same workflow.
That being said, Roots’ asset pipeline is easily extensible. One great extension adds support for Browserify, a tool that makes it trivial to use any library distributed with npm in your front-end JavaScript. Roots’ asset pipeline also support multipass compilation. If a file is named
myfile.jade.ejs, then it would be compiled with EJS first and then with Jade.
As an asset pipeline, Roots obviously doesn’t have the ecosystem you’d find around more general build tools, such as Grunt, Gulp and Brunch. However, if you don’t try to fight Roots and you adopt a workflow similar to Carrot’s, then you’ll find that it is very simple to set up and get going with, while being just powerful enough to work for most projects.
Content Model
Out of the box, Roots doesn’t really have any preference for content models. It simply takes templates in a
views/ folder and turns them into HTML documents in the
public/ folder. Jade makes it easy to embed Markdown, but that’s about it:
extends layout block content :markdown ## This Is Markdown Everything in this block will be parsed as Markdown and inserted in the content block within the layout.jade template.
Extending Roots
Roots doesn’t have any content model as such because it relies completely on extensions for all content, and those extensions come in many flavors.
The official directory doesn’t list as many extensions as what you’ll find for Middleman or Jekyll, but off the bat you’ll notice several for dealing with different kinds of content.
There is the Roots Dynamic Content extension, which gives you something similar to Jekyll’s collections with front matter and a Jade body. There’s also my own Roots Posts extension, which adds collections in Markdown plus front matter, just like Jekyll.
The Records and YAML extensions add support for data files that can be pulled into any template. The former will even fetch data from any URL and make it available from the templates.
A similar extension is Roots Contentful, which Carrot blogged about in its article “Building a Static CMS.” The extension pulls in content from Contentful’s API and lets you filter and iterate over it.
Getting started with writing Roots extensions is very easy, and the documentation has a really good introduction to how the different hooks and compilation passes work. More thorough documentation on the inner workings of Roots wouldn’t hurt, though.
Fortunately, there is a very active Gitter chat room, where both Jeff Escalante and other Roots contributors readily answer questions.
Hugo
Hugo is a much more recent addition to the world of static website generators, having started just two years ago. It’s certainly growing the fastest in popularity at the moment.
Hugo is written in Go, which makes it the only really popular generator written in a statically compiled language. Most of its big advantages, and largest drawback, come from this fact.
Let’s start with the good. Hugo is fast! Not just fast as in, “This is pretty cool.” Fast as in, “Whoa! This feels like more than 1G acceleration!”
A great benchmark on YouTube shows Hugo building 5000 pages in about 6 seconds, and PieCrust2’s author, Ludovic Chabant, has a blog post that puts these numbers into context, showing Hugo generating a sample website about 75 times faster than Middleman.
Hugo is also incredibly simple to install and update. Ruby and Node.js are fine if you’ve already set up a development environment; otherwise, you’re in for a lot of pain. Not so with Hugo: Just download the binary for your platform and run it — no runtime dependencies or installation process. Want to update Hugo? Just download a new binary and you’re set.
Templating Engine
Hugo uses the package template (
html/template) from Go’s standard library, but it also supports two alternative Go-based template engines, Amber and Ace.
The package template engine is similar to Liquid in that it allows a limited amount of logic in your templates. As with Liquid, this is both a blessing and a curse. It will make your templates simpler and usually cleaner, but obviously it makes you far more dependent on whatever functions the templating language provides.
Fortunately, Hugo provides a really well-conceived set of helper methods that make it easy to do custom filtering, sorting and conditionals.
There’s no concept of layouts in the package template as we see in Jekyll, Roots and Middleman — just partials.
Variables and functions are inserted via curly braces:
<h1>{{ .Site.Title }}</h1>
One really interesting aspect of the package template engine is that variable insertion is context-aware, so the engine will always escape the output according to the context you’re in. So, the same output would be escaped differently according to whether you’re in an HTML block, within the quotes of an HTML attribute or in a
<script> tag.
Asset Pipeline
This is one of Hugo’s big weaknesses. You’d better want to work with plain CSS and JavaScript, or integrate an external asset pipeline with a tool like Gulp or Grunt, because Hugo doesn’t include any kind of asset pipeline.
When Hugo builds your website, it copies any files in the
static folder to your build directory, but that’s it. Want Sass, EcmaScript6, CSS auto-prefixing and so on? You’ll have to set up an external build tool and make Hugo part of a build process (which negates many of the advantages of having just one static binary to install).
Hugo does come with LiveReload built in. If you can do with no-frills CSS and JavaScript, that might be all you need.
Extensions
Because Go is a statically compiled binary and Hugo is distributed as a single compiled file, there is no easy way to add a plugin or extension engine to Hugo.
This means you’ll need to rely exclusively on the features built into Hugo, rather than roll your own. In this way, Hugo is almost the exact opposite of Roots, which is almost nothing on its own without plugins.
Fortunately, Hugo comes with batteries included and packs a big punch out of the box. Shortcodes, dynamic data sources, menus, syntax highlighting and tables of contents are all built into Hugo, and the templating language has enough options to sort and filter content. So, a lot of the cases for which you would otherwise want plugins or custom helpers are already taken care of.
The closest you’ll come to an extensions engine in Hugo are the external helpers, which currently add support for the AsciiDoc and reStructuredText formats, in addition to Markdown. But there’s no real way for these external helpers to interact with Hugo’s templating engine or content model.
Content Model
With the bad parts behind us — no asset pipeline, no extensions — let’s get back to the good stuff.
Hugo has the most powerful content model out of the box of any of the static website generators.
Content is grouped into sections with entries. Sections can be nested as a tree:
└── content ├── post | ├── firstpost.md // <- | ├── happy | | └── ness.md // <- | └── secondpost.md // <- └── quote ├── first.md // <- └── second.md // <-
Here,
post/happy and
quote would be sections, and all of the Markdown files would be entries. As with most other static website generators, entries may have meta data encoded as front matter. Hugo lets you write front matter in YAML, JSON or TOML.
Content from different sections can easily be pulled into templates, filtered and sorted. And the command-line tool makes it easy to set up boilerplates for different content types, to make writing posts, quotes and so on easy.
Here’s a condensed version of a short real-life snippet from Static Web-Tech that pulls in the three most recent entries from the “Presentations” section:
<ul class="link-list recent-posts"> {{ range first 3 (where .Site.Pages.ByDate "Section" "presentations")}} <li> <a href="{{ .Permalink }}">{{ .Title }}</a> <span class="date">{{ .Params.presenter }}</span> </li> {{ end }} </ul>
The
range and
where syntax with various filters takes a little getting used to. But once it clicks, it’s very powerful.
The same could be said for Hugo’s taxonomy, which adds support for both tags and categories (with their own pages — so, you could list all posts in a category, list all entries with a particular tag, etc.) and helpers for showing counts, listing all tags and so on.
Apart from this, Hugo can also get content from data files and load data dynamically from URLs during the build process.
Modern Static Website Technology
While static websites have been around since the beginning of the Internet, modern static website generation is just getting started.
All of the generators reviewed above are powerful modern tools that have already been used by large agencies to develop big, complex websites. They are all under active development and will only get more powerful and more flexible.
The whole ecosystem around modern static website technology is growing rapidly, with an emerging array of external services for hosting, search, e-commerce, commenting and similar functionality. The limit of what you can achieve with a static website keeps getting pushed.
If you’re a beginner, one tricky question is simply where to start. The answer will generally depend on what programming language you’re familiar with and whether you’re more of a designer or a developer:
- Jekyll is a safe choice as long as you’re familiar with the whole Ruby toolchain and you use Mac or Linux. (Ruby’s ecosystem is not very Windows-friendly.)
- Middleman has broad appeal to anyone coming from the world of Rails. It’s geared to people who are comfortable writing Ruby, and it is a better fit than Jekyll for large websites with a lot of sections and a complex content configuration.
- Roots is great for front-end developers who are comfortable with JavaScript (or CoffeeScript) and want to build custom-designed websites.
- Hugo is great for content-driven websites, because it is completely dependency-free and is easy to get going. What it lacks for in extensibility, it largely makes up for with a good content model and super-fast build times.
Use, share, improve, enjoy. Welcome to modern static website technology!
Further Reading on SmashingMag:
- Using A Static Site Generator At Scale: Lessons Learned
- Build A Blog With Jekyll And GitHub Pages
- Content Modeling With Jekyll
- Creating Websites With Dropbox-Powered Hosting Tools
| https://www.smashingmagazine.com/2015/11/static-website-generators-jekyll-middleman-roots-hugo-review/ | CC-MAIN-2022-05 | refinedweb | 4,348 | 62.48 |
Hello!
I noticed that when CIL is invoked with --dosimplify option, it
transforms a simple function call to call-by-pointer, but only if the
pointer to the function has been taken somewhere else.
Is there an option to make CIL leave such calls as they were? Of course,
this should not affect the compilation of the resultant source, but, for
many static software verification tools, calls-by-pointer are
incomprehensible.
The sample file has been attached, and the options I used were:
./obj/x86_LINUX/cilly.asm.exe --dosimplify --printCilAsIs --domakeCFG
why_pointer.c --out result.c
The resultant main function looks like this:
int main(void)
{ void (*__cil_tmp1)(void) ;
{
{
good();
__cil_tmp1 = & bad;
(*__cil_tmp1)();
}
return (0);
}
}
--
Pavel Shved
ISPRAS
(Institute for System Programming
of Russian Academy of Sciences)
Operating Systems section
This option prevents converting direct function calls into the
calls-by-pointers if the address of the function has been taken
somewhere. Calls from inside of structures are converted anyway.
(A small debug print fix has been also applied)
---
src/ext/simplify.ml | 14 +++++++++++---
1 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/src/ext/simplify.ml b/src/ext/simplify.ml
index 706e12d..0e3de67 100644
--- a/src/ext/simplify.ml
+++ b/src/ext/simplify.ml
@@ -85,6 +85,10 @@ let splitStructs = ref true
let simpleMem = ref true
let simplAddrOf = ref true
+(* Whether to convert function calls to calls-by-pointer when function address
+ * has been taken somewhere. *)
+let convertDirectCalls = ref true
+
let onlyVariableBasics = ref false
let noStringConstantsBasics = ref false
@@ -130,7 +134,7 @@ and makeBasic (setTemp: taExp -> bExp) (e: exp) : bExp =
(* Make it a three address expression first *)
let e' = makeThreeAddress setTemp e in
if dump then
- ignore (E.log " e'= %a\n" d_plainexp e);
+ ignore (E.log " e'= %a\n" d_plainexp e');
(* See if it is a basic one *)
match e' with
| Lval (Var _, _) -> e'
@@ -220,8 +224,9 @@ and simplifyLval
in
let a' = if !simpleMem then makeBasic setTemp a' else a' in
Mem (mkCast a' (typeForCast restoff)), restoff
-
- | Var v, off when v.vaddrof -> (* We are taking this variable's address *)
+ (* We are taking this variable's address; but suppress simplification if it's a simple function
+ * call in no-convert mode*)
+ | Var v, off when v.vaddrof && (!convertDirectCalls || not (isFunctionType (typeOfLval lv) )) ->
let offidx, restoff = offsetToInt v.vtype off in
(* We cannot call makeBasic recursively here, so we must do it
* ourselves *)
@@ -714,6 +719,9 @@ let feature : featureDescr =
fd_extraopt = [
("--no-split-structs", Arg.Clear splitStructs,
" do not split structured variables");
+ ("--no-convert-direct-calls", Arg.Clear convertDirectCalls,
+ " do not convert direct function calls to function pointer \
+ calls if the address of the function was taken");
];
fd_doit = (function f -> iterGlobals f doGlobal);
fd_post_check = true;
--
1.7.0.2
Here's a sample file to test the fix.
However, I'm not sure how and if it should be inserted into the CIL test
suite.
--
Pavel Shved
ISPRAS
(Institute for System Programming
of Russian Academy of Sciences)
Operating Systems section
On Wed, Sep 28, 2011 at 04:24:48PM +0400, Pavel Shved wrote:
> However, I'm not sure how and if it should be inserted into the CIL test
> suite.
Thanks for the patch, I've applied it. Not including the test, though,
since I can't figure out a way to check the result automatically.
Best regards,
--
Gabriel
On Wednesday, September 28, 2011 18:44:29 you wrote:
> Thanks for the patch, I've applied it.
Thank you.
(Just some background: the patch is aimed for CIL-lifying Linux device
drivers, which usually contain statically initialized structures full of
pointers to various driver functions. We wanted to preserve direct calls
to these functions in order to run the generated code through different
static verification tools.)
--
Pavel Shved
ISPRAS
(Institute for System Programming
of Russian Academy of Sciences)
Operating Systems section | http://sourceforge.net/p/cil/mailman/cil-users/thread/201109281852.57840.shved@ispras.ru/ | CC-MAIN-2015-11 | refinedweb | 637 | 53.1 |
Slides of structure 2: Modern C++ for Computer Vision Lecture 2: C++ Basic Syntax (uni-bonn.de)
This part mainly introduces the keywords, entities, entity declarations and definitions, types, variables, naming rules of identifiers, expressions, if else structures, switch structures, while loops, for loops, arithmetic expressions, conditional expressions, self increasing and self decreasing in C + +
if(STATEMENT){ //... } else{ //... } switch(STATEMENT){ case 1: EXPRESIONS;break; case 2: EXPRESIONS;break; } while(STATEMENT){ //... } for(int i=0;i<10;i++){ //... }
Spooler alert (supplementary)
1. For loop in C + + 17 and python 3 Comparison of for loops in X
In the latest C++ 17 standard, the new writing method of for loop is compared with that in Python:
//Pythonic Implementation my_dict = {'a':27,'b':3} for key,value in my_dict.items(): print(key,"has value",value) // Implementaion in c++ 17 std::map<char,int> my_dict{{'a',27},{'b',3}} for (const auto&[key,value]:my_dict) { std::cout<<"has value"<<value<<std::endl; }
It can be seen that the new standard has Python taste, but the implementation of C + + is 15 times faster than Python:
2. Built-in types
For the "Out of the box" type in C + + (Out of the box), you can refer to Fundamental types - cppreference.com
int a = 10; auto b = 10.1f ; // Automatic type [float] auto c = 10; //Automatic type [int] auto d = 10.0 ; // Automatic type [double] std::array<int,3> arr={1,2,3}; // Array of intergers
The automatic type here is also a bit like Python.
3. C-style strings are evil
C + + can be programmed in C style like other types in C:
#include<cstring> #include<iostream> int main(){ const char source[] = "Copy this"; char dest[5]; std::cout<<source<<std::endl; std::strcpy(dest,source); std::cout<<dest<<std::endl; //Source is const, no problem right std::cout<<source<<std::endl; return 0; }
You may think that the source is const char type and should not be changed, but the result is unexpected. This is the so-called "C-style strings are evil", so it is more recommended to use Strings type. There is this
Several precautions:
- #std::string can be used after include < string >
- string type implements operator overloading and can be spliced with +;
- You can check whether STR is empty through str.empty();
- It can be combined with I/O streams to play tricks
For example, in the above example, we use the string type in C + +:
#include<iostream> #include<string> int main(){ const std::string source{"Copy this"}; std::string dest = source; std::cout << source << '\n'; std::cout<< dest <<'\n'; return 0; }
The result of this operation is completely expressed.
Add: why is there such a mistake in the first example? We found std::strcpy Official description , the signature of this function is:
char* strcpy( char* dest, const char* src );
Copies the character string pointed to by src, including the null terminator, to the character array whose first element is pointed to by dest.
The behavior is undefined if the dest array is not large enough. The behavior is undefined if the strings overlap.
Let's take a look at the function prototype of strcpy:
//C language standard library function strcpy is a typical industrial simplest implementation. //Return value: the address of the target string. //The ANSI-C99 standard does not define exceptions, so the return value is determined by the implementer, usually NULL. //Parameters: des is the target string and source is the original string. char* strcpy(char* des,const char* source) { char* r=des; assert((des != NULL) && (source != NULL)); while((*r++ = *source++)!='\0'); return des; } //while((*des++=*source++)); Explanation: the assignment expression returns the left operand, so the loop stops after assignment '\ 0'.
In fact, it's not a language problem here. Both source and dest are located in the stack area and adjacent to each other in memory. Therefore, DeST's memory overflows into source, overwriting the previous part of source and adding \ 0. The location information of both is as follows:
+--------+ ... +--------+ | src[2] | <--- -0x5 +--------+ | src[1] | <--- -0x6 +--------+ | src[0] | <--- -0x7 +--------+ | dest[3]| <--- -0x8 +--------+ | dest[2]| <--- -0x9 +--------+ | dest[1]| <--- -0xa +--------+ | dest[0]| <--- -0xb +--------+
How to avoid buffer Overflow caused by insufficient memory space pointed to by dest?
Here is an official example. In order to avoid unpredictable behavior caused by insufficient dest length, the length of src is adjusted according to the length of src:
#include <iostream> #include <cstring> #include <memory> int main() { const char* src = "Take the test."; // src[0] = 'M'; // can't modify string literal auto dst = std::make_unique<char[]>(std::strlen(src)+1); // +1 for the null terminator std::strcpy(dst.get(), src); dst[0] = 'M'; std::cout << src << '\n' << dst.get() << '\n'; }
Of course, you can also use more secure strncpy or strcpy_s.
4. Any variables can be const
It's worth noting that any type can be declared const as long as you're sure it won't change.
Google style names constants with camelCase and starts with the lowercase letter k, for example:
const float kImportantFloat = 20.0f; const int kSomeInt = 20; const std::string kHello = "Hello";
Add: Google style names variables in snake_case, and all of them are lowercase letters, such as some_var.
Variable reference is also a very common usage, which is faster and less code than copying data, but sometimes we use const to avoid unwanted changes.
5. I/O streams
#Include < iostream > to use I/O stream
Here are the commonly used String streams:
- Standard output cerr and cout
- Standard input cin
- File streams fsstream, i fstream and ofstream
- String stringstream can convert the combination of int, double, string and other types into string, or decompose strings into int, double, string, etc.
6. Program input parameters
C + + allows parameters to be passed to binary files. For example, the main function can receive parameters:
int main(int argc,cahr const *argv[])
Where argc represents the number of input parameters and argv is the array of input strings. By default, the former is equal to 1 and the latter is equal to "< binary_path >" | https://programmer.ink/think/modern_cpp_3-c-basic-syntax.html | CC-MAIN-2022-21 | refinedweb | 1,002 | 59.13 |
On Monday 04 August 2003 21:40, Sebastian Kapfer wrote: > > // change the way it is accessed to prove a point int * p_b = (int *) > > p_a; > > Ouch. Try this in /usr/src/linux/kernel $ grep *\) *. You must be aware of what you are doing when you do a type conversion. Portability is a concern. I am limiting my app to Intel 32 bit Linux. Screw everything else. > Watch out for inheritance, user-defined casting operators and > other funny stuff. C++ adds a few new meanings to the () casting syntax. > In general, your assumption is _wrong_. I have no user defined casting operators or funny stuff. I'm no fan of overloading. > >: Besides, reinterpret_cast is probably a template function doing this: return ((T) x); // type conversion using cast > > That way, you're clearly stating the intent of the cast. It is up to your > compiler what it makes of this statement; the C++ standard doesn't cover > such abuse. Language experts sure get their shorts knotted up over simple questions. I've known some killer programmers and none of them have quoted a language specification in conversation. That was way over the top. That stuff is for compiler writers, not application programmers. I did not start a knowledge contest. If I did inadvertently, then you win. I just wanted to discuss a problem with others of similar interest and try to learn something. I really don't care if my code style is bad, abusive, or politically incorrect. I just want to make a couple of bucks and make something useful. -- Mike Mueller | https://lists.debian.org/debian-user/2003/08/msg00894.html | CC-MAIN-2015-35 | refinedweb | 260 | 68.36 |
CHAPTER 1
INTRODUCTION
1.1 OVERVIEW:
The project entitled "POWER TRANSFORMER PROTECTION USING MICROCONTROLLER-BASED RELAY" is designed around a Peripheral Interface Controller (PIC16F877A). Utility companies have enormous amounts of money invested in transformers of all types, including distribution and power transformers. Operating, maintaining, and inspecting all of these transformers is not an easy task. In order to reduce the burden of maintaining such transformers, a new approach has been developed. This project is mainly used to protect the transformer from damage due to electrical disturbances. The electrical parameters of the transformer, such as current and voltage, are fed as base values to the Peripheral Interface Controller using a keypad, and an output signal is provided to operate a relay by comparing the base values with the operating electrical parameters. The application consists of a board of electronic components, including a PIC16F877A microcontroller with programmable logic. It has been designed to work with high accuracy.
The system is thus capable of protecting the power transformer from malfunctioning.
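The comparison described in the overview, in which base values entered from the keypad are checked against the measured operating values, can be sketched as a small C routine. This is a hardware-independent illustration rather than the project's actual firmware: the function name and the tolerance figure are assumptions made for the example. On the PIC16F877A, the returned decision would drive the relay output pin.

```c
/* Illustrative relay decision logic: trips when the measured voltage
 * or current exceeds its keypad-entered base value by more than the
 * allowed tolerance (e.g. 0.10 for 10 %).                            */
int should_trip(float base_v, float base_i, float tolerance,
                float measured_v, float measured_i)
{
    float v_limit = base_v * (1.0f + tolerance);  /* upper voltage limit */
    float i_limit = base_i * (1.0f + tolerance);  /* upper current limit */
    return (measured_v > v_limit) || (measured_i > i_limit);  /* 1 = trip */
}
```

In the firmware, a routine of this kind would run in the main loop after each analog-to-digital conversion, and a result of 1 would energize the relay driver.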
1.2 ORGANIZATION OF THE THESIS:
This paper mainly deals with the protection of power transformers from various faults. The second chapter covers the methods already in existence for protecting the transformer from various faults. The faults occurring in the transformer and the steps taken to protect the device from them are also discussed, and the components of the project and the embedded system are explained in the same chapter. The next chapter gives an elaborate description of the PIC microcontroller and its working; it covers the core features and peripheral features of the PIC16F877A. The memory organization and the port descriptions are detailed in this chapter. The fourth chapter contains the idea of the software used in the project and the variation code. The fifth chapter describes the hardware implemented in the project and the working principle of its components. The choice of relay, considering the various factors, is also listed in this chapter.
CHAPTER 2
PROTECTION SYSTEM OF TRANSFORMER
2.1 INTRODUCTION:
A protection system for the transformer is indispensable because of voltage fluctuations, frequent insulation failures, earth faults, over currents, etc. Thus, the following automatic protection systems are incorporated:
1. Buchholz devices:
A Buchholz relay, also called a gas relay or a sudden pressure relay, is a safety device mounted on some oil-filled power transformers and reactors equipped with an external overhead oil reservoir called a conservator. The Buchholz relay is used as a protective device sensitive to the effects of dielectric failure inside the equipment. It also provides protection against all kinds of slowly developing faults, such as insulation failure of a winding, core heating and a fall in oil level.
2. Earth fault relays:
An earth fault usually involves a partial breakdown of winding insulation to earth. The resulting leakage current is considerably less than the short-circuit current. The earth fault may continue for a long time and cause damage before it ultimately develops into a short circuit and is removed from the system. This relay usually provides protection against earth faults only.
3. Over current relays:
An over current relay, also called an overload relay, has a high current setting and is arranged to operate against faults between phases. It usually provides protection against phase-to-phase faults and overloading.
4. Differential system:
A differential system, also called a circulating-current system, provides protection against short circuits between turns of a winding and between windings, which correspond to phase-to-phase or three-phase short circuits; that is, it provides protection against both earth and phase faults.
The complete protection of a transformer usually requires a combination of these systems. Most transformers are connected to the supply system through series fuses instead of circuit breakers. In the existing method, the transformer does not have automatic protective relays for its protection.
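The trip criteria of the over current and differential schemes described above can be written as two short predicates. This is an illustrative sketch under assumed settings (the pickup value and bias fraction are not taken from the project), not a complete relay implementation:

```c
/* Over current relay: trips when the phase current exceeds the
 * pickup (plug) setting.                                          */
int overcurrent_trip(float i_phase, float i_pickup)
{
    return i_phase > i_pickup;
}

/* Simple percentage-biased differential check: compares the current
 * entering (i1) and leaving (i2) the protected winding and trips
 * when their difference exceeds a bias fraction of the through
 * (restraining) current.                                          */
int differential_trip(float i1, float i2, float bias)
{
    float i_diff     = (i1 > i2) ? (i1 - i2) : (i2 - i1); /* |i1 - i2|       */
    float i_restrain = (i1 + i2) / 2.0f;                  /* through current */
    return i_diff > bias * i_restrain;
}
```

A healthy winding carries nearly equal currents at both ends, so the differential current stays near zero; an internal fault diverts current and raises it well above the biased threshold.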
2.2 TRANSFORMER – DEFINITION
A device used to transfer electric energy from one circuit to another, especially a pair of multiply wound, inductively coupled wire coils that effect such a transfer with a change in voltage, current, phase, or other electric characteristic.
Fig 2.1 Basic Transformer
2.3 THE UNIVERSAL EMF EQUATION

If the flux in the core is sinusoidal, the relationship for either winding between its number of turns, voltage, magnetic flux density and core cross-sectional area is given by the universal EMF equation (from Faraday's Law):

E = 4.44 f N a B    …(2.1)

where
• E is the sinusoidal rms (root mean square) voltage of the winding,
• f is the frequency in hertz,
• N is the number of turns of wire on the winding,
• a is the cross-sectional area of the core in square metres,
• B is the peak magnetic flux density in tesla,
• P is the power in volt-amperes or watts.
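As a quick numerical check of eq. (2.1), the sketch below evaluates E for an assumed 50 Hz winding with 200 turns, a 0.01 m² core and a peak flux density of 1.2 T (all values are illustrative, not taken from this project):

```c
#include <assert.h>
#include <math.h>

/* Universal EMF equation (2.1): E = 4.44 * f * N * a * B */
double emf_volts(double f_hz, double turns, double area_m2, double b_peak_t)
{
    return 4.44 * f_hz * turns * area_m2 * b_peak_t;
}
```

For the assumed values, E = 4.44 × 50 × 200 × 0.01 × 1.2 = 532.8 V.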
2.4 NECESSITY FOR PROTECTION
Transformers are static devices, totally enclosed and generally oil immersed. Therefore, chances of faults occurring on them are very rare. However, the consequences of even a rare fault may be very serious unless the transformer is quickly disconnected from the system. This necessitates providing adequate automatic protection for transformers against possible faults.
2.5 COMMON TRANSFORMER FAULTS
As compared with generators, in which many abnormal conditions may arise, power transformers may suffer only from:
1. Open circuits
2. Overheating
3. Winding short-circuits
2.5.1 Open circuit Faults:
An open circuit in one phase of a 3-phase transformer may cause undesirable heating. In practice, relay protection is not provided against open circuits because this condition is relatively harmless. On the occurrence of such a fault, the transformer can be disconnected manually from the system.
2.5.2 Overheating Faults:
2.5.3 Winding Short-circuit Faults:
Winding short-circuits (also called internal faults) on the transformer arise from deterioration of winding insulation due to overheating or mechanical injury.
When an internal fault occurs, the transformer must be disconnected quickly from the system because a prolonged arc in the transformer may cause oil fire. Therefore, relay protection is absolutely necessary for internal faults.
2.6 PROPOSED METHOD
In the proposed method, monitoring and protection of the power transformer against overvoltage and overcurrent are performed automatically using a PIC microcontroller.
2.6.1 Components of the project:
The protection of power transformers that we have implemented as our project contains the following, as shown in Fig. 2.2:
• Rectifier, filter and regulating circuit (power circuits)
• Voltage measuring circuit using a Potential Transformer
• Current measuring circuit using a Current Transformer
• Keypad and LCD display
• Driver circuit and a relay
• PIC 16F877A microcontroller board
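The trip decision carried out by the microcontroller in this scheme (energize the relay whenever the measured voltage or current leaves its low/high window) can be sketched in portable C; the threshold values used below are illustrative only:

```c
#include <assert.h>

/* Returns 1 = trip (energize relay coil), 0 = healthy.
   Mirrors the window comparison done in the firmware main loop. */
int window_trip(unsigned char volt, unsigned char volt_low, unsigned char volt_high,
                unsigned char curr, unsigned char curr_low, unsigned char curr_high)
{
    return (volt > volt_high) || (volt < volt_low) ||
           (curr > curr_high) || (curr < curr_low);
}
```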
Fig 2.2 Block Diagram of Protection of power transformer using Microcontroller based relay
2.7 OVERVIEW OF EMBEDDED SYSTEM
• An embedded system is any computer system hidden inside a product other than a computer.
• You will encounter a number of difficulties when you write embedded-system software in addition to those you encounter when you write applications:
• Response— Your system may need to react to events quickly.
• Testability— Setting up equipment to test embedded software can be difficult.
• Debuggability— Without a screen or a keyboard, finding out what the software is doing wrong is a troublesome problem.
• Reliability— Embedded systems must be able to handle any situation without human intervention.
• Memory space— Memory is limited on embedded systems, and you must make the software and the data fit into whatever memory exists.
• Program installation— You will need special tools to get your software into embedded systems.
• Power consumption— Portable systems must run on battery power, and the software in these systems must conserve power.
• Processor hogs— Computing that requires large amounts of CPU time can complicate the response problem.
• Cost— Reducing the cost of the hardware is a concern in many embedded-system projects; software often operates on hardware that is barely adequate for the job.
• Embedded systems have a microprocessor and a memory. Some have a serial port or network connection. They usually do not have keyboards, screens, or disk drives.
2.7.1 Components of an Embedded System
A computer is a system that has the following or more components.
• A microprocessor
• A large memory comprising the following two kinds:
  a. Primary memory (semiconductor memories — RAM, ROM and fast accessible caches)
  b. Secondary memory (magnetic memory located in hard disks, diskettes and cartridge tapes, and optical memory in CD-ROMs)
• Input units like keyboard, mouse, digitizer, scanner etc.
• Output units like video monitor, printer etc.
• Networking units like Ethernet card, front-end processor-based drivers, etc.
• I/O units like a modem, fax-cum-modem, etc.

An embedded system is one that has computer hardware with software embedded in it as one of its most important components. It is a dedicated computer-based system for an application(s) or product. It may be either an independent system or a part of a larger system. As its software usually embeds in ROM (Read Only Memory), it does not need secondary memories as in a computer. An embedded system has three main components:
• It has hardware. Figure 2.3 shows the units in the hardware of an embedded system.
• It has main application software. The application software may perform concurrently a series of tasks or multiple tasks.
• It has a real-time operating system (RTOS) that supervises the application software and provides a mechanism to let the processor run a process as per scheduling and do the context switch between the various processes. The RTOS defines the way the system works. It organizes access to a resource in sequence of the series of tasks of the system. It schedules their working and execution by following a plan to control the latencies and to meet the deadlines. Latency refers to the waiting period between running the codes of a task and the instance at which the need for the task arises.
• It sets the rules during the execution of the application software. A small-scale embedded system may not need an RTOS.

An embedded system has software designed keeping in view three constraints:
i. Available system memory
ii. Available processor speed
iii. The need to limit power dissipation when running the system continuously in cycles of wait for events, run, stop and wake-up.
2.7.2 Programming the PIC in C:
Compilers produce a hex file that is downloaded into the ROM of the microcontroller. The size of the hex file produced by the compiler is one of the main factors to be considered while programming the microcontroller, because the microcontroller has limited on-chip ROM. Assembly language produces a hex file that is much smaller than that produced by C; on the other hand, programming the PIC in C is less time-consuming and much easier to write, but the size of the hex file produced is much larger than if we use assembly language, which may lead to an increase in memory size for even a small application and so decreases the application speed. Nevertheless, the following are some of the major reasons for writing C programs instead of assembly:
1. It is easier and less time-consuming to write than assembly.
2. C is easier to modify and update.
3. C code is portable to other microcontrollers with little modification.
2.8 EMBEDDED SYSTEM AND DEVELOPMENT TOOLS
• Embedded software development is typically done on a host machine, different from the target machine on which the software will eventually be shipped to customers.
• A tool chain for developing embedded software typically contains a cross-compiler, a cross-assembler, a linker/locator, and a method for loading the software into the target machine.
• A cross-compiler understands the same C language as a native compiler (with a few exceptions) but its output uses the instruction set of the target microprocessor.
• A cross-assembler understands an assembly language that is specific to your target microprocessor and outputs instructions for that microprocessor.
• A linker/locator combines separately compiled and assembled modules into an executable image. In addition it places code, data, startup code, constant strings, and so on at suitable addresses in ROM and RAM.
• Linker/locators produce output in a variety of formats; it is up to you to ensure that your linker/locator's output is compatible with the tools you use for loading the software into your target.
2.9 MICROCONTROLLER FOR EMBEDDED SYSTEMS
In the literature discussing microprocessors, we often see the term embedded system. Microprocessors and microcontrollers are widely used in embedded-system products. An embedded product uses a microprocessor (or microcontroller) to do one task and one task only. A printer is an example of an embedded system since the processor inside it performs only one task, namely getting the data and printing it.
Fig 2.3 Block Diagram for Microcontroller unit
CHAPTER 3
PIC MICROCONTROLLER
3.1. INTRODUCTION TO MICROCONTROLLER:
Microcontroller differs from microprocessor in many ways. First of all, the most important difference is its functionality. In order to operate a microprocessor, other components such as memory or components for receiving and sending data must be added to it externally. In short, microprocessor is the heart of the computer. On the other hand, microcontroller is designed to be all of that in one. No other external components are needed for its application because all the necessary peripherals are already built into it. Thus we save time and space needed to construct devices.
What is PIC and why you go for PIC?
PIC stands for Peripheral Interface Controller, as coined by Microchip Technology.
• PIC is a very popular microcontroller worldwide
• Microchip is the first manufacturer of 8-pin RISC MCUs
• Focus on high-performance, cost-effective, field-programmable embedded control solutions
3.1.1 Need for Microcontroller:
• Microcontroller is a general-purpose device which has an in-built CPU, memory and peripherals to make it act as a mini-computer
• Microcontroller has one or two operational codes for moving data from external memory to the CPU
• Microcontroller has many bit-handling instructions
• Microcontroller works faster than a microprocessor because of the rapid movement of bits within the chip
• Microcontroller can function as a computer with the addition of no external parts
3.2. CORE FEATURES:
High-performance RISC CPU:
• Only 35 single-word instructions to learn
• Operating speed: DC – 20 MHz clock input, DC – 200 ns instruction cycle
• Up to 8K x 14 words of Flash Program Memory
• Up to 368 x 8 bytes of Data Memory (RAM)
• Up to 256 x 8 bytes of EEPROM data memory
• Interrupt capability (up to 14 internal/external interrupt sources)
• Eight-level deep hardware stack
• Direct, indirect, and relative addressing modes
• Power-on Reset (POR)
• Power-up Timer (PWRT) and Oscillator Start-up Timer (OST)
• Watchdog Timer (WDT) with its own on-chip RC oscillator for reliable operation
• Programmable code protection
• Power-saving SLEEP mode
• Selectable oscillator options
• In-Circuit Serial Programming (ICSP) via two pins
• Only a single 5V source needed for programming capability
• In-Circuit Debugging via two pins
• Wide operating voltage range: 2.5V to 5.5V
• High sink/source current: 25 mA
• Commercial and industrial temperature ranges
• Low power consumption: < 2 mA typical @ 5V, 4 MHz; 20 µA typical @ 3V, 32 kHz; < 1 µA typical standby current
3.3. PERIPHERAL FATURES:
• Timer0: 8-bit timer/counter with 8-bit prescaler
• Timer1: 16-bit timer/counter with prescaler
• Timer2: 8-bit timer/counter with 8-bit period register, prescaler and postscaler
• Two Capture, Compare, PWM (CCP) modules:
  – Capture is 16-bit, max. resolution 12.5 ns
  – Compare is 16-bit, max. resolution 200 ns
  – PWM max. resolution is 10-bit
• 10-bit multi-channel Analog-to-Digital converter
• Synchronous Serial Port (SSP) with SPI (Master mode) and I2C (Master/Slave)
• USART/SCI with 9-bit address detection
• Parallel Slave Port (PSP) 8 bits wide, with external RD, WR and CS controls
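As an illustration of the CCP/PWM timing listed above, the sketch below applies the mid-range PWM relation, period = (PR2 + 1) · 4 · Tosc · (TMR2 prescale); the 20 MHz clock, PR2 = 249 and prescale of 1 are assumed example settings, not values used in this project:

```c
#include <assert.h>
#include <math.h>

/* PWM output frequency from the CCP period relation:
   period = (PR2 + 1) * 4 * Tosc * prescale
   =>  f_pwm = Fosc / (4 * (PR2 + 1) * prescale)              */
double pwm_freq_hz(double fosc_hz, unsigned pr2, unsigned prescale)
{
    return fosc_hz / (4.0 * (pr2 + 1) * prescale);
}
```

With Fosc = 20 MHz, PR2 = 249 and prescale 1, the PWM frequency works out to 20 MHz / 1000 = 20 kHz.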
3.4 ARCHITECTURE OF PIC 16F877A
Fig 3.1 Architecture of 16F877A
The complete architecture of the PIC 16F877A is shown in Fig 3.1, which gives details about the specifications of the PIC 16F877A. Fig 3.2 shows the complete pin diagram of the IC PIC 16F877A.

3.5 TABLE OF SPECIFICATIONS
| Device | Program Flash | Data Memory | Data EEPROM |
| PIC 16F877A | 8K | 368 Bytes | 256 Bytes |

3.6 PIN DIAGRAM OF PIC 16F877
| Pin Name | DIP Pin# | PLCC Pin# | QFP Pin# | I/O/P Type | Buffer Type | Description |
| OSC1/CLKIN | 13 | 14 | 30 | I | ST/CMOS(4) | Oscillator crystal input / external clock source input. |
| OSC2/CLKOUT | 14 | 15 | 31 | O | — | Oscillator crystal output. Connects to crystal or resonator in crystal oscillator mode. In RC mode, OSC2 outputs CLKOUT, which has 1/4 the frequency of OSC1 and denotes the instruction cycle rate. |
| MCLR/VPP/THV | 1 | 2 | 18 | I/P | ST | Master clear (Reset) input, programming voltage input or high-voltage test mode control. This pin is an active-low Reset to the device. |
| RA0/AN0 | 2 | 3 | 19 | I/O | TTL | PORTA is a bidirectional I/O port. RA0 can also be analog input 0. |
| RA1/AN1 | 3 | 4 | 20 | I/O | TTL | RA1 can also be analog input 1. |
| RA2/AN2/VREF- | 4 | 5 | 21 | I/O | TTL | RA2 can also be analog input 2 or the negative analog reference voltage. |
| RA3/AN3/VREF+ | 5 | 6 | 22 | I/O | TTL | RA3 can also be analog input 3 or the positive analog reference voltage. |
| RA4/T0CKI | 6 | 7 | 23 | I/O | ST | RA4 can also be the clock input to the Timer0 timer/counter. Output is open-drain type. |
| RA5/SS/AN4 | 7 | 8 | 24 | I/O | TTL | RA5 can also be analog input 4 or the slave select for the synchronous serial port. |
| RB0/INT | 33 | 36 | 8 | I/O | TTL/ST(1) | PORTB is a bidirectional I/O port with software-programmable weak pull-ups on all inputs. RB0 can also be the external interrupt pin. |
| RB1 | 34 | 37 | 9 | I/O | TTL | |
| RB2 | 35 | 38 | 10 | I/O | TTL | |
| RB3/PGM | 36 | 39 | 11 | I/O | TTL | RB3 can also be the low-voltage programming input. |
| RB4 | 37 | 41 | 14 | I/O | TTL | Interrupt-on-change pin. |
| RB5 | 38 | 42 | 15 | I/O | TTL | Interrupt-on-change pin. |
| RB6/PGC | 39 | 43 | 16 | I/O | TTL/ST(2) | Interrupt-on-change pin or in-circuit debugger pin. Serial programming clock. |
| RB7/PGD | 40 | 44 | 17 | I/O | TTL/ST(2) | Interrupt-on-change pin or in-circuit debugger pin. Serial programming data. |
| RC0/T1OSO/T1CKI | 15 | 16 | 32 | I/O | ST | PORTC is a bidirectional I/O port. RC0 can also be the Timer1 oscillator output or a Timer1 clock input. |
| RC1/T1OSI/CCP2 | 16 | 18 | 35 | I/O | ST | RC1 can also be the Timer1 oscillator input or Capture2 input / Compare2 output / PWM2 output. |
| RC2/CCP1 | 17 | 19 | 36 | I/O | ST | RC2 can also be the Capture1 input / Compare1 output / PWM1 output. |
| RC3/SCK/SCL | 18 | 20 | 37 | I/O | ST | RC3 can also be the synchronous serial clock input/output for both SPI and I2C modes. |
| RC4/SDI/SDA | 23 | 25 | 42 | I/O | ST | RC4 can also be the SPI Data In (SPI mode) or data I/O (I2C mode). |
| RC5/SDO | 24 | 26 | 43 | I/O | ST | RC5 can also be the SPI Data Out (SPI mode). |
| RC6/TX/CK | 25 | 27 | 44 | I/O | ST | RC6 can also be the USART asynchronous transmit or synchronous clock. |
| RC7/RX/DT | 26 | 29 | 1 | I/O | ST | RC7 can also be the USART asynchronous receive or synchronous data. |
| RD0/PSP0 | 19 | 21 | 38 | I/O | ST/TTL(3) | PORTD is a bidirectional I/O port, or a parallel slave port when interfacing to a microprocessor bus. |
| RD1/PSP1 | 20 | 22 | 39 | I/O | ST/TTL(3) | |
| RD2/PSP2 | 21 | 23 | 40 | I/O | ST/TTL(3) | |
| RD3/PSP3 | 22 | 24 | 41 | I/O | ST/TTL(3) | |
| RD4/PSP4 | 27 | 30 | 2 | I/O | ST/TTL(3) | |
| RD5/PSP5 | 28 | 31 | 3 | I/O | ST/TTL(3) | |
| RD6/PSP6 | 29 | 32 | 4 | I/O | ST/TTL(3) | |
| RD7/PSP7 | 30 | 33 | 5 | I/O | ST/TTL(3) | |
| RE0/RD/AN5 | 8 | 9 | 25 | I/O | ST/TTL(3) | PORTE is a bidirectional I/O port. RE0 can also be read control for the parallel slave port, or analog input 5. |
| RE1/WR/AN6 | 9 | 10 | 26 | I/O | ST/TTL(3) | RE1 can also be write control for the parallel slave port, or analog input 6. |
| RE2/CS/AN7 | 10 | 11 | 27 | I/O | ST/TTL(3) | RE2 can also be select control for the parallel slave port, or analog input 7. |
| VSS | 12, 31 | 13, 34 | 6, 29 | P | — | Ground reference for logic and I/O pins. |
| NC | — | 1, 17, 28, 40 | 12, 13, 33, 34 | — | — | These pins are not internally connected and should be left unconnected. |

Legend: I = input, O = output, I/O = input/output, P = power, ST = Schmitt Trigger input, TTL = TTL input, — = not used
Note 1: This buffer is a Schmitt Trigger input when configured as the external interrupt.
Note 2: This buffer is a Schmitt Trigger input when used in Serial Programming mode.
Note 3: This buffer is a Schmitt Trigger input when configured as general-purpose I/O and a TTL input when used in Parallel Slave Port mode.
Note 4: This buffer is a Schmitt Trigger input when configured in RC oscillator mode and a CMOS input otherwise.
3.8 PORTS DESCRIPTION:
Input/Output Ports

Some pins for these I/O ports are multiplexed with an alternate function for the peripheral features of the device. In general, when a peripheral is enabled, that pin may not be used as a general-purpose I/O pin.
3.8.1 PORT A and the TRIS A Register:

RA port pins have TTL input levels and full CMOS output drivers. Other PORTA pins are multiplexed with analog inputs and the analog VREF input.

3.8.1 PORT A FUNCTIONS:

| Name | Bit# | Buffer | Function |
| RA0/AN0 | bit0 | TTL | Input/output or analog input |
| RA1/AN1 | bit1 | TTL | Input/output or analog input |
| RA2/AN2 | bit2 | TTL | Input/output or analog input |
| RA3/AN3/VREF | bit3 | TTL | Input/output or analog input or VREF |
| RA4/T0CKI | bit4 | ST | Input/output or external clock input for Timer0. Output is open-drain type |
| RA5/SS/AN4 | bit5 | TTL | Input/output or slave select input for the synchronous serial port or analog input |

Legend: TTL = TTL input, ST = Schmitt Trigger input
3.8.2 PORT B and the TRIS B Register:

Four of the PORTB pins, RB7:RB4, have an interrupt-on-change feature. This interrupt can wake the device from SLEEP. The user, in the interrupt service routine, can clear the interrupt in the following manner:
a) Any read or write of PORTB. This will end the mismatch condition.
b) Clear flag bit RBIF.
This interrupt-on-mismatch feature, together with software-configurable pull-ups on these four pins, allows easy interface to a keypad and makes it possible for wake-up on key depression.

3.8.2 PORT B FUNCTIONS:

| Name | Bit# | Buffer | Function |
| RB0/INT | bit0 | TTL/ST(1) | Input/output pin or external interrupt input. Internal software-programmable weak pull-up |
| RB1 | bit1 | TTL | Input/output pin. Internal software-programmable weak pull-up |
| RB2 | bit2 | TTL | Input/output pin. Internal software-programmable weak pull-up |
| RB3/PGM | bit3 | TTL | Input/output pin or programming pin in LVP mode. Internal software-programmable weak pull-up |
| RB4 | bit4 | TTL | Input/output pin (with interrupt-on-change). Internal software-programmable weak pull-up |
| RB5 | bit5 | TTL | Input/output pin (with interrupt-on-change). Internal software-programmable weak pull-up |
| RB6/PGC | bit6 | TTL/ST(2) | Input/output pin (with interrupt-on-change) or in-circuit debugger pin. Internal software-programmable weak pull-up. Serial programming clock |
| RB7/PGD | bit7 | TTL/ST(2) | Input/output pin (with interrupt-on-change) or in-circuit debugger pin. Internal software-programmable weak pull-up. Serial programming data |

Legend: TTL = TTL input, ST = Schmitt Trigger input
Note 1: This buffer is a Schmitt Trigger input when configured as the external interrupt.
Note 2: This buffer is a Schmitt Trigger input when used in serial programming mode.
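The RB7:RB4 mismatch condition described above can be modelled on a host PC; the helper below compares the current port value against the value latched on the last read of PORTB, raising an RBIF-style flag when any of the upper four bits differ:

```c
#include <assert.h>

/* 1 if an interrupt-on-change condition is pending on RB7:RB4,
   i.e. any of the upper four bits differ from the latched read. */
int rb_change_pending(unsigned char portb_now, unsigned char portb_latched)
{
    return ((portb_now ^ portb_latched) & 0xF0) != 0;
}
```

Reading PORTB updates the latch, which is why any read or write of PORTB ends the mismatch condition.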
3.8.3 PORT C and the TRIS C Register:

Read-modify-write instructions (BSF, BCF, XORWF) with TRISC as destination should be avoided when peripheral functions are enabled. The user should refer to the corresponding peripheral section for the correct TRIS bit settings.

3.8.3 PORT C FUNCTIONS:

| Name | Bit# | Buffer Type | Function |
| RC0/T1OSO/T1CKI | bit0 | ST | Input/output port pin or Timer1 oscillator output / Timer1 clock input |
| RC1/T1OSI/CCP2 | bit1 | ST | Input/output port pin or Timer1 oscillator input or Capture2 input / Compare2 output / PWM2 output |
| RC2/CCP1 | bit2 | ST | Input/output port pin or Capture1 input / Compare1 output / PWM1 output |
| RC3/SCK/SCL | bit3 | ST | RC3 can also be the synchronous serial clock for both SPI and I2C modes |
| RC4/SDI/SDA | bit4 | ST | RC4 can also be the SPI Data In (SPI mode) or data I/O (I2C mode) |
| RC5/SDO | bit5 | ST | Input/output port pin or Synchronous Serial Port data output |
| RC6/TX/CK | bit6 | ST | Input/output port pin or USART asynchronous transmit or synchronous clock |
| RC7/RX/DT | bit7 | ST | Input/output port pin or USART asynchronous receive or synchronous data |

Legend: ST = Schmitt Trigger input
3.8.4 PORT D and TRIS D Registers:
This section is not applicable to the 28-pin devices. PORTD is an 8-bit port with Schmitt Trigger input buffers. Each pin is individually configurable as an input or output. PORTD can be configured as an 8-bit wide microprocessor port (parallel slave port) by setting control bit PSPMODE (TRISE<4>). In this mode, the input buffers are TTL.

3.8.4 PORT D FUNCTIONS:

| Name | Bit# | Buffer Type | Function |
| RD0/PSP0 | bit0 | ST/TTL(1) | Input/output port pin or parallel slave port bit 0 |
| RD1/PSP1 | bit1 | ST/TTL(1) | Input/output port pin or parallel slave port bit 1 |
| RD2/PSP2 | bit2 | ST/TTL(1) | Input/output port pin or parallel slave port bit 2 |
| RD3/PSP3 | bit3 | ST/TTL(1) | Input/output port pin or parallel slave port bit 3 |
| RD4/PSP4 | bit4 | ST/TTL(1) | Input/output port pin or parallel slave port bit 4 |
| RD5/PSP5 | bit5 | ST/TTL(1) | Input/output port pin or parallel slave port bit 5 |
| RD6/PSP6 | bit6 | ST/TTL(1) | Input/output port pin or parallel slave port bit 6 |
| RD7/PSP7 | bit7 | ST/TTL(1) | Input/output port pin or parallel slave port bit 7 |

Legend: ST = Schmitt Trigger input, TTL = TTL input
Note 1: Input buffers are Schmitt Triggers when in I/O mode and TTL buffers when in Parallel Slave Port mode.
3.8.5 PORT E and TRIS E Register:

PORTE has three pins: RE0/RD/AN5, RE1/WR/AN6 and RE2/CS/AN7, which are individually configurable as inputs or outputs. These pins have Schmitt Trigger input buffers. The PORTE pins become control inputs for the microprocessor port when bit PSPMODE (TRISE<4>) is set. In this mode, the user must make sure that the TRISE<2:0> bits are set (pins are configured as digital inputs) and that ADCON1 is configured for digital I/O. In this mode the input buffers are TTL.

PORTE pins are multiplexed with analog inputs. When selected as an analog input, these pins will read as '0's. TRISE controls the direction of the RE pins, even when they are being used as analog inputs. The user must make sure to keep the pins configured as inputs when using them as analog inputs.

3.8.5 PORT E FUNCTIONS:

| Name | Bit# | Buffer Type | Function |
| RE0/RD/AN5 | bit0 | ST/TTL(1) | Input/output port pin or read control input in parallel slave port mode or analog input: 1 = not a read operation; 0 = read operation (reads PORTD register if chip selected) |
| RE1/WR/AN6 | bit1 | ST/TTL(1) | Input/output port pin or write control input in parallel slave port mode or analog input: 1 = not a write operation; 0 = write operation (writes PORTD register if chip selected) |
| RE2/CS/AN7 | bit2 | ST/TTL(1) | Input/output port pin or chip select control input in parallel slave port mode or analog input: 1 = device is not selected; 0 = device is selected |

Legend: ST = Schmitt Trigger input, TTL = TTL input
Note 1: Input buffers are Schmitt Triggers when in I/O mode and TTL buffers when in Parallel Slave Port mode.
3.9 MEMORY ORGANIZATION:
There are three memory blocks in each of the PIC16F877 MCUs. The program memory and data memory have separate buses so that concurrent access can occur. Additional information on device memory may be found in the PICmicro Mid-Range Reference Manual.
Fig 3.3 Memory Organization
3.9.1 Program memory organization:
The PIC16F877 devices have a 13-bit program counter capable of addressing 8K x 14 words of FLASH program memory. The RESET vector is at 0000h and the interrupt vector is at 0004h. Accessing a location above the physically implemented address will cause a wraparound.
3.9.2 DATA memory organization:
The data memory is partitioned into multiple banks which contain the General Purpose Registers and the Special Function Registers. Bits RP1 (STATUS<6>) and RP0 (STATUS<5>) are the bank select bits:

| RP1:RP0 | Bank |
| 00 | 0 |
| 01 | 1 |
| 10 | 2 |
| 11 | 3 |

Each bank extends up to 7Fh (128 bytes).
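The bank-selection rule can be illustrated with a small helper that maps RP1:RP0 and a 7-bit in-bank offset to an absolute data-memory address (each bank spanning 0x00–0x7F within its 0x80-byte window):

```c
#include <assert.h>

/* Absolute data-memory address from the STATUS bank-select bits
   and the 7-bit offset used in an instruction. */
unsigned banked_addr(unsigned rp1, unsigned rp0, unsigned offset)
{
    unsigned bank = (rp1 << 1) | rp0;   /* 0..3 */
    return bank * 0x80u + (offset & 0x7Fu);
}
```

For example, offset 0x0C with RP1:RP0 = 01 selects address 0x8C in Bank 1.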
CHAPTER 4
SOFTWARE USED
4.1 INTRODUCTION TO EMBEDDED 'C':

Embedded C is an extension of the C language. An embedded C compiler provides many built-in functions; for example, the COM port can be accessed easily through the compiler's built-in functions, and the commands for the fusing (programming) kit are sent from the C program according to the user's wishes.

4.1.1 HI-TEC 'C'

HI-TEC C is a set of software that translates a program written in the C language into executable machine code. Versions are available which compile the program for operation under the host operating system. Some of the HI-TEC C features are:
• A simple batch file will compile, assemble and link an entire program
• The compiler performs strong type checking and issues warnings about various constructs which may represent programming errors
• The generated code is extremely small and fast in execution
• A full run-time library is provided, implementing all standard C input/output and other functions
• The source code for all run-time routines is provided
• A powerful general-purpose macro-assembler is provided
• Programs may be generated to execute under the host operating system or customized for installation in ROM
4.2 PIC TOOLS

The tools used with the PIC microcontroller are given below.

MPLAB

MPLAB provides the following functions:
• Create and edit source files
• Group files into projects
• Debug source code
• Debug executable logic using the simulator

Tools in MPLAB are:
• MPLAB development tool
• MPLAB project manager
• MPLAB editor
• MPLAB-SIM simulator
• MPLAB-ICE emulator
4.3 VARIATION CODE USED:
#include <pic.h>
#include <lcd.h>

void adc_init(void);
void adc0(void);
void adc1(void);
void hex_dec_cur(unsigned char);
void set_mode(void);

static bit relay @ ((unsigned)&PORTE*8+1);
static bit set   @ ((unsigned)&PORTC*8+0);
static bit ent   @ ((unsigned)&PORTC*8+1);
static bit inc   @ ((unsigned)&PORTC*8+2);
static bit dec   @ ((unsigned)&PORTC*8+3);

unsigned char volt, curr, j, curr_high, curr_low, volt_low, volt_high;
unsigned int temp0, temp1;

void main()
{
    TRISC = 0x0f;
    relay = 0;
    lcd_init();
    adc_init();

    command(0x80); lcd_condis("Volt & Curr Moni",16);
    command(0xc0); lcd_condis(" V:      C:     ",16);
    del();

    EEPROM_READ(12); curr_high = EEDATA;   // stored trip limits
    EEPROM_READ(13); curr_low  = EEDATA;
    EEPROM_READ(14); volt_high = EEDATA;
    EEPROM_READ(15); volt_low  = EEDATA;

    while(1)
    {
        temp0 = 0;
        adc0();                            // measure voltage
        command(0xc3);
        hex_dec(volt);

        temp0 = 0;
        adc1();                            // measure current
        command(0xca);
        hex_dec_cur(curr);

        if (volt > volt_high || volt < volt_low ||
            curr > curr_high || curr < curr_low)
            relay = 1;                     // trip the relay

        if (!set) set_mode();
    } // while
} // main

void adc_init()
{
    ADCON1 = 0x02;   // 8-channel, left justified, ADC control
    TRISA  = 0xff;   // select port A as input port
    TRISE  = 0x00;
}

void adc0()
{
    for (j = 0; j < 10; j++)
    {
        ADCON0 = 0x00;           // channel select (Ch: 0)
        ADON   = 1;              // ADC module ON
        delay(255);
        ADCON0 = 0x05;           // select the channel and set the GO/DONE bit
        while (ADCON0 != 0x01);  // check whether conversion finished or not
        temp1 = ADRESH;          // 8-bit value taken into one variable
        temp0 = temp0 + temp1;
    }
    volt = temp0 / 10;
}

void adc1()
{
    for (j = 0; j < 10; j++)
    {
        ADCON0 = 0x08;           // channel select (Ch: 1)
        ADON   = 1;              // ADC module ON
        delay(255);
        ADCON0 = 0x0d;           // select the channel and set the GO/DONE bit
        while (ADCON0 != 0x09);  // check whether conversion finished or not
        temp1 = ADRESH;          // 8-bit value taken into one variable
        temp0 = temp0 + temp1;
    }
    curr = temp0 / 10;
}

void hex_dec_cur(unsigned char val)
{
    unsigned char h, hr, t, o;   // digit work variables
    h  = val / 100;
    hr = val % 100;
    t  = hr / 10;
    o  = hr % 10;
    lcd_disp(h + 0x30);
    lcd_disp(t + 0x30);
    lcd_disp('.');
    lcd_disp(o + 0x30);
}

void set_mode()
{
    command(0x80); lcd_condis("    Set Mode    ",16);

    command(0xC0); lcd_condis("Hig Curr :      ",16);
    j = 0;
    while (ent)
    {
        command(0xca); command(0x0f);
        if (!inc)      { j++; if (j >= 255) j = 0;   hex_dec_cur(j); }
        else if (!dec) { j--; if (j >= 255) j = 255; hex_dec_cur(j); }
        delay(15000);
    }
    EEPROM_WRITE(12, j); delay(2000);
    EEPROM_READ(12); curr_high = EEDATA;
    del();

    command(0xC0); lcd_condis("Low Curr :      ",16);
    j = 0;
    while (ent)
    {
        command(0xca); command(0x0f);
        if (!inc)      { j++; if (j >= 255) j = 0;   hex_dec_cur(j); }
        else if (!dec) { j--; if (j >= 255) j = 255; hex_dec_cur(j); }
        delay(15000);
    }
    EEPROM_WRITE(13, j); delay(2000);
    EEPROM_READ(13); curr_low = EEDATA;
    del();

    command(0xC0); lcd_condis("Hig Volt :      ",16);
    j = 0;
    while (ent)
    {
        command(0xca); command(0x0f);
        if (!inc)      { j++; if (j >= 255) j = 0;   hex_dec(j); }
        else if (!dec) { j--; if (j >= 255) j = 255; hex_dec(j); }
        delay(15000);
    }
    EEPROM_WRITE(14, j); delay(2000);
    EEPROM_READ(14); volt_high = EEDATA;
    del();

    command(0xC0); lcd_condis("Low Volt :      ",16);
    j = 0;
    while (ent)
    {
        command(0xca); command(0x0f);
        if (!inc)      { j++; if (j >= 255) j = 0;   hex_dec(j); }
        else if (!dec) { j--; if (j >= 255) j = 255; hex_dec(j); }
        delay(15000);
    }
    EEPROM_WRITE(15, j); delay(2000);
    EEPROM_READ(15); volt_low = EEDATA;
    del();

    command(0x80); lcd_condis("Volt & Curr Moni",16);
    command(0xc0); lcd_condis(" V:      C:     ",16);
}
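The averaging performed inside adc0() and adc1() can be checked on a host PC with the portable sketch below; the ADRESH read-outs are replaced by an array of assumed 8-bit samples:

```c
#include <assert.h>

/* Host-side model of the firmware's ADC routine: sum ten 8-bit
   readings in a wider accumulator, then take the integer mean,
   exactly as temp0/temp1/volt are used on the PIC. */
unsigned char adc_average10(const unsigned char samples[10])
{
    unsigned int sum = 0;   /* wider than 8 bits, like temp0 */
    unsigned char i;
    for (i = 0; i < 10; i++)
        sum += samples[i];
    return (unsigned char)(sum / 10);
}
```

For a ramp of samples 100..109 the integer mean is 1045 / 10 = 104, matching the truncating division the firmware performs.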
CHAPTER 5
HARDWARE IMPLEMENTED
5.1 POWER SUPPLY AND ITS BLOCK DIAGRAM:
The AC voltage, typically 220V rms, is connected to a transformer, which steps that AC voltage down to the level of the desired DC output. A diode rectifier then provides a full-wave rectified voltage that is initially filtered by a simple capacitor filter to produce a DC voltage. This resulting DC voltage usually has some ripple or AC voltage variation. A regulator circuit removes the ripple and also maintains the same DC value even if the input DC voltage varies or the load connected to the output DC voltage changes. This voltage regulation is usually obtained using one of the popular voltage regulator IC units.
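The residual ripple mentioned above can be estimated with the standard full-wave relation Vr ≈ I / (2 f C); the load current and capacitor value below are assumed purely for illustration:

```c
#include <assert.h>
#include <math.h>

/* Approximate peak-to-peak ripple of a full-wave rectified,
   capacitor-filtered supply: Vr = I_load / (2 * f_mains * C). */
double ripple_volts(double i_load_a, double f_mains_hz, double c_farad)
{
    return i_load_a / (2.0 * f_mains_hz * c_farad);
}
```

For an assumed 0.5 A load, 50 Hz mains and a 1000 µF filter capacitor, Vr ≈ 0.5 / (2 × 50 × 0.001) = 5 V, which the downstream regulator then removes.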
Fig 5.1 Power supply block
5.2 WORKING PRINCIPLE OF THE BLOCK:

Transformer:
The potential transformer steps down the power supply voltage from the (0-230V) level to the (0-6V) level. The secondary of the potential transformer is connected to the precision rectifier, which is constructed with the help of an op-amp. The advantage of using a precision rectifier is that it gives the peak voltage as a DC output, whereas ordinary rectifier circuits give only the RMS output.
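Assuming the 230 V : 6 V potential-transformer ratio stated above, the mains voltage can be recovered from the measured secondary voltage as a simple scaling:

```c
#include <assert.h>
#include <math.h>

/* Refer the PT secondary reading back to the primary side,
   assuming the 230 V : 6 V ratio used in this design. */
double mains_from_pt(double pt_volts)
{
    return pt_volts * (230.0 / 6.0);
}
```

A full-scale 6 V secondary reading corresponds to 230 V on the mains side; 3 V corresponds to 115 V.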
Bridge rectifier:
When four diodes are connected as shown in the figure, the circuit is called a bridge rectifier. The input to the circuit is applied to the diagonally opposite corners of the network, and the output is taken from the remaining two corners. The current flow through RL is always in the same direction; in flowing through RL this current develops a voltage corresponding to the waveform shown. Since current flows through the load (RL) during both half cycles of the applied voltage, this bridge rectifier is a full-wave rectifier.
The maximum voltage that appears across the load resistor is nearly, but never exceeds, 500 volts.
Fig. 5.2 Voltage Regulator for 5 V
Fig. 5.3 Voltage Regulator for 12 V
A fixed three-terminal voltage regulator has an unregulated dc input voltage, Vi, applied to one input terminal, a regulated dc output voltage, Vo, from a second terminal, with the third terminal connected to ground. The series 78 regulators provide fixed positive regulated voltages from 5 to 24 volts. Similarly, the series 79 regulators provide fixed negative regulated voltages from 5 to 24 volts. • For ICs, microcontroller, LCD --------- 5 volts
• For alarm circuit, op-amp, relay circuits ---------- 12 volts
5.3 VOLTAGE MEASUREMENT:
This circuit is designed to monitor the supply voltage. The supply voltage to be monitored is stepped down by the potential transformer; here a 0-6V potential transformer is used. The stepped-down voltage is rectified by the precision rectifier, a configuration obtained with an operational amplifier in order to have a circuit behaving like an ideal diode or rectifier.
Fig. 5.4 Voltage Measurement Circuit
5.4 CURRENT MEASUREMENT:
This circuit is designed to monitor the supply current. The supply current to be monitored is stepped down by the current transformer. The stepped-down current is converted to a voltage with the help of a shunt resistor. The converted voltage is then rectified by the precision rectifier, a configuration obtained with an operational amplifier in order to have a circuit behaving like an ideal diode or rectifier.
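A sketch of the arithmetic implied above, with an assumed 10 Ω shunt and a 100:5 (20:1) current transformer — both values illustrative, not taken from this project:

```c
#include <assert.h>
#include <math.h>

/* Load current from the voltage measured across the CT's shunt:
   Ohm's law on the secondary, then refer through the CT ratio. */
double load_current_a(double v_shunt, double r_shunt_ohm, double ct_ratio)
{
    double i_secondary = v_shunt / r_shunt_ohm;  /* Ohm's law */
    return i_secondary * ct_ratio;               /* refer to primary */
}
```

For 1 V across the 10 Ω shunt, the secondary carries 0.1 A, so the primary (load) current is 0.1 × 20 = 2 A.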
Fig. 5.5 Current Measurement Circuit
5.5 RELAY:

Fig. Relay types

Fig. Relay internal connection
The relay's switch connections are usually labeled COM, NC and NO:
• COM = Common, always connect to this. It is the moving part of the switch.
• NC = Normally Closed, COM is connected to this when the relay coil is off.
• NO = Normally Open, COM is connected to this when the relay coil is on.
5.5.1 Relay Driver Circuit:

When the microcontroller output is logic 1, transistor Q1 is turned ON and pulls the base of transistor Q2 low, so Q2 is OFF and the relay is in the OFF state. When the output is logic 0, Q1 is OFF, base drive reaches Q2, Q2 turns ON and the relay is turned ON.

| Microcontroller or PC | Transistor Q1 | Transistor Q2 | Relay |
| 1 | ON | OFF | OFF |
| 0 | OFF | ON | ON |

Fig. 5.6 Relay Circuit
5.5.2 Choice of Relay: You need to consider several features when choosing a relay:
1. Physical size and pin arrangement:
If you are choosing a relay for an existing PCB you will need to ensure that its dimensions and pin arrangement are suitable. You should find this information in the supplier's catalogue.
2. Coil voltage:

The relay's coil voltage rating must suit the circuit powering the coil.
3. Coil resistance:
The circuit must be able to supply the current required by the relay coil. You can use Ohm's law to calculate the current: Relay coil current = Supply voltage/Coil resistance.
4. Switch ratings (voltage and current):

The relay's switch contacts must be suitable for the circuit they are to control.
5. Switch contact arrangement (SPDT, DPDT):
Most relays are SPDT or DPDT which are often described as "single pole changeover" (SPCO) or "double pole changeover" (DPCO). For further information please see the page on switches.
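As a worked example of the coil-current formula in item 3 above (the 12 V and 400 Ω figures are illustrative, not taken from this design):

```latex
I_{coil} = \frac{V_{supply}}{R_{coil}} = \frac{12\ \mathrm{V}}{400\ \Omega} = 0.03\ \mathrm{A} = 30\ \mathrm{mA}
```

So the transistor driving such a coil must be able to sink at least 30 mA.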
PROTECTION DIODES FOR RELAY: Transistors and ICs (chips) must be protected from the brief high voltage 'spike' produced when the relay coil is switched off. The diagram shows how a signal diode (e.g. 1N4148) is connected across the relay coil to provide this protection.
Advantages of relays:
• Relays can switch AC and DC, transistors can only switch DC.
• Relays can switch high voltages, transistors cannot.
• Relays are a better choice for switching large currents (> 5A).
• Relays can switch many contacts at once.
Disadvantages of relays:
• Relays are bulkier than transistors for switching small currents.
• Relays cannot switch rapidly.
• Relays use more power due to the current flowing through their coil.
• Relays require more current than many chips can provide, so a transistor may be needed to switch the current for the relay's coil.
5.6 LIQUID CRYSTAL DISPLAY (LCD): One polarizer each is pasted outside the two glass panels. These polarizers would rotate the light rays passing through them to a definite angle, in a particular direction. When the LCD is in the off state, light rays are rotated by the two polarizers and the liquid crystal, such that the light rays come out of the LCD without any orientation, and hence the LCD appears transparent. When sufficient voltage is applied to the electrodes, the liquid crystal molecules would be aligned in a specific direction. The light rays passing through the
LCD would be rotated by the polarizers, which would result in activating/highlighting the desired characters. The LCDs are lightweight, with only a few millimeters thickness. Since the LCDs consume less power, they are compatible with low power electronic circuits, and can be powered for long durations. The LCD does not generate light, so external light is needed to read the display; by using backlighting, reading is possible in the dark. The LCDs have a long life and a wide operating temperature range. Changing the display size or the layout size is relatively simple, which makes the LCDs more customer friendly.

The LCDs used exclusively in watches, calculators and measuring instruments are the simple seven-segment displays, having a limited amount of numeric data. The recent advances in technology have resulted in better legibility, more information displaying capability and a wider temperature range. These have resulted in the LCDs being extensively used in telecommunications and entertainment electronics. The LCDs have even started replacing the cathode ray tubes (CRTs) used for the display of text and graphics, and also in small TV applications.

Crystalonics dot-matrix (alphanumeric) liquid crystal displays are available in TN and STN types, with or without backlight. The use of C-MOS LCD controller and driver ICs results in low power consumption. These modules can be interfaced with a 4-bit or 8-bit microprocessor/microcontroller.
5.6.1 Features of LCD
The built-in controller IC has the following features:
• Corresponds to high speed MPU interface (2 MHz)
• 80 x 8 bit display RAM (80 characters max)
• 9,920-bit character generator ROM for a total of 240 character fonts: 208 character fonts (5 x 8 dots), 32 character fonts (5 x 10 dots)
• 64 x 8 bit character generator RAM: 8 character fonts (5 x 8 dots), 4 character fonts (5 x 10 dots)
• Programmable duty cycles: 1/8 for one line of 5 x 8 dots with cursor; 1/11 for one line of 5 x 10 dots with cursor; 1/16 for two lines of 5 x 8 dots with cursor
• Wide range of instruction functions: display clear, cursor home, display on/off, cursor on/off, display character blink, cursor shift, display shift
• Automatic reset circuit, which initializes the controller/driver ICs after power on
5.6.2 Applications:
Personal computers, word processors, facsimiles, telephones, etc.
5.7 KEYPAD
Keypad is used to enter the predefined values of the power transformer. Keypad with four keys is employed. The operations of the keys are to increment and decrement the values to be set.
CHAPTER 6
CONCLUSION

This system provides protection for transformers against possible faults. The major faults on transformers occur due to short circuits in the transformers or in their connections. The basic system used for protection against these faults is the differential relay scheme. Protection of power transformers is a big challenge nowadays. With the help of a microcontroller-based relay, protection of the transformer is performed very quickly and accurately. This system provides better and safer protection than the other methods which are currently in use. The advantages of this system over the current methods in use are fast response, better isolation and accurate detection of the fault. This system also overcomes drawbacks in the existing systems such as maintenance and response time.
| https://www.scribd.com/doc/91714262/Power-Transformer-Protection-Using-Micro-Controller-Based-Relay | CC-MAIN-2017-30 | refinedweb | 7251 | 53.1 |
Icon Button State
On 03/08/2018 at 09:25, xxxxxxxx wrote:
Is there a way to modify the state of an icon to keep the button "pressed" while at an on state?
I've made a script that hides and reveals things. But I want the icon state to remain highlighted when it's either on or off. Just like how the Selection Tools and most icons do already in the UI. Is that possible with a script icon?
Thanks!
On 06/08/2018 at 02:02, xxxxxxxx wrote:
Hi,
no, currently there's no way to accomplish this for a script. But you can quite easily convert your script into a CommandData plugin, where you can then use the GetState() function. While the conversion from a script to a CommandData is actually simple (basically the script code just goes into Execute()), you may also consider using the Script Converter from Niklas Rosenstein's quick prototyping toolset, featured on our blog a while ago.
And now that Cinema 4D R20 has been announced, I can also say that with R20, Python scripts in the Script Manager will also get the ability to set the menu state.
On 06/08/2018 at 07:07, xxxxxxxx wrote:
Perfect! Thanks for the help Andreas
On 13/08/2018 at 11:33, xxxxxxxx wrote:
Hey Motion4D did you succeed?
I am also trying to set a highlight toggle for a python script icon based on its state... I managed to convert it into a plugin but can't go any further... I found a few similar topics around the web but no real solution...
If you have found a way I'd be glad to know it
Cheers
On 14/08/2018 at 01:39, xxxxxxxx wrote:
Hi Shallow_Red, first of all, welcome to the plugin cafe community!
As Andreas explained in his previous post, you have to use GetState to define your behavior.
Here is a quick plugin example which enables the icon only when there is an object selected.
import c4d

# Be sure to use a unique ID
PLUGIN_ID = 1000001

class MyCommandDataPlugin(c4d.plugins.CommandData):

    def Execute(self, doc):
        print 'Execute'
        return True

    def GetState(self, doc):
        # If we get an object selected we can click on the icon
        if doc.GetActiveObject():
            return c4d.CMD_ENABLED
        else:
            return False

if __name__ == "__main__":
    c4d.plugins.RegisterCommandPlugin(id=PLUGIN_ID, str="MyCommandDataPlugin",
                                      help="MyCommandDataPlugin", info=0,
                                      dat=MyCommandDataPlugin(), icon=None)
If you have any questions please, let me know :)
Cheers,
Maxime!
On 16/08/2018 at 09:00, xxxxxxxx wrote:
Hi Maxime,
Thanks for your reply, it helped a lot!
What I was looking for was to return either c4d.CMD_ENABLED|c4d.CMD_VALUE when the plugin is on and c4d.CMD_ENABLED when off, and I found a way thanks to your link.
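For anyone else landing here, a minimal sketch of that checked/unchecked logic. The CMD_ENABLED/CMD_VALUE constants below are stand-ins so the snippet runs outside Cinema 4D; in a real plugin they come from the c4d module, and the return value goes out of CommandData.GetState().

```python
# Stand-in flag values; in Cinema 4D use c4d.CMD_ENABLED and c4d.CMD_VALUE.
CMD_ENABLED = 1 << 0
CMD_VALUE = 1 << 1

class ToggleCommand:
    """Mimics a CommandData toggle: execute() flips it, get_state() reports it."""

    def __init__(self):
        self.on = False

    def execute(self):
        # A real plugin would also show/hide its objects here.
        self.on = not self.on
        return True

    def get_state(self):
        # Checked: highlighted icon. Unchecked: plain but still clickable icon.
        if self.on:
            return CMD_ENABLED | CMD_VALUE
        return CMD_ENABLED
```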
Thanks again, I will try to search deeper before asking next time, but this really makes me want to dig deeper into Python.
Cheers | https://plugincafe.maxon.net/topic/10898/14347_icon-button-state/ | CC-MAIN-2020-10 | refinedweb | 491 | 71.55 |
I’ve been working on an application where I’m using ArangoDB’s WITHIN_RECTANGLE function to pull up documents within the current map bounds. The obvious problem there is that the current map bounds can be very very big.
Dumping the entire contents of your database every time the map moves sounded decidedly sub-optimal to me so I decided to calculate the area within the requested bounds using Turf.js and send back an error if it’s to big.
So far so good, but I wanted a nice way to display that error message as a notification right on the map. There are lots of ways to tackle that sort of thing, but given that this seemed very specific to the map, I thought I might take a stab at making it a mapbox-gl.js plugin.
The result is mapbox-gl-flash. Currently you would install it from github:
npm install --save mapbox-gl-flash
I’m using babel so I’ll use the ES2015 syntax and get a map going.
import mapboxgl from 'mapbox-gl' import Flash from 'mapbox-gl-flash' //This is mapbox's api token that it uses for it's examples mapboxgl.accessToken = 'pk.eyJ1IjoibWlrZXdpbGxpYW1zb24iLCJhIjoibzRCYUlGSSJ9.QGvlt6Opm5futGhE5i-1kw'; var map = new mapboxgl.Map({ container: 'map', // container id style: 'mapbox://styles/mapbox/streets-v8', //stylesheet location center: [-74.50, 40], // starting position zoom: 9 // starting zoom }); // And now set up flash: map.addControl(new Flash());
This sets up an element on the map that listens for a “mapbox.setflash” event.
Next the element that is listening has a class of .flash-message, so lets set up a little basic styling for it:
.flash-message { font-family: 'Ubuntu', sans-serif; position: relative; text-align: center; color: #fff; margin: 0; padding: 0.5em; background-color: grey; } .flash-message.info { background-color: DarkSeaGreen; } .flash-message.warn { background-color: Khaki; } .flash-message.error { background-color: LightCoral; }
With that done, let's fire a CustomEvent and see what it does.
document.dispatchEvent(new CustomEvent('mapbox.setflash', {detail: {message: "foo"}}))
Ruby on Rails has three different kinds of flash messages: info, warn and error. That seems pretty reasonable so I’ve implemented that here as well. We’ve already set up some basic styles for those classes above and we can apply one of those classes by adding another option to out custom event detail object:
document.dispatchEvent(new CustomEvent('mapbox.setflash', {detail: {message: "foo", info: true}})) document.dispatchEvent(new CustomEvent('mapbox.setflash', {detail: {message: "foo", warn: true}})) document.dispatchEvent(new CustomEvent('mapbox.setflash', {detail: {message: "foo", error: true}}))
These events add the specified class to the flash message.
One final thing I expect is for the flash message to fade out after a specified number of seconds. This is accomplished by adding a fadeout attribute:
document.dispatchEvent(new CustomEvent('mapbox.setflash', {detail: {message: "foo", fadeout: 3}}))
Lastly you can make the message go away by firing the event again with an empty string.
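If you find yourself dispatching these events a lot, a tiny wrapper keeps the call sites tidy. This helper is my own convenience code, not part of mapbox-gl-flash itself; the payload builder is split out so it's easy to test without a DOM:

```javascript
// Build the detail payload that mapbox-gl-flash listens for.
function buildFlashDetail(message, options) {
  options = options || {};
  var detail = { message: message };
  if (options.level) {
    detail[options.level] = true; // 'info' | 'warn' | 'error'
  }
  if (options.fadeout) {
    detail.fadeout = options.fadeout; // seconds until the message fades
  }
  return detail;
}

// Dispatch it on the document, where the Flash control is listening.
function flash(message, options) {
  document.dispatchEvent(
    new CustomEvent('mapbox.setflash', { detail: buildFlashDetail(message, options) })
  );
}
```

Then `flash('Area too big', { level: 'error', fadeout: 3 })` replaces the longhand dispatch above.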
With a little CSS twiddling I was able to get the nice user-friendly notification I had in mind to let people know why there is no more data showing up.
I’m pretty happy with how this turned out. Now I have a nice map specific notification that not only works in this project, but is going to be easy to add to future ones too. | https://mikewilliamson.wordpress.com/2016/03/04/flash-messages-for-mapbox-gl-js/ | CC-MAIN-2020-16 | refinedweb | 560 | 55.95 |
In our earlier post we learned about connecting Java to Microsoft Access. In today's post we shall learn about connecting Java to a MySQL database. For connecting Java to MySQL we need to install the MySQL ODBC driver. You can get the driver from the official website. After downloading the driver, install it on your system.
So you have done the primary step in connecting Java to MySQL. Now open Control Panel and go to Administrative Tools. There select Data Sources (ODBC). You get a dialog box which looks like this; select the Add option in it.
After selecting the Add option, search for "MySQL ODBC 5.2 ANSI Driver" and select it. It opens a dialog box to enter the details of the MySQL database.
In the Data Source Name field, enter the name of the data source. For TCP/IP Server give localhost and keep the port as default. Now enter the user and password details of the MySQL database. After entering the user and password details, click on Test. If the connection is OK, then in the Database field we can see the list of databases on the given server; select the desired one. With this we have created a connection between Java and MySQL.
Now we shall write the program, which is similar to the one we used for connecting Java with a Microsoft Access database.
import java.sql.*; public class sqldemo { public static void main(String arg[]) throws Exception { String url="jdbc:mysql://localhost/emp1"; //protocol:subprotocol:sourcefile String id="root"; //User name of the Mysql String pd="abcd"; //Password of the Mysql database ResultSet res; try { Class.forName("com.mysql.jdbc.Driver"); Connection con; con = DriverManager.getConnection(url,id,pd); Statement st = con.createStatement(); res = st.executeQuery("select * from emp"); while(res.next()) { System.out.println(res.getString(1)+" "+res.getString(2)); } } catch(Exception e) { System.out.print(e); } } }
Description: In the above program we have used the method forName(), which is present in the class "Class", to dynamically load the driver class. Next we have declared a string url that specifies the protocol, sub-protocol and data source which we are going to use.
Now we need to make a connection, which is done by using the method getConnection() present in the class DriverManager; it returns an object of the Connection class. In order to execute statements, first we need to create a Statement. For this we use the method createStatement() on the object of the Connection class, and it returns an object of the Statement class. For executing a query we use the method executeQuery(), invoked through the object of the Statement class. It returns the result set of the query, which is stored in an object of the ResultSet class. So the output of the above program is
OUTPUT:
To download the program click here:download
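As a side note, newer Java versions let you write the same flow with try-with-resources, which closes the ResultSet, Statement and Connection automatically even when a query throws. The sketch below is my variant of the tutorial's program, not tested against a live server here; the DB-touching method is kept separate so the class compiles and runs without MySQL installed (with a JDBC 4 driver on the classpath, the Class.forName call is no longer required):

```java
import java.sql.*;

public class SqlDemoModern {

    // Builds the JDBC URL: protocol:subprotocol://host/database
    static String jdbcUrl(String host, String database) {
        return "jdbc:mysql://" + host + "/" + database;
    }

    // Same query as above, but every resource is closed automatically.
    // Needs a running MySQL server and driver, so it is not called in main.
    static void printEmployees(String user, String password) throws SQLException {
        try (Connection con = DriverManager.getConnection(jdbcUrl("localhost", "emp1"), user, password);
             Statement st = con.createStatement();
             ResultSet res = st.executeQuery("select * from emp")) {
            while (res.next()) {
                System.out.println(res.getString(1) + " " + res.getString(2));
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(jdbcUrl("localhost", "emp1"));
    }
}
```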
| https://letusprogram.com/2013/11/26/connecting-java-to-mysql-database/ | CC-MAIN-2021-17 | refinedweb | 476 | 58.79 |
Read/Write streamline files¶
Overview¶
DIPY can read and write many different file formats. In this example we give a short introduction on how to use it for loading or saving streamlines.
import numpy as np from dipy.data import get_fnames from dipy.io.streamline import load_trk, save_trk from dipy.tracking.streamline import Streamlines
- Read/write streamline files with DIPY.
fname = get_fnames('fornix') print(fname) # Read Streamlines streams, hdr = load_trk(fname) streamlines = Streamlines(streams) # Save Streamlines save_trk("my_streamlines.trk", streamlines=streamlines, affine=np.eye(4))
- We also work on our HDF5 based file format which can read/write massive datasets (as big as the size of your free disk space). With Dpy we can support
- direct indexing from the disk
- memory usage always low
- extensions to include different arrays in the same file
Here is a simple example.
from dipy.io.dpy import Dpy dpw = Dpy('fornix.dpy', 'w')
Write many streamlines at once.
dpw.write_tracks(streamlines)
Write one track
dpw.write_track(streamlines[0])
or one track each time.
for t in streamlines: dpw.write_track(t) dpw.close()
Read streamlines directly from the disk using their indices
dpr = Dpy('fornix.dpy', 'r') some_streamlines = dpr.read_tracksi([0, 10, 20, 30, 100]) dpr.close() print(len(streamlines)) print(len(some_streamlines))
Example source code
You can download
the full source code of this example. This same script is also included in the dipy source distribution under the
doc/examples/ directory. | http://nipy.org/dipy/examples_built/streamline_formats.html | CC-MAIN-2019-26 | refinedweb | 241 | 61.22 |
Setting the Column Header in JTable
Setting the Column Header in JTable
... method and APIs that helps you in setting the column headers in JTable. For more... in setting the column headers in
JTable. First of all, program creates a table that has
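As a minimal, runnable illustration of the header-naming discussed in these snippets (the column names and rows below are invented sample data): the header captions come from the column identifiers of the TableModel, so setting them on a DefaultTableModel is enough, and a JTable built from that model shows them in its header.

```java
import javax.swing.table.DefaultTableModel;

public class HeaderDemo {

    // The strings passed as column identifiers become the header captions.
    static DefaultTableModel makeModel() {
        String[] columns = {"Name", "Age"};                // header text
        Object[][] rows = {{"Amit", 28}, {"Sara", 31}};    // sample data
        return new DefaultTableModel(rows, columns);
    }

    public static void main(String[] args) {
        DefaultTableModel model = makeModel();
        for (int c = 0; c < model.getColumnCount(); c++) {
            System.out.println("column " + c + " header: " + model.getColumnName(c));
        }
        // In a GUI you would then do: JTable table = new JTable(model);
    }
}
```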
jtable
the database and store it in jtable.now i have to make a checkbox column in each row and also a select all option in header but i am not able to do so.i am getting...
private Vector<String> header; //used to store data header
sum of all values in a column of jtable
,
is there a code to display the sum of all values in a column of a jtable namely CARTtbl...("Setting Cell Values in JTable");
JPanel panel = new JPanel();
Integer data...);
table = new JTable(model);
JTableHeader header = table.getTableHeader
Moving a Column in JTable
Moving a Column in JTable
This section describes, how to move a column in JTable... the first column of JTable. The moveColumn
method takes the index
Changing the Name of Column in a JTable
Changing the Name of Column in a JTable
... the name
of column in JTable component. You have learnt the JTable
containing ... the name of column, you must have to change the column header. For
changing the name create a header in jtable using java swing
how to create a header in jtable using java swing how to create a header in jtable using java swing
Create a Custom Cell Renderer in a JTable
components
and creation with column header, you will be able to create a custom... that can
be used to display the cells in a column. The JTable describes a specific
renderer to the particular column. For this the JTable invokes the table model
Setting an Icon with Text in a Column Head of JTable
Setting an Icon with Text in a Column Head of JTable...;
and setHeaderValue methods that helps you setting the icon and
text in column header... to
set an icon with text in a column head of JTable component. But what
Inserting a Column in JTable
Inserting a Column in JTable
...
to insert a column in JTable at a specified location. As, you have learnt
in previous... in JTable. So, in this case you must add a column always at
the append position
problem scrolling jtable
){public boolean isCellEditable(int row, int column) { return false;}};
table = new JTable(tableModel...);
table.setAutoResizeMode(JTable.AUTO_RESIZE_OFF);
header = table.getTableHeader
JTable Display Data From MySQL Database
column header's name, how can you show data into the table.
How to create... should be kept
into JScrollPane.
How to add column header's name ?
However, you can set the column header's name in various ways but, the
simplest
Getting Cell Values in a JTable
values in a JTable component. It is a same as setting the cell values in
a JTable... data and columns with column
header and yellow background color. Here...; at a specified row
and column position in JTable. The cell values display
Creating a Scrollable JTable
a table having the large amount of
data and columns with column header... Creating a Scrollable JTable : Swing Tutorials ... section, you will learn how to
create a scrollable JTable component. When any
Setting Tool Tips on Cells in a JTable
Setting Tool Tips on Cells in a JTable
... in JTable. First of all, this program creates a JTable
having the data and column with column header. Here overrides the prepareRenderer()
method and sets the tool
Not able to filter rows of jtable with textfield in netbeans
", "Item", "Qty In Boxes"};
tblStock = new JTable(data,header);
sorter=new... of implementing row filter to the 4th column of jtable with a textfield but its...Not able to filter rows of jtable with textfield in netbeans
Disabling User Edits in a JTable Component
a table
having some data and column with column header. Here the isCellEditable...
Disabling User Edits in a JTable Component
... in all JTable
in every previous sections but now you will learn a JTable
restrict jtable editing
restrict jtable editing How to restrict jtable from editing or JTable disable editing?
public class MyTableModel extends AbstractTableModel {
public boolean isCellEditable(int row, int column
Setting the Margin Between Cells in a JTable
Setting the Margin Between Cells in a JTable
...
with column header. The column header have the column name. Then you
will have... the margin
(Gap) between cells in a JTable component. Here we are providing you
Shading Rows in JTable
background in the column header. After that you need to shade the alternate rows... Shading Rows in JTable
You have learnt about the JTable components and its
| http://roseindia.net/discussion/18235-Setting-the-Column-Header-in-JTable.html | CC-MAIN-2014-41 | refinedweb | 781 | 73.78 |
"Module "QtQuick" version 2.3 is not installed"
I have searched thru Google and searched these forums, with no success.
Just getting started on Qt and QtQuick. Installed qt-creator version 3.5.1 based on Qt 5.5.1. This is the one that my distro (CentOS 7) installs by default and is the only one listed as available in yum.
I started the sample "Hello World" application that is loaded as a default when a QtQuick application is started.
My main.qml is:
import QtQuick 2.3
import QtQuick.Window 2.2
Window {
    visible: true
    MouseArea {
        anchors.fill: parent
        onClicked: {
            Qt.quit();
        }
    }
    Text {
        text: qsTr("Hello World")
        anchors.centerIn: parent
    }
}
I can't seem to get it to run with the "Run" icon in Qt Creator, so I just ran it from the command line with
qmlviewer main.qml
I get:
Qml debugging is enabled. Only use this in a safe environment!
module "QtQuick" version 2.3 is not installed
When I installed Qt Creator, clearly something didn't install, but I've gone thru the install steps and don't see anything I might have missed.
Any idea what's going on?
How do I find the version of QtQuick (if any) that actually is installed? I've read that you can "import" a version that's earlier than the one installed and it will fall back to that earlier version. So, I tried 2.0, 2.1, 2.2 but no joy.
Thanks...
Eric
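One way to answer the "what is actually installed" question above is to look for the QtQuick module's qmldir file on disk. The install roots below are common defaults (CentOS uses /usr/lib64) and are guesses to probe, not guarantees; the script only reads the filesystem, so it is safe to run anywhere:

```shell
#!/bin/sh
# Probe the usual Qt 5 QML install roots for the QtQuick 2 module.
found=0
for root in /usr/lib64/qt5/qml /usr/lib/qt5/qml /usr/lib/x86_64-linux-gnu/qt5/qml; do
    if [ -f "$root/QtQuick.2/qmldir" ]; then
        echo "QtQuick 2 module found under $root"
        found=1
    fi
done
if [ "$found" -eq 0 ]; then
    echo "QtQuick 2 module not found (install qt5-qtdeclarative-devel or similar)"
fi
```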
Hi,
Maybe a silly question but: did you install the Qt development packages from your distribution ?
@SGaist >> did you install the Qt development packages from your distribution ?
Good evening, SGaist.
Not a silly question at all. I basically became root and said "yum install qt-creator" and it installed whatever it was set up to install. Yum has been pretty good (although not perfect) in the past about installing everything that's needed and resolving dependencies. I got Qt Creator and Qt Designer and enough other stuff so that almost all that's available in Qt Creator works, except (apparently) Qt Quick.
Was there something else I should have done to install the Qt development packages?
If there was, I haven't been able to find a reference to it in the documentation or in Google.
Check that you have also the QtDeclarative module installed. You should be able to check what is available with
yum whatprovides "*/qt5"
I said "yum info qt5-qtdeclarative" and got:
Installed Packages
Name : qt5-qtdeclarative
Arch : x86_64
Version : 5.5.1
Release : 2.el6
Size : 9.6 M
Repo : installed
From repo : epel
Summary : Qt5 - QtDeclarative component
URL :
License : LGPLv2 with exceptions or GPLv3 with exceptions
Description : Qt5 - QtDeclarative component.
Available Packages
Name : qt5-qtdeclarative
Arch : i686
Version : 5.5.1
Release : 2.el6
Size : 4.1 M
Repo : epel
Summary : Qt5 - QtDeclarative component
URL :
License : LGPLv2 with exceptions or GPLv3 with exceptions
Description : Qt5 - QtDeclarative component.
Then I said "yum whatprovides "*/qt5"" and got
qt5-qtbase-5.5.1-11.el6.i686 : Qt5 - QtBase components
Repo : epel
Matched from:
Filename : /usr/share/doc/qt5
Filename : /usr/share/qt5
Filename : /usr/lib/qt5
qt5-qtbase-devel-5.5.1-11.el6.x86_64 : Development files for qt5-qtbase
Repo : epel
Matched from:
Filename : /usr/include/qt5
qt5-qtbase-5.5.1-11.el6.x86_64 : Qt5 - QtBase components
Repo : epel
Matched from:
Filename : /usr/lib64/qt5
Filename : /usr/share/doc/qt5
Filename : /usr/share/qt5
qt5-qtbase-devel-5.5.1-11.el6.i686 : Development files for qt5-qtbase
Repo : epel
Matched from:
Filename : /usr/include/qt5
You need to install qt5-qtdeclarative-devel. For that matter install all devel packages for Qt 5 you'll be ready for the future ;)
That did it. Thank you, SGaist.
I am still unable to start the Qt Quick "Hello World" app by clicking on the green triangle at the lower left of the Qt Creator screen, but I can start it with Tools->External->Qt Quick->... (qmlscene) which I couldn't do before.
There certainly is a LOT (!!!) of devel stuff that doesn't get installed by default. Where is the best place to go where this level of detail in Qt installations is discussed clearly? Obviously I haven't found the best source of Qt docs yet.
Thanks...
Eric
You can take a look at Qt's own documentation to have an overview of the modules available. As for what to install exactly, it depends greatly on the package manager of your distribution. There might be a "qt-5-devel" package somewhere that will pull in all the dependencies needed, but it's something you have to check with yum. | https://forum.qt.io/topic/65323/module-qtquick-version-2-3-is-not-installed | CC-MAIN-2017-51 | refinedweb | 781 | 67.65 |
Bul so that I could write:

    class Object a where ...
    data Object = forall a. Object a => Object a

etc (this goes with H' proposal that the namespace should always be explicit on the module export/import list [1]). A good editor (hint: the one I'm writing!!!) will be able to highlight the uses of "Object" to make it clear which is a class, which is a Tycon, and which is a ValueCon, and the operation of replacing a concrete type with a class in the type signature to generalise some functions could be made easier with a good refactoring tool.

Best regards, Brian.

[1]

--
Logic empowers us and Love gives us purpose. Yet still phantoms restless for eras long past, congealed in the present in unthought forms, strive mightily unseen to destroy us. | http://www.haskell.org/pipermail/haskell-cafe/2006-August/017544.html | CC-MAIN-2013-48 | refinedweb | 134 | 71.65
ACME::Error - Never have boring errors again!
use ACME::Error SHOUT; warn "Warning"; # WARNING!
ACME::Error is a front end to Perl error styles.
$SIG{__WARN__} and $SIG{__DIE__} are intercepted. Backends are pluggable. Choose a backend by specifying it when you
use ACME::Error SomeStyle;
Writing backends is easy. See ACME::Error::SHOUT for a simple example. Basically your backend needs to be in the ACME::Error namespace and define just two subroutines, warn_handler and die_handler. The arguments passed to your subroutine are the same as those passed to the signal handlers; see perlvar for more info on that. You are expected to return what you want to be warned or died.
You can also use an import function. All arguments passed to ACME::Error after the style to use will be passed to the backend.
Casey West <casey@geeknest.com>
Copyright (c) 2002 Casey R. West <casey@geeknest.com>. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
perl(1). | http://search.cpan.org/dist/ACME-Error/lib/ACME/Error.pm | CC-MAIN-2016-22 | refinedweb | 177 | 69.68 |
Lab 3: Recursion and Midterm Review
Due at 11:59pm on 09/19/2016.

- Questions 1 through 3 must be completed in order to receive credit for this lab. Starter code for these questions is in lab03.py.
- Questions 4 through 11 are optional. It is recommended that you complete these problems on your own time. Starter code for these questions is in the lab03_extra.py file.
Note: When you submit, the autograder will run tests for all questions, including the optional questions. As long as you pass the tests for the required questions, you will receive credit...
Required Questions
Question 1: Common Misconception
Find the bug in the following recursive function.
def factorial(n): """Return n * (n - 1) * (n - 2) * ... * 1. >>> factorial(5) 120 """ if n == 0: return 1 else: return factorial(n-1)
You can unlock this question by:
python3 ok -q factorial_ok -u
The result of the recursive calls is not combined into the correct solution.
def factorial(n): if n == 0: return 1 else: return n * factorial(n-1)
Question 2:
Optional Questions
Note: The following questions are in lab03_extra.py.
Midterm Review
Question 4: Doge
Draw the environment diagram for the following code.
You do not need to submit or unlock this question through Ok. Instead, you can check your work with the Online Python Tutor, but try drawing it yourself first!
wow = 6 def much(wow): if much == wow: such = lambda wow: 5 def wow(): return such return wow such = lambda wow: 4 return wow() wow = much(much(much))(wow)
Question 5: Palindrome
A number is considered a palindrome if it reads the same forwards and backwards. Fill in the blanks '_' to help determine if a number is a palindrome. In the spirit of exam style questions, please do not edit any parts of the function other than the blanks.
def is_palindrome(n):
    """ Fill in the blanks '_____' to check if a number is a palindrome.

    >>> is_palindrome(12321)
    True
    >>> is_palindrome(42)
    False
    >>> is_palindrome(2015)
    False
    >>> is_palindrome(55)
    True
    """
    x, y = n, 0
    f = lambda: y * 10 + x % 10
    while x > 0:
        x, y = x // 10, f()
    return y == n
Use OK to test your code:
python3 ok -q is_palindrome
More Recursion Practice
Question 6: Common Misconception
Find the bug with this recursive function.
def skip_mul(n): """Return the product of n * (n - 2) * (n - 4) * ... >>> skip_mul(5) # 5 * 3 * 1 15 >>> skip_mul(8) # 8 * 6 * 4 * 2 384 """ if n == 2: return 2 else: return n * skip_mul(n - 2)
You can unlock this question by:
python3 ok -q skip_mul_ok -u
Once you unlock the question, fix the code in
lab03_extra.py, and run:
python3 ok -q skip_mul

When n is odd, this function never reaches the base case n == 2, and we will end up recursing indefinitely. We need to add another base case to make sure this doesn't happen.
def skip_mul(n): if n == 1: return 1 elif n == 2: return 2 else: return n * skip_mul(n - 2)
Question 7: Common Misconception
Find the bugs with the following recursive functions.
def count_up(n): """Print out all numbers up to and including n in ascending order. >>> count_up(5) 1 2 3 4 5 """ i = 1 if i == n: return print(i) i += 1 count_up(n-1) def count_up(n): """Print out all numbers up to and including n in ascending order. >>> count_up(5) 1 2 3 4 5 """ i = 1 if i > n: return print(i) i += 1 count_up(n)
You can unlock this question by:
python3 ok -q count_up_ok -u
Once you unlock the question, finish the function in
lab03_extra.py, and run:
python3 ok -q count_up
Hint: One possible solution uses a helper function to make recursive calls.
def count_up(n):
    """Print out all numbers up to and including n in ascending order.

    >>> count_up(5)
    1
    2
    3
    4
    5
    """
    def counter(i):
        if i <= n:
            print(i)
            counter(i + 1)
    counter(1)
Question 8: AB+C
Implement
ab_plus_c, a function that takes arguments
a,
b, and
c and computes
a * b + c. You can assume a and b are both positive
integers. However, you can't use the
* operator. Use recursion!
def ab_plus_c(a, b, c):
    """Computes a * b + c.

    >>> ab_plus_c(2, 4, 3)  # 2 * 4 + 3
    11
    >>> ab_plus_c(0, 3, 2)  # 0 * 3 + 2
    2
    >>> ab_plus_c(3, 0, 2)  # 3 * 0 + 2
    2
    """
    if b == 0:
        return c
    return a + ab_plus_c(a, b - 1, c)
Use OK to test your code:
python3 ok -q ab_plus_c
Question 9: Is Prime
Write a function
is_prime that takes a single argument
n and returns
True
if
n is a prime number and
False otherwise. Assume
n > 1. We implemented
this in the project functions to compute
the terms for odd and even numbers. Do so without using any loops or
testing in any way if a number is odd or even. (You may test if a
number is equal to 0, 1, or
n.)
Hint: Use recursion and a helper function!
Question 11: Ten-pairs: | http://inst.eecs.berkeley.edu/~cs61a/fa16/lab/lab03/ | CC-MAIN-2018-05 | refinedweb | 830 | 69.31 |