LoPy vs L01 (OEM module)

I have simple LoRa raw code that runs perfectly fine with two LoPy devices talking to each other. But the same simple code causes the exception error #11 EAGAIN when running on an L01 module. Pybytes, Device, Firmware, Python, and MicroPython all report the same on both devices. The code and output (from the L01) are shown below. The L01 always crashes on the 4th 'ping' from socket.send(). Is there some difference between the LoPy and the L01 OEM that requires a different LoRa setup or firmware version?

Code

```python
from network import LoRa
import socket
import machine
import time
import uos
import sys
from network import Bluetooth

lora = LoRa(mode=LoRa.LORA, region=LoRa.US915)
s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)

print("PING Unit")
print('Device: ' + uos.uname().machine)
print('Firmware: ' + uos.uname().release)
print('Python: ' + sys.version)
print('MicroPython: ' + str(sys.implementation.version[0]) + '.' +
      str(sys.implementation.version[1]) + '.' +
      str(sys.implementation.version[2]))
print('===============================================================================')

print('Switching off Bluetooth')
bt = Bluetooth()
bt.deinit()

s.setblocking(False)

while True:
    print("Send 'Ping'")
    s.send('Ping')
    print("Waiting...")
    data = s.recv(64)
    print(data)
    if s.recv(64) == b'Pong':
        print("Got 'Pong'")
    time.sleep(2)
```

Output

In case anybody else has this problem: in our case it was either a defective L01 OEM module or a manufacturing defect with the PCB. We built a second unit and the LoRa worked fine in that L01 module. It did not hang or crash, and the LoRa pings worked fine over several minutes of testing.

@anthony The transmission speed is the same for both devices. But as @mgranberry pointed out, the program logic is wrong in two respects:

a) s is set non-blocking. That means the code continues immediately once s.send() is called, without waiting for the data to be sent. Similarly, the first s.recv() returns immediately with either the data that was received beforehand, or None. So at the print statement "Waiting..." it does not wait; the code waits after the second s.recv().

b) You call s.recv() twice in a loop (that's what @mgranberry expressed with his comment).

The whole code is similar to the examples here.

@robert-hh Do you know how or why this would cause error #11 from the LoRa send command in the code example I listed? Even at its slowest I would assume the LoRa can push ~180 bps, so sending 4 chars every 2 seconds should be easy?

@mgranberry This code as-is will work fine with two LoPy boards that ping each other back and forth. The error actually occurs with the "s.send()" line. If I take the receive code out and just have the send and the sleep I will still get "EAGAIN".

@anthony The difference between LoPy (assuming LoPy 1) and L01 is the memory size. L01 has 4 MB SPIRAM, which LoPy1 does not have. That affects the timing of code: LoPy1 runs slightly faster, and external events in particular are served faster.

- mgranberry last edited by
I think you might have intended if s.recv(64) == b'Pong': to be if data == b'Pong':. I think you have an empty buffer.
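The logic fixes suggested in the thread (a blocking socket, a single recv per loop, and comparing the stored data rather than calling recv again) can be sketched as below. A tiny stub class stands in for the LoRa socket here so the control flow can be run anywhere; on a real board you would use the socket from the question instead:

```python
class FakeLoRaSocket:
    """Stand-in for socket.socket(socket.AF_LORA, socket.SOCK_RAW)."""

    def __init__(self, replies):
        self.replies = list(replies)
        self.sent = []

    def send(self, payload):
        # Blocking send: on real hardware this waits for the radio.
        self.sent.append(payload)

    def recv(self, bufsize):
        # Blocking recv: returns the next payload from the peer (b'' if none).
        return self.replies.pop(0) if self.replies else b''


def ping_once(s):
    """One iteration of the corrected ping/pong loop."""
    s.send('Ping')
    data = s.recv(64)          # call recv once and keep the result
    return data == b'Pong'     # compare the stored data, not a second recv()


s = FakeLoRaSocket([b'Pong', b''])
assert ping_once(s) is True    # first reply was 'Pong'
assert ping_once(s) is False   # no reply the second time
```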
https://forum.pycom.io/topic/4076/lopy-vs-l01-oem-module/1?lang=en-US
Exercise 16.1

In a large collection of MP3 files there may be more than one copy of the same song, stored in different directories or with different file names. The goal of this exercise is to search for these duplicates.

- Write a program that walks a directory and all of its sub-directories for all files with a given suffix (like .mp3) and lists pairs of files that are the same size. Hint: Use a dictionary where the key of the dictionary is the size of the file from os.path.getsize and the value in the dictionary is the path name concatenated with the file name. As you encounter each file, check to see if you already have a file that has the same size as the current file. If so, you have a duplicate-size file, so print out the file size and the two file names (one from the dictionary and the other file you are looking at).

- Adapt the previous program to look for files that have duplicate content using a hashing or checksum algorithm. For example, MD5 (Message-Digest algorithm 5) takes an arbitrarily long "message" and returns a 128-bit "checksum." The probability is very small that two files with different contents will return the same checksum.

```python
import hashlib
...
fhand = open(thefile, 'r')
data = fhand.read()
fhand.close()
checksum = hashlib.md5(data).hexdigest()
```

You should create a dictionary where the checksum is the key and the file name is the value. When you compute a checksum and it is already in the dictionary as a key, you have two files with duplicate content, so print out the file in the dictionary and the file you just read.
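One way to sketch both parts of the exercise together — grouping by size first, then confirming matches with an MD5 checksum (the suffix and directory are placeholders, and files are opened in binary mode so hashing works on any content):

```python
import hashlib
import os


def find_duplicates(root, suffix=".mp3"):
    """Return pairs of files under root with identical content."""
    # Part 1: group candidate files by size.
    sizes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(suffix):
                path = os.path.join(dirpath, name)
                sizes.setdefault(os.path.getsize(path), []).append(path)

    # Part 2: within each same-size group, compare MD5 checksums.
    duplicates = []
    checksums = {}
    for paths in sizes.values():
        if len(paths) < 2:
            continue  # a unique size means unique content
        for path in paths:
            with open(path, "rb") as fhand:
                digest = hashlib.md5(fhand.read()).hexdigest()
            if digest in checksums:
                duplicates.append((checksums[digest], path))
            else:
                checksums[digest] = path
    return duplicates
```

Calling find_duplicates("/music") would return a list of (first_path, duplicate_path) tuples ready to print.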
http://www.opentextbooks.org.hk/zh-hant/ditatopic/6833
The motivation for writing this blog came from a simple automation I did for a data-collection task that had been done manually for the last year at my current workplace. Gathering data from the many resources and websites on the internet has always been a challenge. To put it simply, a lot of reporting work involves situations where we have to gather data from a website. For example, in a use case where we want to gather information about companies, we might go to some website, search for a company, and then collect data from the company's information page.

Problem Description

In this blog-post, we will discuss one such use case and describe building a bot that automates it using Selenium (web crawling) and Beautiful Soup (web scraping). Here is the problem statement and the steps to be done:

- Go to the website.
- Search a company ID like 310164, 307494, 305637, 519675 in the register.
- Click on the "Search the Register" button. It may land on a 'search results page' OR a 'firm details page'.
- If it lands on the search results page, check the "Status" column in the table to see whether the firm is "authorised". If authorised, go to the "Name" column in the same row and click on the company link. The company link carries the URL for the "firm details page".
- If it lands on the "firm details page", then we get the URL for the firm details directly.
- Scrape the URL from step 4 or step 5. If the status of the firm is authorised on the "firm details page", extract the required information about the company.
- The details or attributes to be extracted are: company name, address, phone, website, authorisation status.

Let us start developing a Python-based bot which will crawl to the "firm details page" and scrape the required information about the firm.

Disclaimer: The mention of any company names, trademarks or data sets in this blog-post does not imply we can or will scrape them. They are listed only for illustration purposes and as general use cases.
Any code provided in this article is for learning purposes only; we are not responsible for how it is used.

1. Imports

```python
import string
import pandas as pd
from lxml import html
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
```

2. Web crawling : Selenium

The selenium package is used to automate web browser interaction from Python. Selenium requires a driver to interface with the chosen browser. Chrome, for example, requires chromedriver, which needs to be installed before the below examples can be run. Make sure to provide its path in the python code. The below code snippet performs steps 1, 2 and 3: opening the website, passing a company ID into the search box and clicking the "Search the Register" button.

```python
fp = open("cid.txt", "r")
cids = fp.readlines()
cids = [ix.strip("\n") for ix in cids]

# global list of lists of extracted fields for all CIDs
info = []

# Looping through all the company IDs for information extraction
for cid in cids:
    # Setting up chrome driver for automated browsing
    driver = webdriver.Chrome("Path/chromedriver")
    # Site to browse
    driver.get("")
    # url : url retrieved after search button click
    url = ""
    # data : list for keeping all extracted attributes for particular CID
    data = []
    # placing the id on search box
    driver.find_element_by_id('j_id0:j_id1:j_id33:j_id34:registersearch:j_id36:searchBox').send_keys(cid)
    # Clicking the search button
    button = driver.find_element_by_id('j_id0:j_id1:j_id33:j_id34:registersearch:j_id36:j_id39')
    button.click()
```

Now that we have searched for a company ID by clicking the button, we would like to check whether it goes to the "search results page" or the "firm details page". Here, the new URL contains the sub-string "" when it is the "firm details page" and the sub-string "" when it is the "search results page". We will also keep some waiting time, say 10 seconds, in order to let the webpage load. After that, we get the new URL.
```python
try:
    # wait for max 10 secs to load the new URL, check if URL is "firm details page"
    if WebDriverWait(driver, 10).until(EC.url_contains("")):
        url = driver.current_url
        # print(url)
        driver.quit()
except TimeoutException:
    # wait for max 10 secs to load the new URL, check if URL is "search results page"
    if WebDriverWait(driver, 10).until(EC.url_contains("")):
        url = driver.current_url
        # print(url)
        driver.quit()
```

Now we have got the URL of the new page where it landed. We will use Beautiful Soup to scrape that webpage. We will now perform step 4: if the webpage is the search results page, check the "Status" column in the table to see whether the firm is "authorised". If authorised, go to the "Name" column in the same row and follow the company link, which carries the URL for the "firm details page". Further, we pass the URL of the firm details page to the function parse_infopage(authorised_link, data). This function parses the firm details page to extract all the required fields about the company. Also, if the webpage is already the firm details page, then its URL can be passed directly to the parse_infopage(authorised_link, data) function. The below python code snippet does the same.
```python
# scraping using Request package of urllib library
req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
webpage = urlopen(req).read()

# if url contains search result page
if "" in url:
    soup = BeautifulSoup(webpage.decode("utf-8"), "html.parser")
    flag = 0
    base_weblink = ""
    authorised_link = ""
    # find the table with id "SearchResults"
    for table in soup.findAll("table", id="SearchResults"):
        for row in table.findAll("tr"):
            for col in row.findAll("td"):
                # finding the current status of firm
                for sp in col.findAll("span", class_="CurrentStatus Authorised Authorised search_popover"):
                    if col.text == "Authorised":
                        flag = 1
            if flag == 1:
                # when Authorised, find the "Name" column
                for name in row.findAll("td", class_="ResultName"):
                    for a in name.findAll("a"):
                        # get the hyperlink of firm details page
                        authorised_link = base_weblink + a["href"]
                flag = 0
    data.append(cid)
    # extract information from firm details page
    data, cols = parse_infopage(authorised_link, data)
    info.append(data)
# if url contains firm details page
elif "" in url:
    data.append(cid)
    # extract information from firm details page
    data, cols = parse_infopage(url, data)
    info.append(data)

# Create dataframe using data lists and column names
df = pd.DataFrame(info, columns=cols)
# writing the extracted data of tabular format in excel
writer = pd.ExcelWriter('companies_info.xlsx')
df.to_excel(writer, sheet_name='CID_Info')
writer.save()
```

3. Scraping & Parsing : Beautiful Soup

An important thing to note is that the information is extracted from the "Principal place of business" section inside "Contact Details". The python function written below parses the firm details page and extracts the necessary fields using Beautiful Soup.
```python
def parse_infopage(url_link, data):
    """
    input: URL of firm details page, a list to be returned
    returns: list of extracted data, column names
    """
    req = Request(url_link, headers={'User-Agent': 'Mozilla/5.0'})
    webpage_authorised = urlopen(req).read()
    # parsing firm details page with beautifulsoup
    soup_authorised = BeautifulSoup(webpage_authorised.decode("utf-8"), "html.parser")
    # columns list
    cols = ["CID"]
    cols.append("Company")
    # Extracting company name field from parsed html
    for name in soup_authorised.findAll("h1", class_="RecordName"):
        data.append(name.text)
    # Extracting information from "Principal place of business"
    for div in soup_authorised.findAll("div", class_="address section"):
        for h3 in div.findAll("h3", class_="addressheader"):
            if h3.text == "Principal place of business":
                # extract attribute/column names from span tags and class "addresslabel"
                for sp in div.findAll("span", class_="addresslabel"):
                    cols.append(sp.text.strip())
                # extract the data fields from div tags for respective attributes/columns
                for d in div.findAll("div", class_="addressvalue"):
                    data.append(' '.join(d.text.split()))
                # extract data fields from span tags
                for sp in div.findAll("span", class_="addressvalue"):
                    # decode the cloudflare-obfuscated email id
                    if "email" in sp.text.strip():
                        email = sp.find("a", class_="__cf_email__")
                        data.append(decodeEmail(email['data-cfemail']))
                    else:
                        data.append(sp.text.strip())
    # Extracting authorisation status by checking the statusbox field
    cols.append("Status")
    for stat in soup_authorised.findAll("span", class_="statusbox"):
        if "No longer authorised" in stat.text:
            data.append("Not Authorised")
        elif "Authorised" in stat.text:
            data.append("Authorised")
    return data, cols
```

4. Extracting Protected Email : Cloudflare Obfuscation

Coming towards the end of this blog-post, we penned down the decoding function for protected emails.
Cloudflare email address obfuscation helps in spam prevention by hiding email addresses appearing in web pages from email harvesters and other bots, while remaining visible to site visitors. An obfuscated email in an anchor tag looks like this:

data-cfemail="6a090b192a0b471a060b04440905441f01"

The below python function can decode the hexadecimal encoding into the characters which form the email id. Every two hexadecimal digits make one character, except the initial two hexadecimal characters, which are used only as the key to decode every character.

```python
def decodeEmail(e):
    de = ""
    k = int(e[:2], 16)
    for i in range(2, len(e) - 1, 2):
        de += chr(int(e[i:i + 2], 16) ^ k)
    return de
```

Passing the hexadecimal encoding above as a parameter to decodeEmail(e) will return cas@a-plan.co.uk as the decoded email string.

Final Thoughts

Finally, the python bot created in this blog-post extracts information about companies/firms and returns a data frame which is written to the Excel sheet 'companies_info.xlsx'. You can get the full python implementation of the demonstrated bot from the GitHub link here. Hope it was easy to go through the tutorial, as I have tried to keep it short and simple. Readers who are interested in web crawling and web scraping can get hands-on with the use case demonstrated in this blog-post. It could be a good start in this field. Crawling & Scraping 🙂
https://appliedmachinelearning.blog/2019/06/16/web-crawling-scrapping-using-selenium-beautiful-soup-automating-data-extraction-with-python/
Hi guys, for the question:

"Define a function named sum. This function expects two numbers, named low and high, as arguments. The function computes and returns the sum of all of the numbers between low and high, inclusive."

I've done:

```python
def sum(num1, num2):
    sumoftwo = 0
    index = 0
    for num1 < num2:
        sumoftwo = sumoftwo + (num1 + index)
        index = index + 1
    return sumoftwo
```

I think that's correct; if there are any errors please point them out. Also, when I tried to run it, it gives me an error of "invalid syntax" while highlighting the "<" — not sure what the problem is. Still a noob at Python, and any help is appreciated! Thanks
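For reference, Python's for statement needs something to iterate over (it is not a C-style condition loop), which is why the "<" is a syntax error. One working version of the function, sketched here with the name sum_range so it doesn't shadow the built-in sum:

```python
def sum_range(low, high):
    """Return the sum of all integers from low to high, inclusive."""
    total = 0
    for n in range(low, high + 1):  # range's stop is exclusive, so add 1
        total = total + n
    return total


print(sum_range(1, 5))  # prints 15, i.e. 1 + 2 + 3 + 4 + 5
```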
http://forums.devshed.com/python-programming/943269-sumoftwonumbers-last-post.html
I remember using the ternary operator in C++, where it was a question mark. I looked it up with Google and found some good examples in a StackOverflow question and answer and in that aforementioned Wikipedia example. Let's take a look at some of those and see if we can figure them out. Here is one of the simplest examples:

```python
x = 5
y = 10
result = True if x > y else False
```

This basically reads as follows: the result will be True if x is greater than y, otherwise the result is False. To be honest, this reminds me mightily of some of the Microsoft Excel conditional statements I've seen. Some people object to this format, but it's what the official Python documentation uses. The following is how you would write it in a normal conditional statement:

```python
x = 5
y = 10

if x > y:
    print "True"
else:
    print "False"
```

So you would save 3 lines of code by using the ternary operator. Anyway, you might want to use this structure when you're looping over a set of files and you want to filter out some sections or rows. For our next example, we'll loop over some numbers and check if they are odd or even:

```python
for i in range(1, 11):
    x = i % 2
    result = "Odd" if x else "Even"
    print "%s is %s" % (i, result)
```

You would be surprised how often you have to check the remainder of a division statement. This is a quick way to tell if the number is odd or even though. In the previously mentioned StackOverflow link, there's this funny piece of code that is shown as an example for those people who are still using Python 2.4 or older:

```python
# (falseValue, trueValue)[test]
>>> (0, 1)[5>6]
0
>>> (0, 1)[5<6]
1
```

This is rather ugly, but it does the job. This is indexing a tuple and is certainly a hack, but it's an interesting bit of code. Of course, it doesn't have the short-circuit behavior of the newer method that we looked at previously, so both values are evaluated. You may even run into oddball errors doing it this way where True is False and vice-versa, so I don't really recommend it.
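The lack of short-circuiting is easy to demonstrate: with the tuple-indexing form both branches run before one is selected, while the conditional expression evaluates only the branch it needs. A small sketch (shown in Python 3 syntax so it runs on current interpreters):

```python
calls = []

def branch(name, value):
    """Record that this branch was evaluated, then return its value."""
    calls.append(name)
    return value

# Tuple indexing evaluates BOTH branches before selecting one
result = (branch("false", 0), branch("true", 1))[5 < 6]
assert result == 1
assert calls == ["false", "true"]   # both ran

# The conditional expression evaluates only the branch it needs
calls.clear()
result = branch("true", 1) if 5 < 6 else branch("false", 0)
assert result == 1
assert calls == ["true"]            # only one ran
```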
There are also several ways to do the ternary with Python's lambda. Here's one from the Wikipedia entry mentioned earlier:

```python
def true():
    print "true"
    return "truly"

def false():
    print "false"
    return "falsely"

func = lambda b, a1, a2: a1 if b else a2
func(True, true, false)()
func(False, true, false)()
```

This is some funky code, especially if you don't understand how lambdas work. Basically, a lambda is an anonymous function. Here we create two normal functions and a lambda one. Then we call it with a boolean True and a False. The first call is read as follows: call the true function if the boolean is True, else call the false function. The second is slightly more confusing, as it appears to say that you should call the true method if the boolean is False, but it's really saying it will call the false method only if b is boolean False. Yeah, I find that a bit confusing too.

Wrapping Up

There are several other examples of ternary operators in the "Additional Reading" section below that you can check out, but at this point you should have a pretty good grasp of how to use it and perhaps when you'd use it. I would personally use this methodology when I know I have a simple True/False conditional I need to make and I want to save a few lines of code. However, I often tend to go with explicit over implicit because I know that I'll have to come back and maintain this code and I don't like having to figure out weird stuff like this all the time, so I would probably just go ahead and write the 4 lines. The choice is yours, of course.

Additional Reading

- "conditionals" in expressions (Python recipe)
- The Python Lambda
- Lambda instead of if – StackOverflow
- Stupid Lambda Tricks includes a ternary example
http://www.blog.pythonlibrary.org/2012/08/29/python-101-the-ternary-operator/
Why Does our Tail Reaper Program Work in Times of Market Turmoil? By Ernest Chan I generally don’t like to write about our investment programs here, since the good folks at the National Futures Association would then have to review my blog posts during their regular audits/examinations of our CPO/CTA. But given the extraordinary market condition we are experiencing, our kind cap intro broker urged me to do so. Hopefully there is enough financial insights here to benefit those who do not wish to invest with us. As the name of our Tail Reaper program implies, it is designed to benefit from tail events. It did so (+20.07%) during August-December, 2015’s Chinese stock market crash (even though it trades only the E-mini S&P 500 index futures), it did so (+18.38%) during February-March, 2018’s “volmageddon”, and now it did it again (+12.98%) during February, 2020’s Covid-19 crisis. (As of this writing, March is up over 21% gross.) There are many names to this strategy: some call it “crisis alpha”, others call it “convex”, “long gamma” or “long vega” (even though no options are involved), “long volatility”, “tail hedge”, or just plain old “trend-following”. Whatever the name or description, it usually enjoys outsize return when there is real panic. (But of course, PAST PERFORMANCE IS NOT NECESSARILY INDICATIVE OF FUTURE RESULTS.) Furthermore, our strategy did so without holding any overnight positions. Why is a trend-following strategy profitable in a crisis? A simple example will suffice. If a short trade is triggered when the return (from some chosen benchmark) exceeds -1%, then the trade will be very profitable if the market ends up dropping -4%. Vice versa for a long trade. (As recent market actions have demonstrated, prices exhibit both left and right tail movements in a crisis.) The trick, of course, is to find the right benchmark for the entry, and to find the right exit condition. Naturally, insurance against market crash isn’t completely free. 
Our goal is to prevent the insurance cost, which is essentially the loss that the strategy suffers during a stretch of bull market, from being too high. After all, if insurance were all we wanted, we could have just bought put options on the market index and watched them lose premium every month in "good" times. Preventing the loss of insurance premium requires a dose of market timing, assisted by our machine learning program that utilizes many, many factors to predict whether the market will suffer extreme movements in the next day. In most years, the cost (loss) is negligible despite the long bull market, except in 2019, when we lost 8.13%. That year, which seems a long time ago, the SPY was up 30.9%. (It was in the August of that year that we added the machine learning risk management layer.) But most investors have a substantial long exposure. A proper asset allocation to both Tail Reaper and a long-only portfolio will smooth out the annual returns and hopefully eliminate any losing year. (Again, PAST PERFORMANCE IS NOT NECESSARILY INDICATIVE OF FUTURE RESULTS.) But why should we worry about a losing year? Isn't total return all investors should care about? Recently, Mark Spitznagel (who co-founded Empirica Capital with Nassim Nicholas Taleb) wrote a series of interesting articles. He argued that even if a tail hedge strategy like ours returns an arithmetic average return of 0%, as long as it provides outsize positive returns during a market crisis, it will significantly improve the compound growth rate of a portfolio that includes both an index fund and the tail hedge strategy. I have previously written a somewhat technical blog post on this mathematical curiosity. The gist of the argument is that the compound growth rate of a portfolio is m - s²/2, where m is the arithmetic mean return and s is the standard deviation of returns.
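The m - s²/2 approximation is easy to check numerically. A quick sketch with made-up return figures (chosen for illustration only, not from any actual program):

```python
import math

# Hypothetical yearly returns, small enough for the approximation to hold
returns = [0.02, -0.01, 0.03, -0.02, 0.01]
n = len(returns)

m = sum(returns) / n                          # arithmetic mean return
s2 = sum((r - m) ** 2 for r in returns) / n   # variance of returns

approx_growth = m - s2 / 2                    # the m - s^2/2 approximation

# Exact compound (geometric) growth rate
wealth = math.prod(1 + r for r in returns)
exact_growth = wealth ** (1 / n) - 1

# For small returns the two agree closely; volatility drags growth below m
print(approx_growth, exact_growth)
assert abs(approx_growth - exact_growth) < 1e-4
```

Note how the variance term always subtracts from the arithmetic mean: that "volatility drag" is exactly what a tail hedge is meant to reduce.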
Hedging tail risk is not just for the psychological comfort of having no losing years — it is mathematically proven to improve long-term compound growth rate overall. PAST PERFORMANCE IS NOT NECESSARILY INDICATIVE OF FUTURE RESULTS. For further reading on convex strategies, please see the papers by Paul Jusselin et al “Understanding the Momentum Risk Premium: An In-Depth Journey Through Trend-Following Strategies” and Dao et al “Tail protection for long investors: Trend convexity at work” (Hat tip to Corey Hoffstein for leading me to them!).
https://predictnow-ai.medium.com/why-does-our-tail-reaper-program-work-in-times-of-market-turmoil-6e0f36cdb1b5?source=user_profile---------8----------------------------
Authorization Models with Auth0

In a typical application, you might have different "tiers" of users. Let's say you have a blog and a database of users who interact with the blog. The first set of users, let's call them subscribers, can only view public blog posts. Next you have the users who manage the blog and are permitted to see the blog's dashboard. Even within that set of users you might have admins and editors, both of which have different permissions. So how would we control who can see what?

A popular way to do this is with role based access control. We'd simply assign each of those users the role of either subscriber, admin, or editor and then associate certain permissions with that role. This is a great authorization model for a lot of applications, but sometimes as your application grows, you might find you're creating more and more one-off roles and permissions. Maybe you created a subsection of the blog that only users who have been subscribed for over a year have access to. Should you create a role specifically for that?

In our application, we're going to use our flexible GraphQL API with Auth0's rules to implement two other options for handling these more complex authorization scenarios: Attribute-based Access Control and Graph-Based Access Control.

Attribute-based Access Control (ABAC)

Attribute-based access control means we're authorizing our user to have access to something based on a trait associated with that user. So in the above example, instead of assigning each user who has been subscribed for over a year a special role for that, we'd just look at the created_at field associated with that user and allow or deny access based on that.

Graph-based Access Control (GBAC)

Graph-based access control is where we allow access to something based on how that data relates to other data. In the blog example, we might want to allow a guest author access to the posts section of the dashboard, but only let them view and edit posts that they wrote.
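The two models can be sketched in a few lines of plain Python — ABAC checks an attribute like created_at, GBAC checks a relationship like author_id. The field names and one-year threshold here are illustrative, not part of Auth0's API:

```python
from datetime import datetime, timedelta


def veteran_subscriber(user, now=None):
    """ABAC: authorize based on a user attribute, not a role."""
    now = now or datetime.utcnow()
    return now - user["created_at"] >= timedelta(days=365)


def can_edit_post(user, post):
    """GBAC: authorize based on the user -> post relationship."""
    return post["author_id"] == user["id"]


alice = {"id": 1, "created_at": datetime(2019, 6, 1)}
post = {"author_id": 1, "title": "Guest post"}

assert veteran_subscriber(alice)                                  # old account
assert can_edit_post(alice, post)                                 # she wrote it
assert not can_edit_post({"id": 2, "created_at": None}, post)     # someone else
```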
In this case, we need to check the relationship between a post and the user before allowing access. Usually we'll have an author_id field on the post that will link back to the id field of a user. If that relationship exists, we can grant post editing access for that user. If not, we can simply deny them on the spot.

So what does this have to do with graph-structured data? Modeling our data in a graph-like structure is a great way to bring flexibility to our data and really make relationships the top priority. As applications become increasingly complex, it gets much harder to manage roles and permissions.

"Graph-based access control is where we allow access to something based on how that data relates to other data."

Think of an application as massive as Facebook. You allow your profile to be viewed by your friends and also friends of friends. Some user clicks to view your profile, so now Facebook needs to run a query to search through all of that person's friends and then search all of the friends of those friends before it can authorize them to view your profile. That's a lot of work! By modeling these relationships in a graph, we can just select any point in the graph and "hop" to the next data point to see that relationship. Then we just define rules that use those relationships or data attributes to make authorization decisions. That's exactly what we're going to do in this article.

Why Flask and GraphQL

We're going to be using Flask as our application backend. Flask is a simple and flexible web framework for Python that provides us with tools that will make our application development much faster and easier. Flask is super lightweight, so it's a perfect choice for a simple but extensible backend.

We'll also be using GraphQL to build a simple API. GraphQL is a query language that allows us to create APIs or work with existing APIs. It was created by Facebook and publicly released in 2015.
Since then it's been gaining a ton of traction from both individual developers and also big companies such as Airbnb and Shopify.

"Flask is a simple and flexible web framework for Python!"

Instead of your data being accessed in a table format, imagine it's a graph. Once we establish what data points are exposed in that graph in the backend, we can use the frontend to pick and choose exactly what data we want and how we want it formatted. The client query dictates the response. Some notable features:

- It can be used with any language or database
- Can be used on top of existing REST APIs
- Gives the client side more control

Prerequisites

Because Flask is a web framework for Python, we need to have Python on our machines. You can check if it's installed on your system by opening your terminal and running:

```shell
python --version
```

If a version is not returned, you're going to need to download and install the latest version from the Python website. For this tutorial, we'll be using Python 3.

Next we're going to be using pip, which is a package manager for Python, similar to npm. If you downloaded a recent version of Python 3 (3.4 or higher), pip should have been installed with it. If not, you can install it here. You can double-check if it's installed with:

```shell
pip --version
```

Setting Up our Application

Before we jump into GraphQL, let's set up our Flask backend and then get our database ready for querying. First things first, let's set up our project directory. Create a folder called flask-quidditch-manager. Then enter into that folder and we'll create our first file, app.py, which will serve as the entry point for our application. You can do this in your terminal with the following commands:

```shell
mkdir flask-quidditch-manager
cd flask-quidditch-manager
touch app.py
```

You can open your preferred code editor now and let's get started with Python.
Creating a virtual environment

Whenever you're creating a Python project that requires external packages, it's a good idea to create a virtual environment. This will keep all of our dependencies isolated to that specific project. If we just installed every package globally on our system, we could eventually run into problems if we had a scenario where two different projects required different versions of the same package. So let's set up our virtual environment now. If you're on Python 3, the module venv should already be installed on your system. If not, you can find installation instructions here. Make sure you're in the project folder flask-quidditch-manager, and run the following command.

For Mac/Linux:

```shell
python3 -m venv env
```

For Windows:

```shell
py -3 -m venv env
```

This will create a folder in your project folder called env where we can store all of our dependencies. Next we just need to activate it. All you have to do is run the activate script that's inside the folder we created. In this case it's located at env/Scripts/activate. The env part of the path will be replaced by whatever you named the environment.

If you're on Windows use:

```shell
env\Scripts\activate
```

If you're on Mac or Linux use:

```shell
. env/bin/activate
```

Your terminal prompt should now show the environment name. Whenever you're ready to exit the environment, just run deactivate in your terminal.

Setting up Flask

Now that we have a virtual environment to store our dependencies, we're finally ready to set up Flask! In your terminal run:

```shell
pip install flask
```

This creates a site-packages folder nested inside your env folder. Now let's set up a basic skeleton app. Open up your empty app.py file and paste in the following:

```python
# app.py

# import flask
from flask import Flask

# initialise flask object
app = Flask(__name__)

# Create home route
@app.route('/')
def home():
    return 'Hello world'

if __name__ == '__main__':
    app.run(debug=True)
```

The first thing we're doing here is importing Flask. Next we're creating a Flask instance called app.
In the next line, we're creating a basic home route that returns Hello world. when called. This is just for testing purposes to make sure our server is running. The last line is actually how we'll start up our server. We're passing debug=True so that we don't have to restart the server every time we make a change to our code. Let's start it up now to make sure everything is working properly! python app.py Now if you go to localhost:5000 in your browser, you should be greeted with Hello World. Setting Up our Database Next up let's create our database! We're going to have three tables: players, teams, and games, so let's see how we can create those. SQLite and SQLAlchemy We'll be using SQLite for our application's database. It's a lightweight database that's great for small applications such as our Quidditch Manager. While Python comes with built-in support for SQLite, it can be a bit tedious to work with if you need to write a lot of SQL queries. To make things easier, we're going to be using SQLAlchemy, which is an ORM (object-relational mapper) that will help us to interact with our database. In your terminal, run the following pip command: pip install sqlalchemy Create a new file called database.py. Paste in the following code and then we'll go over it. from sqlalchemy import create_engine from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import scoped_session, sessionmaker # Database setup # Sqlite will just come from a file engine = create_engine('sqlite:///quidditch.db') db_session = scoped_session(sessionmaker(autocommit=False, autoflush=False, bind=engine)) Base = declarative_base() Base.query = db_session.query_property() First we need to import create_engine from the sqlalchemy package. Next we're going to create our database file using create_engine. This is the starting point for our SQLite database. Then we create a session so we can interact with the database. 
Next we construct a base class Base for our class definitions (we'll use this later when we're creating our models). Finally Base.query is going to be required for making our queries later on. Creating our models Next up we're going to create our models. A model is a class that represents the structure of our data. Each model will map to a table in our database and include information such as the field name, type, if it's nullable, and more. We can also define any relationships between our tables in our model classes. Create a new file called models.py. Our database will have three tables, players, teams, and games, so we're going to have one class for each of them. In models.py, paste in the following chunk of code and then we'll walk through all of it to see what's going on here. # models.py from database import Base from sqlalchemy import Column, ForeignKey, Integer, String from sqlalchemy.orm import backref, relationship # Create our classes, one for each table class Team(Base): __tablename__ = 'teams' id = Column(Integer, primary_key=True) name = Column(String(50)) rank = Column(Integer, nullable=False) players = relationship('Player', backref='on_team', lazy=True) def __repr__(self): return '<Team %r>' % self.name class Player(Base): __tablename__ = 'players' id = Column(Integer, primary_key=True) name = Column(String(50), nullable=False) position = Column(String(50), nullable=False) year = Column(Integer, nullable=False) team_id = Column(Integer, ForeignKey('teams.id'), nullable=False) def __repr__(self): return '<Player %r>' % self.name class Game(Base): __tablename__ = 'games' id = Column(Integer, primary_key=True) level = Column(String(30), nullable=False) child_id = Column(Integer, ForeignKey('games.id'), nullable=True) winner_id = Column(Integer, ForeignKey('teams.id'), nullable=False) loser_id = Column(Integer, ForeignKey('teams.id'), nullable=False) child = relationship('Game', remote_side=[id]) def __repr__(self): return '<Game %r>' % self.winner_id 
Here's the basic flow of what we're doing:

- Import the base class, Base, that our tables build on
- Define our three classes: Team, Player, and Game (note that they're all singular)
- Set the table name for each class
- Create variables that represent each column of each table
- Specify any attributes required of that column
- Define any relationships between tables
- Define what gets returned when we print an instance of the class (the __repr__ method)

These classes are basically the lifeblood of our database. When we create our database in the next step, it's going to set up the tables and columns exactly how we told it to in this file.

Just a quick sidenote: you may have noticed that in the teams table we have a players column that uses relationship(). This is how we create a one-to-many relationship using SQLAlchemy. All we're saying is that one team can have many players, but a player can only belong to one team. These relationships are important to define now so that we can model them in our graph later. You can learn more about creating relationships in SQLAlchemy in this article.

Creating and Seeding our Database

Now that we've done all that setup, we can finally create our database! In your terminal, type python to start a Python shell. We're first going to import Base and engine from database.py and our classes from models.py. Then we'll just run SQLAlchemy's create_all() method to create the database.

python
from database import Base, engine, db_session
from models import Team, Player, Game
Base.metadata.create_all(bind=engine)

You should now see a quidditch.db file in the root of your project folder. Our database structure now looks like this:

The final step before we move on to GraphQL is populating our database with data. We'll walk through the commands you can use in the Python shell to add data to the tables, but the data itself is pretty trivial and time-consuming to enter by hand, so head to the seeder.txt file in the GitHub repo to get all the data for this example.
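Under the hood, create_all() walks the models we defined and issues CREATE TABLE statements against SQLite. As a rough illustration, here's a sketch using only Python's stdlib sqlite3 module, with hand-written SQL approximating (not reproducing exactly) what SQLAlchemy generates for our three tables:

```python
import sqlite3

# Hand-written SQL approximating what Base.metadata.create_all() emits
# for the Team, Player, and Game models defined above.
SCHEMA = """
CREATE TABLE teams (
    id INTEGER PRIMARY KEY,
    name VARCHAR(50),
    rank INTEGER NOT NULL
);
CREATE TABLE players (
    id INTEGER PRIMARY KEY,
    name VARCHAR(50) NOT NULL,
    position VARCHAR(50) NOT NULL,
    year INTEGER NOT NULL,
    team_id INTEGER NOT NULL REFERENCES teams (id)
);
CREATE TABLE games (
    id INTEGER PRIMARY KEY,
    level VARCHAR(30) NOT NULL,
    child_id INTEGER REFERENCES games (id),
    winner_id INTEGER NOT NULL REFERENCES teams (id),
    loser_id INTEGER NOT NULL REFERENCES teams (id)
);
"""

conn = sqlite3.connect(":memory:")  # in-memory stand-in for quidditch.db
conn.executescript(SCHEMA)

# The three tables now exist, just as after create_all().
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['games', 'players', 'teams']
```

This is only to show what the ORM is doing for us; in the app itself we let SQLAlchemy generate the schema.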
Let's manually enter Gryffindor into the teams table. Go to your terminal and open the Python shell by typing python and then enter the following: team1 = Team(name='Gryffindor', rank=1) db_session.add(team1) db_session.commit() This is the process we'll use to add any new row into a table. We're just calling the class for that table and specifying the attributes. Note that we don't need to specify the id as that's auto-generated. Let's add our first player as well. We've already imported the classes, so we don't need to do it again. player1 = Player(name='Harry Potter', year=1, position='Seeker', on_team=team1) db_session.add(player1) db_session.commit() You may have noticed that for the last attribute we called it on_team when the column is called team_id. Because we've already defined a relationship between the players table and teams table, we can actually just use that backref we created earlier and assign it to the team1 variable we just created for the Gryffindor team. That way we don't have to go through the trouble of looking up what id Gryffindor was assigned. Pretty neat! To exit Python, just press ctrl + Z and hit enter. Getting Started with GraphQL Alright now that we've setup Flask and have our database ready to go, let's finally play with GraphQL! Integrating GraphQL with Graphene First we need to install a few dependencies so we can bring GraphQL into our application. pip install flask-graphql graphene graphene-sqlalchemy flask-graphql - Flask Package that will allow us to use GraphiQL IDE in the browser graphene - Python library for building GraphQL APIs graphene-sqlalchemy - Graphene package that works with SQLAlchemy to simplify working with our models Creating our schemas Next let's create our schema. The schema is going to represent the graph-like structure of our data so that GraphQL can know how to map it. Instead of the traditional tabular structure of data, imagine we have a graph of data. 
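That on_team backref is doing two jobs at once: it sets the player's foreign key for us and registers the player on the team's players list. Stripped of the SQLAlchemy machinery, the bookkeeping looks roughly like this (a plain-Python sketch, not the real ORM):

```python
class Team:
    def __init__(self, name, rank):
        self.name, self.rank = name, rank
        self.players = []              # the "many" side of the relationship

class Player:
    def __init__(self, name, year, position, on_team):
        self.name, self.year, self.position = name, year, position
        self.on_team = on_team         # the backref: points at the Team...
        on_team.players.append(self)   # ...and registers us on that team

team1 = Team(name="Gryffindor", rank=1)
player1 = Player(name="Harry Potter", year=1, position="Seeker", on_team=team1)

print(player1.on_team.name)             # Gryffindor
print([p.name for p in team1.players])  # ['Harry Potter']
```

With the real models, SQLAlchemy wires up both directions the same way when we assign on_team=team1.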
Each square in the image below represents a node and each line connecting them is considered and edge. Node - A node in a graph represents the data item itself, e.g. a player, game, or team Edge - An edge connects 2 nodes and represents the relationship between them, e.g. a player belongs to a team Each node will also have attributes associated with it. In this case we can see some of the position attributes such as Captain and Seeker, represented as ovals. If setup properly, GraphQL gives us the ability to select any tree from that graph. If we want to grab all players who are captains we can do that. If we want to grab all of the game data for a particular team, we can do that as well. GraphQL makes our data super flexible and gives the client more control over the type of data and structure of data that's returned to it. But before we can start doing any queries, we're going to have to setup our schema with the help of our models that we defined earlier. Luckily Graphene makes this pretty simple for us. Create a new file called schema.py and paste the following code in. from models import Team from models import Player from models import Game import graphene from graphene import relay from graphene_sqlalchemy import SQLAlchemyObjectType, SQLAlchemyConnectionField class PlayerObject(SQLAlchemyObjectType): class Meta: model = Player interfaces = (graphene.relay.Node, ) class TeamObject(SQLAlchemyObjectType): class Meta: model = Team interfaces = (graphene.relay.Node, ) class GameObject(SQLAlchemyObjectType): class Meta: model = Game interfaces = (graphene.relay.Node, ) class Query(graphene.ObjectType): node = graphene.relay.Node.Field() all_players = SQLAlchemyConnectionField(PlayerObject) all_teams = SQLAlchemyConnectionField(TeamObject) all_games = SQLAlchemyConnectionField(GameObject) schema = graphene.Schema(query=Query) This is a lot to digest, so let's break it down. 
- Import our models - Import our Graphene packages we installed earlier - For each class, tell Graphene to expose all attributes from that model - Create a query class - In the query class, define queries for getting all entries for each of the classes defined above Let's fill out that queries section a little more so we can demonstrate how to resolve more complex queries. Back in schema.py, keep everything the same, but add in the following code to the Query class: # schema.py from sqlalchemy import or_ class Query(graphene.ObjectType): node = graphene.relay.Node.Field() all_players = SQLAlchemyConnectionField(PlayerObject) all_teams = SQLAlchemyConnectionField(TeamObject) all_games = SQLAlchemyConnectionField(GameObject) # Get a specific player (expects player name) get_player = graphene.Field(PlayerObject, name = graphene.String()) # Get a game (expects game id) get_game = graphene.Field(GameObject, id = graphene.Int()) # Get all games a team has played (expects team id) get_team_games = graphene.Field(lambda: graphene.List(GameObject), team = graphene.Int()) # Get all players who play a certain position (expects position name) get_position = graphene.Field(lambda: graphene.List(PlayerObject), position = graphene.String()) # Resolve our queries def resolve_get_player(parent, info, name): query = PlayerObject.get_query(info) return query.filter(Player.name == name).first() def resolve_get_game(parent, info, id): query = GameObject.get_query(info) return query.filter(Game.id == id).first() def resolve_get_team_games(parent, info, team): query = GameObject.get_query(info) return query.filter(or_(Game.winner_id == team, Game.loser_id == team)).all() def resolve_get_position(parent, info, position): query = PlayerObject.get_query(info) return query.filter(Player.position == position).all() schema = graphene.Schema(query=Query) So what's going on here? We're adding some more complex queries that can't just rely on the models above to display their data. 
For example, we're expecting GraphQL to get all games a team has played, but we haven't told it how to do that. We have to create resolvers that will work with our SQLite database and get that information added to the graph.

get_player

This will allow us to request any single player by name. We're passing in the PlayerObject, so we'll have access to all attributes for that player. Now we just need to set up a function to resolve that player, meaning the actual query we do on the database to get them. We're just searching the player table until we find a player whose name is equal to the one we passed in.

get_game

This is similar to get_player, except here we're getting a single game by id.

get_team_games

Here we're requesting data about all games played by a certain team. We're going to allow the client to pass in a team's id, and from there they can request any information they'd like about games that team has either won or lost. When we resolve that query, we're just searching the database for any games where the team's id matches the winner_id or loser_id. Also note that back in the get_team_games variable, we need to specify that we want a List of games instead of just one.

get_position

Our final query will allow the client to specify a player's position, and then we'll return all players who match that position.

Testing Queries with GraphiQL

Now that we have our schema set up, let's see how we can actually make those queries from the client.

GraphiQL

GraphQL comes with an awesome IDE called GraphiQL that allows us to test our GraphQL queries directly in the browser. This will let us test our API calls in the same structure that the client will use. Back in app.py, let's create a new endpoint for the route /graphql.
# app.py from flask import Flask from flask_graphql import GraphQLView from schema import schema # initialise flask object app = Flask(__name__) app.add_url_rule( "/graphql", view_func=GraphQLView.as_view("graphql", schema=schema, graphiql=True) ) if __name__ == '__main__': app.run(debug=True) Your final app.py file should match the above. We aren't using that original home route anymore that we created for testing purposes, so you can go ahead and delete that now. Make sure you still have your app running ( python app.py) and head on over to localhost:5000/graphql. We can enter our test queries on the left and it will immediately spit out the results on the right. This is very similar to how a client would consume our GraphQL API, so if we ever wanted to extend this example to have a frontend, these are the queries we'd use. Let's test out one of our initial queries now. Get all players with name { allPlayers { edges { node { name } } } } Think back to that graph of our data that we had above. To get all players, first we need to walk along all the lines in the graph (edges) that point to each player (nodes). Once we hit a node, we have access to all attributes of that node that were defined in our schema, which is this case is everything. Note that by convention, when we're making GraphQL queries we have to use camel case. In a normal REST API, if you did a query to get a user it might return a lot of unnecessary attributes about the user. With GraphQL, we can request exactly what we want. Let's look at another example to demonstrate this. Get all players with their name, team name, and position { allPlayers { edges { node { name position onTeam { name rank } } } } } This time we're going to request all players with their name, position, and team name. We're able to access onTeam because back when we setup our models, we defined a relationship between teams and players where we created that pseudo-column on the players table. This is how we can use it now! 
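The response to a query like allPlayers comes back graph-shaped: a dictionary of edges, each wrapping a node. Here's a sketch of walking that structure the way a client would, using a hand-written response dict in place of a live server (the second player, "Oliver Wood", is invented data for illustration):

```python
# Hand-written stand-in for the JSON a GraphQL server would return for the
# allPlayers query above. Note the camelCase keys: by convention, graphene
# exposes snake_case fields like all_players as allPlayers.
response = {
    "data": {
        "allPlayers": {
            "edges": [
                {"node": {"name": "Harry Potter"}},
                {"node": {"name": "Oliver Wood"}},
            ]
        }
    }
}

# Walk along the edges and visit each node, as described above.
names = [edge["node"]["name"]
         for edge in response["data"]["allPlayers"]["edges"]]
print(names)  # ['Harry Potter', 'Oliver Wood']
```

Every edges/node query in this section unwraps the same way on the client side.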
Instead of just getting back an id, we can request the name directly. Get a single player with their position, team name, and team rank query { getPlayer(name: "Harry Potter") { name position onTeam { name rank } } } GraphQL also lets you pass in arguments. This time we just want a single player by name and we also want to get their position, team name, and team rank. Get all players whose position is "Seeker" query { getPosition(position: "Seeker") { name onTeam { name } } } For our final example, let's get all players, but we only want those who are Seekers. Get a game and the child game associated with it query { getGame(id: 8) { level winnerId loserId child { id level winnerId loserId } } } This is a great use case for graph-structured data. In this scenario we basically have a sports bracket. We're asking for attributes of a specific game such as who won and who lost. We can also hop over to the next game node and see what the child (or results of this game) was and then get information about that game as well. Creating Auth0 Authorization Rules As mentioned at the beginning of this article, there are a few different ways we can authorize a user to have certain permissions in an application. The most widely used one is role based access control, which is where we have a user and we assign it roles. The roles then dictate the permissions that user has. This structure works fine for small simple applications, but a lot of larger applications make authorization decisions that rely heavily on either attributes of a user or the relationships a user has to data. Now that we've created our GraphQL API, we can use that flexible data to implement two different authorization models: Attribute-based Access Control and Graph-Based Access Control. Creating an ABAC rule Attribute based access control means we're authorizing our user to access something based on an attribute of that user, resources, or the request at hand.. 
In our quidditch example, let's say our application has special forums where all players with certain attributes can chat with each other. For example, every player who is in the same year at Hogwarts will be able to access the chat for their year. It doesn't matter what team they're on, as long as they have the same value for year. We can actually create this rule pretty easily through the Auth0 dashboard. Let's see it in action.

First, sign up for a free Auth0 account here. You'll be prompted to create a tenant domain or just accept the auto-generated one. Fill out your account information and then you'll enter into the dashboard. Click on "Rules" on the left-hand side.

Auth0 rules are special functions you can create that will run whenever a user logs into your application. They allow us to add information to a user's profile, ban specific users based on certain attributes, extend permissions to users, and more.

Press "Create Rule" and let's make a rule that will extend a chat permission to a user based on what year they're in at Hogwarts.

// Give the user permission to access the chat for their year
function (user, context, callback) {
  const axios = require('axios');
  const name = user.name;
  axios({
    url: '',
    method: 'post',
    data: {
      query: `
        {
          getPlayer(name: "${name}") {
            name
            position
            year
          }
        }
      `
    }
  }).then((result) => {
    if (result.data.data.getPlayer.year) {
      let playerYear = result.data.data.getPlayer.year;
      context.accessToken.scope = context.accessToken.scope || [];
      // Grant access to this year's chat
      context.accessToken.scope.push(`year_${playerYear}_chat`);
      return callback(null, user, context);
    } else return callback(new UnauthorizedError('Access denied.'));
  }).catch(err => {
    return callback(err);
  });
}

First we're going to require axios so we can make the call to our GraphQL API. We have access to the user who's trying to access the chat through the user variable. Let's just grab the name from the user and pass that into our getPlayer query.
Of course in the real world we wouldn't use name since that isn't unique, but this example is just for demonstration. Next we just need to wait for this response and when it comes back, check if that user has a year set. If so, we push the permission for access to that year's chat onto their access token's scope. Let's test that this works. Click "Try this rule" and we can run the rule with a mock user. Our user during login This is what the user object looks like before logging in. We have our user's basic information like id and name. Then in the next image we can see the user's context object, which holds information about the authentication transaction. Notice that the accessToken scope is currently empty. Click "Try" so we can run this rule against this user. After logging in Now our user is returned and if you look at the context object, we can see a year_2_chat permission has been added to the access token's scope. Denying a user This is a quick way to grant permissions dynamically. We can setup our app so that in order to access a certain year's chatroom, you must have the correct permission for that year. So if a player in her 3rd year tries to access Year 2 Chat, she will be denied. Creating a GBAC rule Next up, let's create our graph based rule. For this scenario, let's imagine that we need to restrict view access of player's profiles based on what team they're on. A player can see the profile of every other player on their team, but no one else. We want to create a rule that jumps in after a user logs in and determines what players the user will be able to see. First we'll run the getPlayer query for the user that's logging in. In that query, we'll use the onTeam relationship to pull what team the user is on. From there we can use the players relationship to grab all of the players that are on that team. 
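Once the rule has pushed a scope like year_2_chat onto the access token, the application still has to enforce it on each request. The enforcement itself is just a membership check. Here's a sketch in plain Python, with a hand-written token payload standing in for a decoded access token (the field names here are illustrative, not a fixed Auth0 schema):

```python
# Hand-written stand-in for a decoded access token after the ABAC rule ran.
access_token = {"sub": "auth0|1234", "scope": ["read:profile", "year_2_chat"]}

def can_access_chat(token, year):
    """Allow entry only if the login rule granted this year's chat scope."""
    return f"year_{year}_chat" in token.get("scope", [])

print(can_access_chat(access_token, 2))  # True: this player is in year 2
print(can_access_chat(access_token, 3))  # False: year 3 chat is off limits
```

This is how the third-year player from the previous section ends up denied when she tries to open Year 2 Chat.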
This is the query and the data that we're going to use to determine what the user can access: Create a new rule with the following: function (user, context, callback) { const axios = require('axios'); if (! user.id) return callback(new UnauthorizedError('Access denied. Please login.')); axios({ url: '', method: 'post', data: { query: ` { getPlayer(name: "${user.name}") { name onTeam { name players { edges { node { name position year } } } } } } ` } }).then((result) => { if (result.data.data.getPlayer.onTeam) { context.viewablePlayers = result.data.data.getPlayer.onTeam.players.edges; return callback(null, user, context); } else return callback(new UnauthorizedError('Please join a team to see players.')); }).catch(err => { return callback(err); }); } Before/during login Harry Potter clicks the login button to get into his dashboard. The rule will run and modify the context object based on those relationships. Just for demonstration purposes to verify it's working, we'll add his list of viewable players to the context object. We could also add specific permissions based on this information as well. After logging in Harry Potter is in and now has access to these teammates: Wrap Up We've covered a lot in this post and even though it takes some work to setup, I hope you can see the value of integrating GraphQL into your application. It gives the client the power to request exactly what they want and it also can help expand the capabilities of your application's authorization flow. We can simplify this even further by using rules in Auth0's dashboard to extend permissions or assign roles based on certain attributes or relationships. Thanks for following along and be sure to leave any questions below!
https://auth0.com/blog/authorization-series-pt-3-dynamic-authorization-with-graphql-and-rules/
Wednesday, March 19, 2008

C++ TR1: stdint.h still missing from Visual Studio!

Saturday, March 15, 2008

C++ TR1: array VS 2008 Bug

#include <iostream>
#include <array>

using namespace std;
using namespace std::tr1;

int main() {
    array<int, 4> arr = {1, 2, 3, 4};
    cout << "size: " << arr.size() << endl;
    cout << "max_size: " << arr.max_size() << endl;
    return 0;
}

Produces the output:

size: 4
max_size: 1073741823

instead of:

size: 4
max_size: 4

If we take a look at the implementation of max_size() we can see the problem:

size_type max_size() const {
    // return maximum possible length of sequence
    size_type _Count = (size_type)(-1) / sizeof(_Ty);
    return (0 < _Count ? _Count : 1);
}

Instead of simply returning N (the size of the array), it performs the same computation as if this was a vector. This issue has been logged as a bug with Microsoft and will hopefully be fixed before the "Gold" release of the feature pack.
http://stephendoyle.blogspot.com/2008_03_01_archive.html
Up to [cvs.NetBSD.org] / src / sys / kern

Revision 1.49 / (download) - annotate - [select for diffs], Mon Feb 7 18:43:26 2000 UTC (15 years, 9 months ago) by jonathan
Branch: MAIN
CVS Tags: chs-ubc2-newbase
Changes since 1.48: +5 -2 lines
Diff to previous 1.48 (colored)

Make kernel SOMAXCONN patchable. Will add sysctl once we decide on namespace.
http://cvsweb.netbsd.org/bsdweb.cgi/src/sys/kern/uipc_socket.c?only_with_tag=chs-ubc2-newbase
Add a tag to a page Add tags to a page by adding tags in the frontmatter with values inside brackets, like this: --- title: 5.0 Release Notes permalink: /doc/en/contrib/release_notes_5_0.html tags: [formatting, single_sourcing] --- Tags overview To prevent tags from getting out of control and inconsistent, first make sure the tag appears in the _data/tags.yml file. If it’s not there, the tag you add to a page won’t be read. This helps to ensure that you use tags consistently and don’t add new tags without the corresponding tag archive pages. Additionally, you must create a tag archive page similar to the other pages named tag_{tagname}.html in the tags folder. This theme doesn’t auto-create tag archive pages. For simplicity, make all your tags single words (connect them with hyphens if necessary). Note: We’ve modified the original theme’s tag setup as follows: - Tag archive pages must have a langproperty that specifies the language (e.g. en). The page will list only pages that match that language, since you probably only want to view pages of your own language. - Tag archive pages will list only pages that also match the sidebarproperty. This ensures that the tag archive displays only pages relevant to the same product, for example LoopBack 2.x. - Above means that each language + product must have a tagsfolder with tag archive files, e.g. en/lb2and en/lb3each have a tagsfolder with the requisite tag archive files. - Tag archive lists only pages, not posts (since we’re not using posts on this site), and that’s removed from the table generated by taglogic.html. Setting up tags Tags have a few components. In the _data/tags.ymlfile, add the tag names you want to allow. For example: allowed-tags: - getting_started - overview - formatting - publishing - single_sourcing - special_layouts - content types Create a tag archive file in the tagsfolder for each tag in your tags_doc.ymllist. Name the file following this pattern: tag_collaboration.html. 
Each tag archive file needs only this:

---
title: "Collaboration pages"
tagName: collaboration
search: exclude
permalink: /doc/en/contrib/tag_collaboration.html
sidebar: contrib_sidebar
---
{% include taglogic.html %}

Note: In the _includes/mydoc folder, there's a taglogic.html file.

Change the title, tagName, and permalink values to be specific to the tag name you just created. By default, the _layouts/page.html file displays the page's tags as buttons; see labels.html for more options on button class names.

Retrieving pages for a specific tag

If you want to retrieve pages outside of a particular tag_archive page, you could use this code:

navigation pages:
<ul>
{% for page in site.pages %}
{% for tag in page.tags %}
{% if tag == "navigation" %}
<li><a href="{{page.permalink}}">{{page.title}}</a></li>
{% endif %}
{% endfor %}
{% endfor %}
</ul>

Here's how that code renders:

navigation pages:
https://loopback.io/doc/en/contrib/tags.html
ASP.NET MVC is a modern Model-View-Controller framework for building web applications built on Microsoft's large and reliable .NET environment. You probably already knew that. In fact, you're probably building an ASP.NET MVC app and you're looking to globalize it so that you can serve it in different languages. Well, have no fear. In this article, we'll build a small demo app and globalize its UI and routes, giving you a foundation to build on as you develop and deliver your app to users from all over the world. ASP.NET MVC i18n, here we come!

Globalization, i18n, l10n … Oh My!

Outside of .NET, we often refer to the process of getting an application ready for delivery to people in different parts of the world as internationalization, abbreviated as i18n. This usually means not hard-coding our UI strings so that their translations can be used dynamically. It also often means we're aware of regional differences when it comes to dates and calendars, currency, and more. It's worth noting that Microsoft calls this process globalization in their documentation. So we'll use the terms globalization and i18n interchangeably here.

Localization, or l10n, is the process of building on i18n and providing the actual translations and regional formatting required for a given locale. That's mostly fancy talk for l10n == translation (although there's a bit more to it). Oh, and while we're at it: a locale is a combination of a language and a region, like "Canadian English", and is often denoted with a code like "en-CA". In .NET, this is called a culture. Again, we'll use the terms locale and culture interchangeably here.

Alright, enough with the semantics. Let's get to building.

The Demo App

Our little demo will be a simple web app called Heaventure, a foray into the world of constellations.
Our home page will list constellations Each constellation will have a details page Nothing too crazy, and it will allow us to cover basic i18n pretty well. 🔗 Resource » You can get all the code for the app we will build here from the app’s GitHub repo. Framework and Package Versions We’re using the following IDE, frameworks, and packages to build this demo app, with versions at the time of writing. - Visual Studio 2019 - .NET 4.7 - ASP.NET MVC 5.2 We’ll also be working in C#, although a lot of what we cover should apply to any language that works on top of .NET. ✋🏽 Heads Up » We’re using the traditional .NET framework, which generally requires Windows, not to be confused with .NET Core, the newer cross-platform variant of .NET. Creating the Project We’ll get started by opening Visual Studio and creating a new project. In the Create a new project dialog, let’s select the ASP.NET Web Application (.NET Framework) template with the C# label (again not ASP.NET Core) and click Next. Using the search box can help filter to the template we want In the Configure your new project dialog, we can enter the name of our app, and click Create. We’re using .NET 4.7 here And, in the Create a new ASP.NET Web Application dialog, we can select the MVC template and click Create. Would you create the project already?! Alright, that should be it for creating the project. If we now run the project using the green play button in Visual Studio, we should be greeted with a placeholder home page. Featuring out-of-the-box Bootstrap CSS for styling Building our App Let’s start cleaning up some of the scaffolding that Visual Studio has given us and get our own app logic and styles in place. We’ll start with a mock view model that represents our constellations. We won’t be touching the database layer in this article. If you would like us to cover database globalization with ASP.NET MVC, please let us know in the comments below :). 
For now, some hard-coded data will get us globalizing the front end.

using System;
using System.Collections.Generic;

namespace Heaventure.Data
{
    public class Constellation
    {
        public int Id { get; private set; }
        public string Name { get; private set; }
        public string Description { get; private set; }
        public string ImageUrl { get; private set; }
        public int StarCount { get; private set; }
        public DateTime CreatedAt { get; private set; }

        public static List<Constellation> All()
        {
            return new List<Constellation>()
            {
                new Constellation()
                {
                    Id = 1,
                    Name = "Capricornus",
                    Description = "Capricornus /ˌkæprɪˈkɔːrnəs/. Its symbol is Capricorn.svg (Unicode ♑).",
                    ImageUrl = "~/Content/Images/Capricornus.png",
                    StarCount = 3,
                    CreatedAt = new DateTime(2020, 04, 22)
                },
                new Constellation()
                {
                    Id = 2,
                    Name = "Aries",
                    Description = "…",
                    ImageUrl = "~/Content/Images/Aries.png",
                    StarCount = 18,
                    CreatedAt = new DateTime(2020, 04, 20)
                },
                new Constellation()
                {
                    Id = 3,
                    Name = "Hydrus",
                    Description = "Hydrus /ˈhaɪdrəs/ is a small constellation in the deep southern sky. It was one of twelve constellations created by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman.",
                    ImageUrl = "~/Content/Images/Hydrus.png",
                    StarCount = 18,
                    CreatedAt = new DateTime(2020, 03, 30)
                },
                new Constellation()
                {
                    Id = 4,
                    Name = "Puppis",
                    Description = "Puppis /ˈpʌpɪs/ …",
                    ImageUrl = "~/Content/Images/Puppis.png",
                    StarCount = 9,
                    CreatedAt = new DateTime(2020, 04, 14)
                },
                new Constellation()
                {
                    Id = 5,
                    Name = "Telescopium",
                    Description = "Telescopium is a minor constellation in the southern celestial hemisphere, one of twelve named in the 18th century by French astronomer Nicolas-Louis de Lacaille and one of several depicting scientific instruments. Its name is a Latinized form of the Greek word for telescope.",
                    ImageUrl = "~/Content/Images/Telescopium.png",
                    StarCount = 2,
                    CreatedAt = new DateTime(2020, 04, 19)
                },
                new Constellation()
                {
                    Id = 6,
                    Name = "Ursa Major",
                    Description = "… referring to and contrasting it with nearby Ursa Minor, the lesser bear.",
                    ImageUrl = "~/Content/Images/UrsaMajor.png",
                    StarCount = 20,
                    CreatedAt = new DateTime(2020, 03, 21)
                }
            };
        }

        public static Constellation FindById(int id) =>
            All().Find(constellation => constellation.Id == id);
    }
}

Constellation has a few simple properties, a hard-coded All() method that retrieves a List<Constellation>, and a FindById() method that finds and returns a constellation by its Id. Let's wire this model up to our Home controller.

using Heaventure.Data;
using System.Web.Mvc;

namespace Heaventure.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            var model = Constellation.All();
            return View(model);
        }

        public ActionResult Details(int id)
        {
            var model = Constellation.FindById(id);
            return View(model);
        }
    }
}

We just query the data and pass it on to our views. Speaking of which, let's build those.

@model List<Heaventure.Data.Constellation>
@{ ViewBag.
@foreach (var constellation in Model) {
    <div class="col-md-4">
        <div class="panel panel-default">
            <div class="panel-body panel-img-container">
                <a href="@Url.Action("Details", new { Id = constellation.Id })">
                    <img src="@Url.Content(constellation.ImageUrl)" class="img-responsive" />
                </a>
            </div>
            <div class="panel-footer">
                <h3 class="panel-title text-center">
                    @Html.ActionLink(
                        constellation.Name,
                        "Details",
                        new { Id = constellation.Id })
                </h3>
            </div>
        </div>
    </div>
}
</div>

Our index view displays our constellations as columns, with images and names. The image and name of each constellation link to its respective details page.
@model Heaventure.Data.Constellation
@{
    ViewBag.Title = Model.Name;
}
<div class="row">
    <div class="col-md-4">
        <img src="@Url.Content(Model.ImageUrl)" class="img-responsive img-rounded" />
    </div>
    <div class="col-md-8">
        <h2 class="mt-0">@Model.Name</h2>
        <p>@Model.Description</p>
        <dl class="dl-horizontal">
            <dt>Number of Stars</dt>
            <dd>@Model.StarCount</dd>
            <dt>Added</dt>
            <dd>@Model.CreatedAt</dd>
        </dl>
    </div>
</div>

In our details view, we simply display the constellation’s image and properties in an orderly fashion. We also update our CSS, adding a Bootstrap theme from Bootswatch called Cyborg, removing extraneous views and actions, and adding our constellation images.

🔗 Resource » If you would like to see all the changes we made up to this point, checkout the commit tagged “start” in the demo app’s GitHub repo. You can also checkout the start commit if you want to code along with us, and you want to get to globalization right away, without building everything we have up to this point yourself.

When we run our app now, we see this beauty:

Humans have always connected the dots peppering the void of space

The Great Bear growls

Using Resource Files for Localized Messages

.NET supports resource files (.resx) for translation messages. It can be a bit tricky to set up resource files if you’ve never done it before, however. Let’s walk through it step-by-step.

Creating Resource Files in Visual Studio

First, let’s create a folder/namespace that houses our resource files. In the Visual Studio Solution Explorer, we can right-click on our project (Heaventure), and select Add > New Folder. We can name this folder anything we want. I’ll go with Resources.

Next, let’s create our default resources file. We can right-click on the folder we just created and select Add > New Item. There doesn’t seem to be a template for resource files in Visual Studio 2019, but there’s an easy workaround for this. We can select the Visual C# > General tab in the sidebar and select Text File.
Then, we can name the file Resources.resx, and click Add.

Make sure to change the file extension to .resx

Once we’ve added the file, we can open it in Visual Studio.

A collection of name-value pairs

✋🏽 Heads Up » At this point, we need to make sure to click the Access Modifier dropdown and select Public. Otherwise, our resource file won’t work.

We just created our app’s default, English resource file. We can now repeat the above process for each additional culture our app supports. We need to follow the naming convention Resources.{culture-code}.resx, otherwise .NET won’t load the correct file when we switch cultures later. I’ll add an Arabic resource file named Resources.ar.resx, and make sure to set its Access Modifier to Public.

Adding Translations to Resource Files

At this point, we can add a string to Resources.resx. Let’s open the file and add our application’s name in English.

Don’t forget to save the file

If we want to add an Arabic translation for our app’s name, we can open our Resources.ar.resx file and add a string with the same Name we used in our English Resources.resx. We then can add the Arabic translation as the Value for the string.

Same name, different language

Using Resource Strings in Our Views

Let’s pull our newly added string into our Views > Shared > _Layout.cshtml file.

<!DOCTYPE html>
<html>
<head>
    <!-- ... -->
    <title>@ViewBag.Title - @Heaventure.Resources.AppName</title>
    <!-- ... -->
</head>
<body>
    <div class="navbar navbar-inverse navbar-fixed-top">
        <div class="container">
            <div class="navbar-header">
                <!-- ... -->
                @Html.ActionLink(
                    Heaventure.Resources.AppName,
                    "Index", "Home",
                    new { area = "" },
                    new { @class = "navbar-brand" })
            </div>
            <!-- ... -->
        </div>
    </div>
    <!-- ... -->
</body>
</html>

Instead of the hard-coded string, we’re now using Heaventure.Resources.AppName. To see the benefit of what we’ve just done, we can go into our HomeController‘s Index action and set the culture to Arabic before we return our view.
using Heaventure.Data;
using System.Web.Mvc;

namespace Heaventure.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            var ar = new System.Globalization.CultureInfo("ar");
            System.Threading.Thread.CurrentThread.CurrentCulture = ar;
            System.Threading.Thread.CurrentThread.CurrentUICulture = ar;

            var model = Constellation.All();
            return View(model);
        }

        // ...
    }
}

We’ll go through setting culture in more detail in a little bit. We’re just trying to see if our resource files are working for now. After adding the code above, we can run our app and visit the root route (/) to load the index view.

We now have translated messages!

We can now remove the hard-coded culture setting we added to HomeController; we won’t be needing it.

🗒 Note » .NET will automatically fall back onto the string with the same name in the default Resources.resx if it can’t find it in Resources.{current-culture}.resx.

Adding the Resources Namespace to Web.Config

Typing Heaventure.Resources.{Name} everywhere we want a translated message seems a bit too verbose. We don’t have to use the fully qualified namespace, however. We can make our lives easier by adding the Heaventure.Resources namespace to our Web.config.

<?xml version="1.0"?>
<configuration>
  <!-- ... -->
  <system.web.webPages.razor>
    <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc" />
    <pages pageBaseType="System.Web.Mvc.WebViewPage">
      <namespaces>
        <add namespace="System.Web.Mvc" />
        <add namespace="System.Web.Optimization" />
        <add namespace="System.Web.Routing" />
        <add namespace="SmartFormat" />
        <add namespace="Heaventure" />
        <add namespace="Heaventure.Resources" />
      </namespaces>
    </pages>
  </system.web.webPages.razor>
  <!-- ... -->
</configuration>

Inside the <namespaces> element, we can <add namespace="Heaventure.Resources"/>. Once we do, we can type Resources.AppName instead of Heaventure.Resources.AppName in our views.

Alright, that’s basic translation strings taken care of. Now let’s see how we can set our app’s current culture via routes.
.NET Culture

Let’s take a look at how .NET deals with locales, or cultures. .NET sets a culture per thread, so when we want to set or get the current culture, we need to do something like the following.

using System;
using System.Threading;
using System.Globalization;

class MainClass
{
    public static void Main(string[] args)
    {
        // Will print the current culture, which
        // depends on your system settings
        Console.WriteLine(Thread.CurrentThread.CurrentCulture);

        // Will print the current UI culture, which
        // depends on your system settings
        Console.WriteLine(Thread.CurrentThread.CurrentUICulture);

        var french = new CultureInfo("fr");
        Thread.CurrentThread.CurrentCulture = french;
        Thread.CurrentThread.CurrentUICulture = french;

        // Will print "fr"
        Console.WriteLine(Thread.CurrentThread.CurrentCulture);

        // Will print "fr"
        Console.WriteLine(Thread.CurrentThread.CurrentUICulture);
    }
}

CultureInfo is the class that defines culture in .NET. It contains a wealth of information about a given culture, including its name, currency format, calendar, and much more.

🔗 Resource » Check out the official .NET documentation for more information about CultureInfo.

The Difference Between Culture and UICulture

You may have wondered why we’re setting both CurrentCulture and CurrentUICulture in the code above. Well, the two properties are responsible for different things. CurrentUICulture deals with resource files (.resx), like the ones we created above. If we set CurrentUICulture to Arabic, for example, .NET will load the Resources.ar.resx file automatically. CurrentCulture deals with almost everything else when it comes to localization: formatting and parsing of values and sorting, among other things.

Setting our App’s Culture

Let’s get back to coding, and use the information we know about .NET cultures to set the culture in our app depending on a route parameter. This means that hitting a route like /en/Details/1 will load our app in the default, English culture.
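To make the CurrentCulture side of that distinction concrete, here is a small sketch (not from the original article) showing how the same value and date format differently under two cultures. The exact output depends on the .NET version's culture data, so treat the comments as typical results.

```
using System;
using System.Globalization;
using System.Threading;

class CultureFormattingDemo
{
    static void Main()
    {
        var value = 1234.56;
        var date = new DateTime(2020, 4, 22);

        // Formatting follows CurrentCulture, not CurrentUICulture
        Thread.CurrentThread.CurrentCulture = new CultureInfo("en-US");
        Console.WriteLine(value.ToString("C"));        // e.g. $1,234.56
        Console.WriteLine(date.ToShortDateString());   // e.g. 4/22/2020

        Thread.CurrentThread.CurrentCulture = new CultureInfo("de-DE");
        Console.WriteLine(value.ToString("C"));        // e.g. 1.234,56 €
        Console.WriteLine(date.ToShortDateString());   // e.g. 22.04.2020
    }
}
```

Note that none of this touches resource files; only CurrentUICulture controls which Resources.{culture}.resx is loaded.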
Hitting a route like /ar/Details/1 will load the same view in Arabic.

Localized Routes

We can configure routes like the ones we outlined above by updating our App_Start > RouteConfig.cs file.

using System.Web.Mvc;
using System.Web.Routing;

namespace Heaventure
{
    public class RouteConfig
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                name: "Root",
                url: "",
                defaults: new { controller = "Base", action = "RedirectToLocalized" }
            );

            routes.MapRoute(
                name: "Default",
                url: "{culture}/{controller}/{action}/{id}",
                defaults: new { culture = "en", controller = "Home", action = "Index", id = UrlParameter.Optional },
                constraints: new { culture = "en|ar" }
            );
        }
    }
}

Note that we’re redirecting our root route (/) to a localized route with our default culture (English in this case). To accomplish this, we’re setting a default culture value in our localized route. We’re also routing our root to the BaseController.RedirectToLocalized() action.

A Base Controller

Adding a base controller for all our other controllers gives us a place to put common behavior that may look awkward in other controllers, like the RedirectToLocalized() action. BaseController, of course, has to derive from .NET’s MVC Controller.

using System.Globalization;
using System.Threading;
using System.Web.Mvc;

namespace Heaventure.Controllers
{
    public class BaseController : Controller
    {
        public ActionResult RedirectToLocalized()
        {
            return RedirectPermanent("/en");
        }
    }
}

The BaseController is also a good place to set the app’s culture based on the current culture route parameter. To do this, we can override Controller‘s OnActionExecuting(), which is run before every action on the controller.
using System.Globalization;
using System.Threading;
using System.Web.Mvc;

namespace Heaventure.Controllers
{
    public class BaseController : Controller
    {
        protected override void OnActionExecuting(
            ActionExecutingContext filterContext)
        {
            // Grab the culture route parameter
            string culture =
                filterContext.RouteData.Values["culture"]?.ToString() ?? "en";

            // Set the action parameter just in case we didn't get one
            // from the route.
            filterContext.ActionParameters["culture"] = culture;

            var cultureInfo = CultureInfo.GetCultureInfo(culture);
            Thread.CurrentThread.CurrentCulture = cultureInfo;
            Thread.CurrentThread.CurrentUICulture = cultureInfo;

            // Because we've overwritten the ActionParameters, we
            // make sure we provide the override to the
            // base implementation.
            base.OnActionExecuting(filterContext);
        }

        public ActionResult RedirectToLocalized()
        {
            return RedirectPermanent("/en");
        }
    }
}

We yank the culture value out of the RouteData.Values dictionary, and use it to set our CurrentCulture and CurrentUICulture in our app. Now we can update our HomeController (and any other controller in our app) to derive from BaseController.

using Heaventure.Data;
using System.Web.Mvc;

namespace Heaventure.Controllers
{
    public class HomeController : BaseController
    {
        // ...
    }
}

With that in place, when we attempt to hit the root route (/), we’re redirected to /en. If we go to /ar, we can see our app name appearing in Arabic.

Our localized routes are now setting our app’s culture

🔗 Resource » We go into setting an app’s culture in more detail in our dedicated article How Do I Set Culture in an ASP.NET MVC App?

A Simple Language/Culture Switcher

Let’s provide our app’s users a simple dropdown to allow them to switch cultures using our new localized route system.
<ul class="nav navbar-nav navbar-right">
    <li class="dropdown">
        <a href="#" class="dropdown-toggle" data-toggle="dropdown">
            @System.Threading.Thread.CurrentThread.CurrentCulture.EnglishName
            <span class="caret"></span>
        </a>
        <ul class="dropdown-menu">
            <li><a href="/en">English</a></li>
            <li><a href="/ar">Arabic</a></li>
        </ul>
    </li>
</ul>

We’re using a Bootstrap .dropdown here, and we’re wrapping it in a .navbar-right so we can embed it in our main _Layout.cshtml.

<!DOCTYPE html>
<html>
<!-- ... -->
<body>
    <div class="navbar navbar-inverse navbar-fixed-top">
        <div class="container">
            @Html.ActionLink(
                Resources.AppName,
                "Index", "Home",
                new { area = "" },
                new { @class = "navbar-brand" })
            <ul class="nav navbar-nav">
                <li>@Html.ActionLink("Home", "Index", "Home")</li>
            </ul>
            @Html.Partial("_CultureSwitcher")
        </div>
    </div>
    <!-- ... -->
</body>
</html>

Now, when we run the app, we have a working language switcher.

Clicking a language takes you to its localized route

🔗 Resource » Grab all the code for the demo app we built here from the app’s GitHub repo.

And with that in place, we have working globalization in our app!

Conclusion

We hope you’ve enjoyed this little adventure into ASP.NET MVC i18n. Globalization can be a lot of work, but it doesn’t have to be a pain in the neck. Imagine that you can run a CLI command, and your resource files are automatically sent to translators. When the translators are done working with the resource files in a beautiful web UI, they can save them, and you can sync them back to your project. This and more is possible with Phrase. Built by developers for developers, Phrase is a battle-tested localization platform with a developer CLI and API. Featuring GitHub, GitLab, and Bitbucket sync, Phrase takes care of the i18n plumbing to allow you to focus on the creative code you love. Check out all of Phrase’s features, and sign up for a free 14-day trial.
https://phrase.com/blog/posts/getting-started-with-asp-net-mvc-i18n/
The out method parameter keyword on a method parameter causes a method to refer to the same variable that was passed into the method. Any changes made to the parameter in the method will be reflected in that variable when control passes back to the calling method.

Declaring an out method is useful when you want a method to return multiple values. A method that uses an out parameter can still return a value. A method can have more than one out parameter.

To use an out parameter, the argument must explicitly be passed to the method as an out argument. The value of an out argument will not be passed to the out parameter.

A variable passed as an out argument need not be initialized. However, the out parameter must be assigned a value before the method returns.

A property is not a variable and cannot be passed as an out parameter.

An overload will occur if declarations of two methods differ only in their use of out. However, it is not possible to define an overload that only differs by ref and out. For example, the following overload declarations are valid:

class MyClass
{
    public void MyMethod(int i) { i = 10; }
    public void MyMethod(out int i) { i = 10; }
}

while the following overload declarations are invalid:

class MyClass
{
    public void MyMethod(out int i) { i = 10; }
    public void MyMethod(ref int i) { i = 10; }
}

For information on passing an array, see Passing Arrays Using ref and out.

// cs_out.cs
using System;

public class MyClass
{
    public static int TestOut(out char i)
    {
        i = 'b';
        return -1;
    }

    public static void Main()
    {
        char i; // variable need not be initialized
        Console.WriteLine(TestOut(out i));
        Console.WriteLine(i);
    }
}

Output:

-1
b

See also: C# Keywords | Method Parameters
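The "return multiple values" point is worth a short illustration. The following sketch is not from the reference page; the TryDivide method and its names are my own, modeled on the familiar TryParse pattern.

```
using System;

class OutDemo
{
    // A method can return several results through out parameters
    // while still using its return value for a success flag.
    static bool TryDivide(int dividend, int divisor, out int quotient, out int remainder)
    {
        if (divisor == 0)
        {
            // out parameters must be assigned before the method returns
            quotient = 0;
            remainder = 0;
            return false;
        }
        quotient = dividend / divisor;
        remainder = dividend % divisor;
        return true;
    }

    static void Main()
    {
        int q, r; // need not be initialized before the out call
        if (TryDivide(17, 5, out q, out r))
            Console.WriteLine("17 = 5*{0} + {1}", q, r); // 17 = 5*3 + 2
    }
}
```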
http://msdn.microsoft.com/en-us/library/t3c3bfhx(VS.71).aspx
Description

Guys, today we are going to interface a seven-segment display with our LPC2148. First we will go into some basic information about the seven-segment display. It's so called because it has seven segments. Each segment has an LED and a separate pin to control it, as seen in the figure below.

IODIR0 |= 0xff;  // set Port0.0 to Port0.7 as output
IOSET0 |= 0x06;  // drive segments 'b' and 'c' high to display 1

and so on. I'll use timer0 to create a delay of 500 ms between two consecutive digits.

CODE

#include <lpc214x.h>
#include "pll.h"
#include "timer0.h"

char display[] = {0x3f,0x06,0x5b,0x4f,0x66,0x6d,0x7c,0x07,0x7f,0x67};

int main ()
{
    PINSEL0 = 0;          // configure PINSEL for normal I/O operation
    IODIR0 |= 0xff;       // set Port0.0 to Port0.7 as output
    pll_init ();          // initialise the PLL
    VPBDIV = 0x00;        // set pclk = 15 MHz
    timer0_init (15000);  // initialise timer0 with 1 ms as the time base
    while (1)
    {
        for (int i = 0; i < 10; i++)
        {
            IOSET0 |= display[i];  // display digits 0 to 9
            timer0_delay (500);    // delay of 500 ms
            IOCLR0 |= display[i];  // clear the segment pins
        }
    }
}

Result

DOWNLOAD

You can download the CODE below.

Well, the code works perfectly. But can you show how to interface a 14-segment display?
https://controllerstech.com/interfacing-seven-segment-display-with-lpc2148/
I am part of a team that is upgrading from Alfresco 3.3 to 5.2. At the moment we are having a go at just connecting Alfresco 5.2 to an Oracle database and getting it to work from our application. I loaded a custom model via the Model Manager. I tried this when using the default PostgreSQL database and everything worked well. But when I switched to the Oracle database, I don't see the model details in the ALF_QNAME and ALF_NAMESPACE tables. Do I need to do anything special for Oracle?

Entries in alf_qname and alf_namespace are only created if a namespace/qname is actually being used or referenced by another entity that has some sort of "qname_id" column in its table, e.g. when you create a node of a type from that model, or apply an aspect. Just the act of defining the model does not automatically create those data entries.
https://community.alfresco.com/thread/232230-i-am-conecting-alfresco-52-enterprise-trial-version-to-an-oracle-database-when-i-created-a-custom-model-i-expected-to-see-records-in-alfnamespace-and-alfqname-tables-just-as-it-happens-with-the-postgress-database-but-i-dont-see-this-i-loaded-my-mod
> - The available bug ID's should be fetched automatically without any user
> intervention. Then, they should show up like the autocompletion while
> the user is typing --> a special char has to be defined to trigger it.
> How to define that trigger char? Separate property? Regex?

Defined by a regex that can be swapped in and out. A user could type

--- closes #1, fixes #5 ---

Autocomplete will fire upon a regex that sniffs 'closes' & 'fixes' - other
terms could also be used - just list them in the regex.

> whole user name thing to pull the users 'tickets'

I have to say that this would be an AWESOME addition - but hell it would be
complex! I think that it is the job of the issue tracker to supply a
'report' which can be called upon - how?

What comes to mind is the idea of an 'issue-query abstraction layer' -
follow the class factory pattern. *You* would define an interface that an
'issue-implementation' class would implement, and code the class factory.
And *we* can code for a specific issue tracker - supply a 'report' &
issue-implementation class that will turn that report into something
useful for TSVN. That would take all the pressure off you to code up for
every single issue tracker out there.

<C# pseudo code>

/* interface that defines what we are going to do with issues */
public interface IIssueQuery
{
    Issue[] GetUsersIssues(string user);
    Issue[] GetUsersIssues(string user, string password);
}

/* class that defines an issue */
public class Issue {...}

/* an issue factory class that calls the appropriate
 * issue-implementation class */
public class IssueQueryFactory : IIssueQuery
{
    public Issue[] GetUsersIssues(...)
    {
        /* code to find out the type of
         * 'issue-implementation' class
         * we're using, make the call, and
         * return an array of issues */
    }
}

/* a set of issue-implementation classes that
 * can talk to a specific issue tracker */
class TracIssueQuery : IIssueQuery
{
    public Issue[] GetUsersIssues(...)
    {
        /* code that will talk to the Trac issue tracker,
         * maybe run a specific report to pull the information */
    }
}

class ZillaIssueQuery : IIssueQuery {...}

</C# pseudo code>

---------------------------------------------------------------------------
Steven Higgan - 3rd Year B.I.T. Otago Polytechnic, Dunedin New Zealand
.net geek
---------------------------------------------------------------------.
http://svn.haxx.se/tsvn/archive-2005-02/0809.shtml
Type: Posts; User: 330xi in different places of code: typedef void (*toFileWrite)(vector<int>, const char *); [...] toFileWrite _toFileWriteFunc; [...] _toFileWriteFunc = (toFileWrite)GetProcAddress(hInstLibrary,... Oh, thanks a lot for such detailed answer, Paul! It really helped. At least directed me for many other interesting articles. As for my application and dll I have reorganised them in the other way,... Hi! I make a dll and its using by this tutorial. I make absolutely the same files with the same code. Ok. It works just fine. Now I add my function with #include vector, using namespace std and... Resolved by checking processes monikers in ROT, then getting their IDispatch and then transforming it to docvariable. I got the wanted result: I start to send messages as if this keys have been pressed out of mousewheel handler instead of calling their handlers.. For perfoming such operation I used Method QueryInterface: m_pDocDisp->QueryInterface(__uuidof( Word::_Application), (void **)&myApp); Where myApp is _ApplicationPtr m_pDocDisp is ... Hi! I have a Moniker from ROT that points on desirable process. And I have an _ApplicationPtr var to perfom some actions with running application. HOWTO make my _ApplicationPtr var to be... oh, thanks!) At least I know the general form of my problem now. I'm sure it helps me to find other people experience in solving it and explore it better. I do. May be later I will come back to this problem. The counting was that somebody already had such symptoms in his experience and the disease had been eliminated. Hi! I want to have some data exchange between my application and ms word. I start my work basing on this article. But there is no info or examples how to perform a quiet an ordinary action: to... my function connected with a plenty of initialization commands and variables that exist only in OnCreate. I've tried but it cause a tree of run-time errors. But on this exact topic, if in next 12... 
Have the same "bug fixing" And i also need to simulate AfxMessageBox "Tell us what the problem is" Okay In OnCreate of MainFrame I have a function call that connected with two other custom... I do exactly like that: I try to call my keypress handlers (OnDemokey(); OnMinuskey() ) in wheel handler (OnMouseWheel), problem is that this call unexpectedly repeats many-many times until program... Very often, the element between left and right mouse button is a kind of wheel. You can push or you can roll it forward or backward. The second kind of movement people perform when they want to move... I want to run soem code by rolling a mouse wheel. MFC, doc/view In Mainframe header: afx_msg BOOL OnMouseWheel (UINT nFlags, short zDelta, CPoint pt); In source message map: I think it is easier, but I don't want user to lose any "very important" information that will be in clipboard in "that" moment for sure) Hi! I want to add a picture to my opened word file. But not from file. I have some picture on a DC in my program and want it directly to Word document. Does anybody know how? At least may be some... Oh, thanks for this note! I've read msdn. How could I be so inattentive?..( Solved: void CMainFrame::Hide() { this->SetMenu(NULL); } void CMainFrame::Revoke() { Hi! I want to hide a standard main menu toolbar of my MFC doc/view application. Howto? I tried this ShowControlBar(&m_wndToolBar,TRUE,FALSE); - no reaction thanks in advance. One of my variants was like this. The view class has a variable of some custom class with many methods of processing image - RXPicture. With next methods I save picture to disk, and one of them... Hi! In my MFC doc/view programm I have two classes with their windows. The first one is view, the second is a dialog. The view is constantly displaying image form camera. I want the second window,...
http://forums.codeguru.com/search.php?s=460cdf5c82caa02d3d474e72d93cf2bc&searchid=4875501
A Stack is a Last In First Out (LIFO) data structure. In this tutorial, we will be discussing the Stack class in Java, the methods of the Stack class, how to create a Java stack, and a stack implementation with an example. Go through the direct links and learn the Java Stack class thoroughly.

This Stack Class in Java Tutorial Contains:
- Java Stack Class
- Declaring a Stack in Java
- Interfaces Implemented in Stack Declaration
- Stack Class Constructor
- Creating a Stack Class in Java
- Java Stack Methods
- Stack Implementation
- Example on Stack Class in Java

Java Stack Class

Stack in Java is a class present in the java.util package. A stack is a child class of Vector that implements the standard last-in, first-out (LIFO) stack data structure. It defines only the default constructor, which is used to create an empty stack. The Stack class includes all the methods defined by Vector and adds several of its own.

To put an element at the top of the stack we can use the push() method. To remove the top element we can use the pop() method. An EmptyStackException is thrown if you call the pop() method and the invoking stack is empty.

Also Refer:

Declaring a Stack in Java

public class Stack<E> extends Vector<E>

Interfaces Implemented in Stack Declaration

- Serializable: A marker interface that classes must implement if their objects are to be serialized and deserialized.
- Cloneable: An interface in Java that needs to be implemented by a class to allow its objects to be cloned.
- List<E>: The List interface gives a way to store an ordered collection. List is also a child interface of Collection.
- Iterable<E>: The Iterable interface specifies a collection of objects that can be iterated over.
- RandomAccess: A marker interface used by List implementations to show that they support fast (generally constant-time) random access.
- Collection<E>: A Collection describes a group of objects known as its elements. The Collection interface is used to pass around collections of objects where maximum generality is desired.

Stack Class Constructor

The Stack class includes only the default constructor, which creates an empty stack. That is, as follows:

public Stack()

Creating a Stack Class in Java

If you don't know how to create a stack in Java, this section will help you a lot. Here, we discuss creating a Java Stack object. To create a stack, it is necessary to import the java.util.Stack package. Follow the below syntax after importing the package.

Stack<Type> stacks = new Stack<>();

Here, Type indicates the stack's type. For instance,

// Create Integer type stack
Stack<Integer> stacks = new Stack<>();

// Create String type stack
Stack<String> stacks = new Stack<>();

Methods of the Stack Class in Java

1. Object push(Object element): Pushes an element onto the top of the stack.
2. Object pop(): Removes and returns the element at the top of the stack.
3. Object peek(): Returns the element at the top of the stack without removing it.
4. boolean empty(): Checks whether the stack is empty. It returns true if the stack is empty, and false otherwise.
5. int search(Object element): Searches whether a particular element is present in the stack. If the element is found, it returns its 1-based position from the top of the stack; otherwise it returns -1.

Stack Implementation

In a stack, elements are stored and retrieved in Last In First Out manner. In other words, elements are inserted at the top of the stack and removed from the top of the stack.
Example on Stack Class in Java

import java.util.*;

class Person {
    // push an element onto the top of the stack
    static void showPush(Stack<Integer> st, int a) {
        st.push(a);
        System.out.println("Push(" + a + ")");
        System.out.println("Stack: " + st);
    }

    // pop the element from the top of the stack
    static void showPop(Stack<Integer> st) {
        System.out.print("Pop -> ");
        Integer a = st.pop();
        System.out.println(a);
        System.out.println("Stack: " + st);
    }

    // display the top element of the stack
    static void showPeek(Stack<Integer> st) {
        Integer b = st.peek();
        System.out.println("Element at the top of the stack is: " + b);
    }

    // search for an element in the stack
    static void showSearch(Stack<Integer> st, int element) {
        int position = st.search(element);
        if (position == -1)
            System.out.println("Element not found");
        else
            System.out.println("Element is found at position " + position);
    }

    public static void main(String args[]) {
        Stack<Integer> st = new Stack<>();
        System.out.println("Stack: " + st);
        showPush(st, 10);
        showPush(st, 20);
        showPush(st, 30);
        showPush(st, 40);
        showPop(st);
        showPeek(st);
        showSearch(st, 20);
        showPop(st);
        showPop(st);
        showPop(st);
        try {
            showPop(st);
        } catch (EmptyStackException e) {
            System.out.println("Empty Stack");
        }
    }
}

Output:
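One side note worth adding: the Stack class's own Javadoc recommends the Deque interface (e.g. ArrayDeque) for new code, since it offers a more complete and consistent set of LIFO operations without Vector's synchronization overhead. A short sketch of the same push/pop/peek flow with a Deque:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DequeStackDemo {
    public static void main(String[] args) {
        // ArrayDeque used as a stack: push/pop/peek work on the head
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(10);
        stack.push(20);
        stack.push(30);

        System.out.println(stack.peek());    // 30 - last in, first out
        System.out.println(stack.pop());     // 30
        System.out.println(stack.pop());     // 20
        System.out.println(stack.isEmpty()); // false - 10 is still on the stack
    }
}
```

Unlike Stack, ArrayDeque throws NoSuchElementException (not EmptyStackException) when you pop from an empty deque, so the error-handling code differs slightly.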
https://btechgeeks.com/stack-class-in-java-with-example/
Raising questions

Description: Design an LRU (least recently used) cache structure. The size of the structure is determined during construction; assuming that the size is k, it has the following two functions:

set(key, value): insert a record (key, value) into the structure
get(key): return the value corresponding to the key

Basic analysis

LRU is a very common page replacement algorithm. Put colloquially, LRU means: when some data has to be evicted (usually because the capacity is full), select the data that has not been used for the longest time and evict it.

Let's implement a fixed-capacity LRUCache. If the container is found to be full when inserting data, first evict one entry according to the LRU rule, then insert the new data. Both "insert" and "query" count as one "use".

It can be understood through a case. Assume we have an LRUCache with capacity 2, and the key-value pairs [1-1, 2-2, 3-3] are inserted and queried in order:

- Insert 1-1; the most recently used data is 1-1: [(1,1)]
- Insert 2-2; the most recently used data becomes 2-2: [(2,2), (1,1)]
- Query 1-1; the most recently used data is 1-1: [(1,1), (2,2)]
- Insert 3-3. Since the container has reached its capacity, an existing entry must be evicted before inserting. 2-2 is evicted, and 3-3 becomes the most recently used data: [(3,3), (1,1)]

For key-value storage, we can use a hash table so that the complexity of both insertion and query is O(1). In addition, we need to maintain an extra "order of use" sequence.
We expect that when "new data is inserted" or "key value pair query occurs", the current key value pair can be placed at the head of the sequence. In this way, when LRU elimination is triggered, we only need to delete the data from the tail of the sequence. It is expected that if the [external chain image transfer fails, the source station may have an anti-theft chain mechanism. It is recommended to save the image and upload it directly (img-i6ot4sk7-1637490443904)( (1) ] & preview = true) to adjust the position of a node in the sequence within the complexity, it is natural to think of a two-way linked list. Bidirectional linked list Specifically, we use the hash table to store "Key Value pairs". The Key of the Key Value pair is used as the Key of the hash table, while the Value of the hash table uses our own encapsulated Node class, and the Node is also used as the Node of the two-way linked list. - Insert: check whether the current key value pair already exists in the hash table: - If it exists, update the key value pair and adjust the Node corresponding to the current key value pair to the head of the linked list (addHead operation) - If not, check whether the hash table capacity has reached the capacity: - Capacity not reached: insert the hash table and adjust the Node corresponding to the current key value pair to the remove part of the chain header (addHead operation) - Reached capacity: first find the element to be deleted from the tail of the linked list for deletion (remove operation), then insert the hash table, and adjust the Node corresponding to the current key value pair to the head of the linked list (addHead operation) - Query: if the Key is not found in the hash table, it will directly return [external chain picture transfer failed. The source station may have anti-theft chain mechanism. 
It is recommended to save the picture and upload it directly (img-s2mbaizx-1637490443908)( -1&preview=true)]; If the Key exists, return the corresponding value and adjust the Node corresponding to the current Key value pair to the linked list header (addHead operation) Some details: - In order to reduce the "empty judgment" operation of the left and right nodes of the two-way linked list, we create two "sentinel" nodes head and tail in advance. package com.bugchen.niuke.excirse.list; import java.util.HashMap; import java.util.Map; /** * The LRU (least recently used) cache structure is designed. The size of the structure is determined during construction, assuming that the size is k, and has the following two functions * 1. set(key, value): Insert the record (key, value) into the structure * 2. get(key): Returns the value corresponding to the key * <p> * Tips: * 1.Once the set or get operation of a key occurs, it is considered that the record of the key has become the most commonly used, and then the cache will be refreshed. * 2.When the size of the cache exceeds k, the least frequently used records are removed. * 3.Enter a two-dimensional array and k. each dimension of the two-dimensional array has 2 or 3 numbers. The first number is opt, and the second and third numbers are key and value * If opt=1, the next two integers, key, value, represent set(key, value) * If opt=2, the next integer key indicates get(key). 
If the key does not appear or has been removed, it returns - 1 * For each opt=2, output an answer * 4.In order to distinguish the key and value in the cache, the key in the cache described below is wrapped with "" number * <p> * Requirement: the complexity of set and get operations is O(1)O(1) * * @Author:BugChen * @Description:hashMap+Two way linked list * @Date: 2021-11-21 * @Method:get() put() */ public class LRUCache { //A two-way linked list is needed to maintain the order of the most recently updated nodes //Establish mapping relationship between hashMap and bidirectional linked list //The head of the two-way linked list is inserted and the tail is deleted to change the position of the update node in the two-way linked list //There are two sentinel nodes in the two-way linked list, that is, the head node and the tail node, so as to reduce empty judgment //1. Bidirectional linked list (maintain the order of nodes, mainly the order after update) private class Node { int key; int value; Node pre;//Precursor node Node next;//Successor node Node(int key, int value) {//Initialize bidirectional linked list this.key = key; this.value = value; } } //2. Maximum capacity of LRUCache private int n; //3. Two sentinel nodes private Node head; private Node tail; //4. Hash table Map<Integer, Node> lruMap; //Initialization of LRUCache (lru cache) public LRUCache(int capacity) { this.n = capacity; this.head = new Node(-1, -1); this.tail = new Node(-1, -1); this.head.next = this.tail; this.tail.pre = this.head; this.lruMap = new HashMap<>(); } //5. 
get and put methods of LRUCache public int get(int key) { Node node = null; //There are two cases: there is a corresponding node in lruMap and there is no corresponding node in lruMap if (lruMap.containsKey(key)) { //If it exists, skip the position of the node in the bidirectional linked list to the first position node = lruMap.get(key); addHead(node); return node.value; } return -1; } public void put(int key, int value) { Node node = null; //put can be divided into three types: normal join, repeated join and full join if (lruMap.containsKey(key)) {//It means to join repeatedly, get the current node, and modify the value value node = lruMap.get(key); node.value = value; } else { if (lruMap.size() == n) {//Indicates that it is full. At this time, you need to delete the tail node of the two-way linked list Node del = tail.pre; lruMap.remove(del.key);//The corresponding mapping also needs to be deleted remove(del); } node = new Node(key, value); lruMap.put(key, node); } addHead(node); } //addHead() method: put the most recently operated node in the first position of the two-way linked list (header insertion) private void addHead(Node node) { //It is mainly divided into two steps: //The first step is to delete the current node from the two-way linked list //Then insert the node into the head node remove(node); //Note: head and tail are always sentinel nodes; head.next is the head node of the useful node node.next = head.next; node.pre = head; head.next.pre = node; head.next = node; } //remove() method: delete the node in the bidirectional linked list private void remove(Node node) { //delete: removes the current node from the bidirectional linked list //Since we create two sentinels head and tail in advance, if node.pre is not empty, it means that the node itself exists in the two-way linked list (not a new node) //The nodes to be deleted need to be in the linked list, otherwise they do not need to be deleted if (node.pre != null) {//This indicates that the node exists 
in the bidirectional linked list Node pre = node.pre;//The nodes to be operated need to be saved pre.next = node.next; node.next.pre = pre; } } } Test: package com.bugchen.niuke.excirse.list; import java.util.ArrayList; import java.util.Arrays; import java.util.List; public class LRUSolution { /** * lru design * * @param operators int Integer two-dimensional array the ops * @param k int Integer the k * @return int Integer one-dimensional array */ public int[] LRU(int[][] operators, int k) { List<Integer> list = new ArrayList<>();//Store results LRUCache lru = new LRUCache(k);//Initialize LRU cache for (int[] op : operators) {//Traversing a two-dimensional array int type = op[0];//Gets the operand, 1 for set and 2 for get if (type == 1) { // set(k,v) operation lru.put(op[1], op[2]); } else { // get(k) operation list.add(lru.get(op[1])); } } int n = list.size(); int[] ans = new int[n];//Return results for (int i = 0; i < n; i++) ans[i] = list.get(i); return ans; } public static void main(String[] args) { LRUSolution lruSolution = new LRUSolution(); int[][] operators = new int[][]{{1,1,1},{1,2,2},{1,3,2},{2,1},{1,4,4},{2,2}}; System.out.println(Arrays.toString(lruSolution.LRU(operators,3))); } }
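As an aside (not part of the original article): the JDK itself ships a hash-map-plus-linked-list combination, java.util.LinkedHashMap, which reproduces this behaviour when constructed with accessOrder = true and with removeEldestEntry overridden. A minimal sketch (the class name SimpleLruCache is illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache built on LinkedHashMap's access-order mode.
// The three-argument constructor enables access ordering, and the
// removeEldestEntry hook evicts the least recently used entry once
// the capacity is exceeded.
class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    SimpleLruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }

    public static void main(String[] args) {
        SimpleLruCache<Integer, Integer> cache = new SimpleLruCache<>(2);
        cache.put(1, 1);
        cache.put(2, 2);
        cache.get(1);    // touch 1, so key 2 becomes least recently used
        cache.put(3, 3); // evicts key 2
        System.out.println(cache.keySet()); // → [1, 3]
    }
}
```

The hand-written version above is still worth knowing for interviews, since the exercise usually asks you to build the doubly linked list yourself.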
https://programmer.help/blogs/lru-algorithm-design.html
WiFi stable at home network, unstable at work network

Hi

I've recently experienced some problems with an unstable WiFi connection, and managed to narrow down the issue. When I run the code (connect to WiFi and an MQTT server, then publish) on my home network I have no stability issues; however, when I am connected to the guest network at my workplace I experience instability.

WiFi setup:

```python
from network import WLAN

wlan = WLAN(mode=WLAN.STA)
wlan.connect("my-wifi-name", auth=(WLAN.WPA2, "my-wifi-pw"), timeout=5000)
```

Below is a graph showing ping responses (pinged every x seconds), where the broken lines illustrate a timeout. I'm clueless; has anyone else experienced similar issues? Do I have to set up the WiFi connection differently on a workplace network (which is usually more secure)? Thanks.

I have encountered similar problems with a WiPy connecting to an MQTT broker. The connection was perfect at my home office in a rural setting; however, it was not so good when tested at a bay in an industrial area. The signal was good at -40, but neighbours were transmitting on the same channel with a hidden SSID. The noise floor exceeded my signal. A spectrum analyzer would be helpful, or even an app on your mobile could help track the problem down. You may only need to change the channel on your router. In my case it was not a WiPy problem.

@jcaron if I remember correctly I get values between -30 and -60, depending on whether I use the external or internal antenna. The office building is located in a small village, and my home network is ~500 m away. We also have a Pycom device (WiPy) inside a mountain (power plant), with limited 2.4 GHz traffic, experiencing the same problem (connected to the same guest network). I have not tested another device that uses only the 2.4 GHz band yet, but I can test tomorrow and see.

@Asb if you use WLAN.scan(), what kind of RSSI are you seeing for that network?
Also, is the radio environment very busy (office building in the middle of a city with lots of people streaming all day over WiFi, for instance)? The 2.4 GHz band tends to be very very crowded in some areas, and the ESP32 only supports that band, which could explain bad performance and packet loss. What kind of performance are you seeing on other devices using only the 2.4 GHz band in the same place?
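As a side note (not from the thread): when comparing wlan.scan() readings between the two sites, it can help to bucket RSSI values with a rough rule of thumb. The thresholds below are common conventions for 2.4 GHz WiFi, not Pycom-specified values, and classify_rssi is a hypothetical helper name:

```python
def classify_rssi(rssi_dbm):
    """Rough quality buckets for WiFi RSSI readings in dBm.

    The thresholds are conventional rules of thumb, not vendor figures:
    stronger (closer to 0) is better.
    """
    if rssi_dbm >= -50:
        return "excellent"
    if rssi_dbm >= -60:
        return "good"
    if rssi_dbm >= -70:
        return "fair"
    return "weak"

# On a Pycom board you might feed it scan results, e.g. (illustrative):
# for net in wlan.scan():
#     print(net.ssid, net.rssi, classify_rssi(net.rssi))
```

A site where readings hover around -40 but pings still time out would point at interference or congestion rather than raw signal strength, which matches the advice in the thread.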
https://forum.pycom.io/topic/4913/wifi-stable-at-home-network-unstable-at-work-network
Storing a part of the hierarchy

@zipit, I got the code from another thread: And if you change subBc[2000] = "world!" to subBc[2000] = op it also works! I tried subBc.SetLink(2000, newDoc) and indeed it does not work. It returns None on reading back the subcontainer after storing the file.

@zipit, yes I know that functionality (and use it often). However, although it is hidden, it still consumes CPU power. We like to minimize CPU power by converting a node using csto, hiding the node, and disabling generators and deformers in the node.

Hi,

> subBc[2000] = "world!" to subBc[2000] = op

That is expected, as op is also a C4DAtom.

> it also works! I tried subBc.SetLink(2000, newDoc) and indeed it does not work. It returns None on reading back the subcontainer after storing the file.

The method SetLink() does not have a (boolean) return type. So this is to be expected too. I am not quite sure if you got my point, so here is a modified version of your first script - which you assume to be "working" as it printed back your document. I did modify it in such a way that it does not work any more - I am really good at this specific task ;). I hope the code and the comments make more clear what I am trying to convey: that you are not storing your document, but a link to it.

```python
import c4d

PLUGIN_ID = 1234567
MyUniqueId = 456789

def main():
    #retrieves the document baseContainer
    docBC = doc.GetDataInstance()
    sel = doc.GetSelection()  # !!! only select the parent, not the children
    newDoc = c4d.documents.IsolateObjects(doc, sel)

    #create a sub-BaseContainer
    subBc = c4d.BaseContainer()
    subBc[1000] = "hello"
    subBc[2000] = newDoc

    # Get the type of the ID we set
    print "ID 2000 is a link", subBc.GetType(2000) == c4d.DA_ALIASLINK

    # do the same thing in green. That SetLink() returns None
    # is expected as it has no return value (which is implicitly
    # None in Python)
    subBc.SetLink(2000, newDoc)
    print "ID 2000 is a link", subBc.GetType(2000) == c4d.DA_ALIASLINK

    # Add the container to the "main" Container
    docBC.SetContainer(MyUniqueId, subBc)
    # Updates the document container
    doc.SetData(docBC)

    # Print the values stored in our container.
    for cid, value in doc.GetDataInstance().GetContainer(MyUniqueId):
        print cid, value

    print "\nDeleting newDoc...\n"

    # This part is a bit dodgy since Pythons garbage collector cannot
    # be trusted. But what I am trying to show is that you stored
    # a reference, not an object. For me it works here, but in some cases
    # an object can linger even after marking it for garbage collection.
    newDoc.Flush()
    del(newDoc)

    # Print the values stored in our container.
    for cid, value in doc.GetDataInstance().GetContainer(MyUniqueId):
        print cid, value

if __name__=='__main__':
    main()
```

this should print out something like:

```
ID 2000 is a link True
ID 2000 is a link True
1000 hello
2000 <c4d.documents.BaseDocument object called '' with ID 110059 at 0x00000135BC7F9030>

Deleting newDoc...

1000 hello
2000 None
>>>
```

Cheers,
zipit

Thanks for the explanation. Two questions:
- the line <c4d.documents.BaseDocument object called '' with ID 110059 at 0x00000135BC7F9030> always shows a strange name ''? Why is it not showing the name of the document?
- Back to the main question: what to do to store the node in the scene file?

- m_magalhaes last edited by m_magalhaes

hello,

1 - what kind of strange name?

2 - you can store the data itself in the tag and then, with the read and write functions, store it in the scene file. I'm using a file on disk in this example, but that should work with the HyperFile link provided by the read and write functions. Be aware that HyperFile.WriteMemory is storing byte sequences that will be platform dependent.
```python
import c4d
from c4d import gui
# Welcome to the world of Python
import os

# Main function
def main():
    path = c4d.storage.LoadDialog(c4d.FILESELECTTYPE_SCENES, flags=c4d.FILESELECT_DIRECTORY)
    path = os.path.join(path, "prout.txt")

    # using a MemoryFileStructure to store a document
    mfs = c4d.storage.MemoryFileStruct()
    mfs.SetMemoryWriteMode()

    newdoc = c4d.documents.IsolateObjects(doc, [op])

    # Save the document to the MemoryFileStructure
    c4d.documents.SaveDocument(newdoc, mfs, c4d.SAVEDOCUMENTFLAGS_NONE, c4d.FORMAT_C4DEXPORT)

    # Retrieve the data and store it somewhere, could be self.myData
    myData = mfs.GetData()

    # Save the data to a hyperfile
    myFile = c4d.storage.HyperFile()
    if not myFile.Open(0, path, c4d.FILEOPEN_WRITE, c4d.FILEDIALOG_NONE):
        raise RuntimeError("Failed to open the HyperFile in write mode.")

    # Store the size
    myFile.WriteInt32(myData[1])
    # Store the data itself
    if myFile.WriteMemory(myData[0]) is False:
        raise ValueError("can't write the file")
    myFile.Close()

    # Read the data from the HF
    if not myFile.Open(0, path, c4d.FILEOPEN_READ, c4d.FILEDIALOG_NONE):
        raise RuntimeError("Failed to open the HyperFile in read mode.")
    size = myFile.ReadInt32()
    data = myFile.ReadMemory()
    myFile.Close()

    # Set the MFS
    mfs2 = c4d.storage.MemoryFileStruct()
    mfs2.SetMemoryReadMode(data, size)
    c4d.documents.MergeDocument(doc, mfs2, c4d.SCENEFILTER_OBJECTS)
    c4d.EventAdd()

# Execute main()
if __name__=='__main__':
    main()
```

Once again, be aware of where you are modifying the scene (on the main thread and not elsewhere).

by the way, are you going to use c++ or python at the end? (just for the tags of this thread)

Cheers,
Manuel

Great, thank you very much. I am beginning to see the light. And yes, my fault, apparently I indicated it as c++, but I am doing it in Python. About your warning: my plan was to do it all in a tag plugin. There I can have the interface, do the isolate and the read/write in a hf. Or is it better to do it in an Object plugin?
Hi,

@pim said in Storing a part of the hierarchy:

> the line <c4d.documents.BaseDocument object called '' with ID 110059 at 0x00000135BC7F9030> always shows a strange name ''? Why is it not showing the name of the document?

Not quite sure what you do mean by that:
- The fact that it prints the empty string for the document name is probably because you run the script on an unsaved document. This whole untitled_x.c4d stuff you see in the c4d app is all smoke and daggers. An unsaved document will return the empty string for GetName().
- If you mean how objects are printed out, this is because of the way it is convention to implement __repr__() for an object. I found that obsession with memory addresses also always quite bizarre, but, hey, everyone is doing it ;)

Cheers,
zipit

@pim said in Storing a part of the hierarchy:

> My plan was to do it all in a tag plugin. There I can have the interface, do the isolate and the read/write in a hf. Or is it better to do it in an Object plugin?

Hi,

it does not really matter what kind of NodeData you use; what @m_magalhaes meant was that you should be careful with the threaded context of methods like Execute() in TagData or GetVirtualObjects() in ObjectData. When you execute your code from a method that is executed from the main thread, you are fine (e.g. NodeData.Message()). You can always check there with c4d.threading.GeIsMainThread() if you are in the main thread, to be extra sure.

Cheers,
zipit

- m_magalhaes last edited by m_magalhaes

An ObjectData (generator) would be a bit better for that. But you are exploring possibilities; that's also the way to find new solutions / workflows. About my warning, it's just a reminder: you will probably have to send messages and use a MessageData to react to them in order to do some actions on the main thread.

Cheers,
Manuel

Thanks for all the support. I will use all the knowledge gained and start testing. I am sure I will be back with more questions.
-Pim
https://plugincafe.maxon.net/topic/11856/storing-a-part-of-the-hierarchy/20
Hi,

In the article from 2010, Tomáš was describing the limitations on WP7: ... Are these two addressed in the latest version of IronRuby for WP7? Thanks!

on 2011-12-11 17:38

on 2011-12-11 20:30

This still is not supported in IronRuby, as the version of the CLR on the Windows Phone (at least last I checked) was really the Compact Framework, which does not support any types usually found in the full .NET Framework's System.Reflection.Emit namespace. Therefore, IronRuby cannot emit the correct backing class. The ref params issue is also a CF limitation. We were thinking of doing some static analysis to generate C# code which, when compiled, would produce the required static types, and then know to use those as backing types, but any static analysis solution would not always work in a dynamic language. I haven't used the WP7 dev tools in a while, so I'd be curious if the Ref.Emit limitation still exists in Mango, but I suspect it is still an issue.

~Jimmy

on 2011-12-20 10:11

Hi James! System.Reflection.Emit is available in Mango, partially... DynamicMethod, ILGenerator and OpCodes look like they are fully supported. What do you think? Jurassic, for example, has made its way onto WP7.

on 2012-01-04 05:20

I
https://www.ruby-forum.com/topic/3193816
here is my code:

```cpp
#include <iostream>
#include <fstream>
#include <string>
using namespace std;

void displayjulianDates(int, int, int, int, int, int);

int main()
{
    //Declare variables
    int month1;
    int day1;
    int year1;
    int month2;
    int day2;
    int year2;

    //Gets input from user
    cout << "Please enter the first month in the following format: MM " << endl;
    cin >> month1;
    cout << "Please enter the first day as follows: DD " << endl;
    cin >> day1;
    cout << "Please enter the first year as follows: YYYY " << endl;
    cin >> year1;
    cout << endl;
    cout << "Please enter the second month as follows: MM " << endl;
    cin >> month2;
    cout << "Please enter the second day as follows: DD " << endl;
    cin >> day2;
    cout << "Please enter the second year as follows: YYYY " << endl;
    cin >> year2;

    //Function Call
    displayjulianDates(month1, day1, year1, month2, day2, year2);
    cin.get();
} //Ends main

//Function that calculates and displays julian dates and difference
void displayjulianDates(int month1, int day1, int year1, int month2, int day2, int year2)
{
    long intRes1;
    long intRes2;
    long intRes3;
    long intRes4;
    long intRes5;
    long intRes6;
    long jdn1;
    long jdn2;
    long julianDifference;

    //Calculates first julian date
    intRes1 = ((2 - year1 / 100) + (year1 / 400));
    intRes2 = int(365.25 * year1);
    intRes3 = int(30.6001 * (month1 + 1));
    jdn1 = (intRes1 + intRes2 + intRes3 + day1 + 1720994.5);

    //Displays first julian date
    cout << "The first Julian Date is " << jdn1 << endl;

    //Calculates second julian date
    intRes4 = ((2 - year2 / 100) + (year2 / 400));
    intRes5 = int(365.25 * year2);
    intRes6 = int(30.6001 * (month2 + 1));
    jdn2 = (intRes4 + intRes5 + intRes6 + day2 + 1720994.5);

    //Displays second julian date
    cout << "The second Julian Date is " << jdn2 << endl;

    //Calculates julian date difference
    julianDifference = jdn2 - jdn1;

    //Displays julian difference
    cout << "The difference between the julian dates is " << julianDifference << endl;
    cin.get();
} //Ends Function
```

It works fine. I have a little problem related to the date format code, i.e.: I want to allow the user to enter 8 characters maximum, and the date format should be like this: dd/mm/yyyy, where the 3rd and 6th characters are "/". Any help?
https://www.daniweb.com/programming/software-development/threads/191606/date-difference-program
Agenda See also: IRC log <noah> As noted in my regrets, I have some conflicts with today's call, but I'll try to keep an occasional eye out for IRC, and dial in if something comes up for which I am needed. Thank you. <scribe> Agenda: SW: Other topics? DC: Assume that package URIs stuff would come up some time SW: Issue 61? We will add this SW: Minutes from 16 October: ... and 6 Nov. f2f: <DanC> +1 approve RESOLUTION: Approved as circulated <DanC> (noah, you're ok to scribe 20 Nov?) Meet next on 20 November, scribe duty to Noah, whom failing DanC Meeting of 27 November is cancelled NW: I've reviewed this, and as far as I understand it, I think they are using proxies in the way they are meant to be used SW: What about relation to Generic Resources? NW: Didn't see that explicitly, but any transformation gives a new representation DC: Are there multiple URIs? NW: I think not DC: TV, any thoughts on that? TVR: Not at the moment SW: Anything we need to push on? ... Last Call has actually expired NW: I see no need to do anything other than say "Fine" DC: Can you tell me a typical use case story? NW: There are proxies set up so that e.g. a rich web site goes through the proxy and is transformed to something viewable on your mobile -- I think sidekick exploits this DC: Any good recommendations NW: Well, yes, don't change request headers was one bit DC: Ah, perhaps the HTTP working party should look at this NW: Good idea SW: I will send a courtesy message saying we have nothing to say. . . DC: HST has the ball HST: I foresee progress in the new year ... So we could close the issue w/o completing the action (yet) <DanC> ACTION-23 due 2008-02-01 <trackbot> ACTION-23 track progress of #int bug 1974 in the XML Schema namespace document in the XML Schema WG due date now 2008-02-01 DC: The two are now linked, via the Issue being in state Pending Review SW: Some items already suggested: Self-describing Web, Uniform Access to Metadata, Versioning ... 
Wrt UAM, JR has an action to produce some words, but not due until next year JR: I will try to get something before us -- at least some slides <noah> I remain somewhat optimistic of having a new Self-Describing Web draft. Bad news: unlikely to be as far ahead of F2F as I would like; Good news: I would expect changes to be well-isolated and easy to review, given thorough discussion we had in Bristol. SW: DO, what about Versioning? DO: I hope to get to it next week or the week after JR: I believe I'm waiting for some input from DO DO: I believe I'm waiting for JR SW: Sounds like you should talk <DanC> action-181? <trackbot> ACTION-181 -- Jonathan Rees to update versioning formalism to align with terminology in versioning compatibility strategies -- due 2008-10-16 -- OPEN <trackbot> <DanC> action-182? <trackbot> ACTION-182 -- David Orchard to provide example for jar to work into the formalism -- due 2008-10-23 -- OPEN <trackbot> <DanC> action-183? <trackbot> ACTION-183 -- David Orchard to incorporate formalism into versioning compatibility strategies -- due 2008-10-23 -- OPEN <trackbot> <DanC> (indeed, the tracker state looks like... or is consistent with... deadlock) SW: JR, DO will talk offline SW: I have suggested giving each member a slot to motivate a topic, one they care about, either new, ongoing or forgotten ... HST, URNsAndRegistries? HST: Yes, I will have new prose in time for f2f <DanC> on tagSoup: DC: Mike Smith is working on a language spec. document for HTML 5 ... ref. TagSoupIntegration ... New W3C travel policy would mean I might get this trip and no others until TPAC SW: So you are asking if we should meet? DC: Yes HST: I had assumed we would meet, planning to buy tickets soon SW: NW and TVR will not be there, DO uncertain. NW and DO will join by 'phone HST: I believe we will have enough people to do useful work SW: We will meet, HST can buy tickets ... 
I would request more responses when I ask for agenda input <noah> I will be at the December meeting (which if course is convenient for me). SW: Let's look at the list of open actions, by issue: ... Is ACTION-24 a worthwhile thing for Tim to pursue? DC: Well, TBL does say when asked that we should keep this open ... I proposed to close on the basis of the XQuery spec. ... and there's the HTML5 spec's new input on this SW: So the topic title asks a question DC: That's overtaken for sure: W3C specs do support IRIs ... What's at the heart of WebArch, IRIs or URIs -- answer 'yes' <DanC> ACTION-188? <trackbot> ACTION-188 -- Dan Connolly to investigate the URL/IRI/Larry Masinter possible resolution of the URL/HTML5 issue. -- due 2008-10-31 -- OPEN <trackbot> SW: Anyone want to work on this? DC: Even if not, OK to have the issue there as a marker SW: ISSUE-30 / ACTION-176 -- NM, DO, any progress? <DanC> action-176? <trackbot> ACTION-176 -- Noah Mendelsohn to work with Dave to draft comments on exi w.r.t. evaluation and efficiency -- due 2008-09-30 -- OPEN <trackbot> DO: I think NM has made some progress, I request to be released from this, too much load elsewhere <DanC> (noah, are you OK to keep ACTION-176 open without Dave?) SW: ISSUE-34 / ACTION-113 HST: Yes, it will happen someday SW: ISSUE-35 / ACTION-130 XHTML/GRDDL DC: Namespace doc't has been updated SW: If you think it can be closed, please do so, leave a pointer to where the action is addressed DC: OK ... What about the issue? <DanC> action-130: rev 2008/10/14 22:08:29 <trackbot> ACTION-130 Consult with Dan and Ralph about the gap between the XHTML namespace and the GRDDL transformation for RDFa notes added SW: XHTML + RDFa has done it, right? 
<DanC> close action-130 <trackbot> ACTION-130 Consult with Dan and Ralph about the gap between the XHTML namespace and the GRDDL transformation for RDFa closed HST: As long as the issue is XHTML, we're good TVR: RDFa works fine with HTML HST: I dispute the 'fine' SW: and I wonder about the 'works' [TagSoup digression] SW: Propose to close ISSUE-35 TVR: By pointing to RDFa DC: And GRDDL <DanC> (indeed, -1 on the empty proposal to close; we need a technical decision.) RESOLUTION: Close ISSUE-35 on the basis the RDFa and GRDDL provide the desired solution HST: We need an action to explain the resolution to the public DC: I will take it trackbot, status? <DanC> ACTION: Dan announce decision on rdf-in-html-35 and invite feedback [recorded in] <trackbot> Created ACTION-191 - Announce decision on rdf-in-html-35 and invite feedback [on Dan Connolly - due 2008-11-20]. <scribe> ACTION: Dan to close ISSUE-35 with a public explanation [recorded in] <trackbot> Created ACTION-192 - Close ISSUE-35 with a public explanation [on Dan Connolly - due 2008-11-20]. <noah> Am I right that we instructed me to include in next draft of Self-describing Web a story on how you could follow your nose from HTML media types to RDFa? SW: ISSUE-41 / outstanding actions ... Assuming there will be progress by the F2F JR: Yes <DanC> close action-192 <trackbot> ACTION-192 Close ISSUE-35 with a public explanation closed SW: ISSUE-50 / ACTION-33 <DanC> action-192: dup of 191 <trackbot> ACTION-192 Close ISSUE-35 with a public explanation notes added trackbot, close ACTION-189 <trackbot> ACTION-189 S. Send public comment to www-tag about the XRI proposal and the establishment of base URI. closed HST: Others are indeed open SW: ISSUE-52 / ACTION-150 ... Finding published ... and announced <DanC> action-150: done.
see <trackbot> ACTION-150 Finish refs etc on passwords in the clear finding [inc post Sept 2008 F2F updates] notes added <DanC> issue-52: finding: <trackbot> ISSUE-52 Sending passwords in the clear notes added trackbot, close ACTION-150 <trackbot> ACTION-150 Finish refs etc on passwords in the clear finding [inc post Sept 2008 F2F updates] closed DC: Did we hear back from anyone? Is there anyone we should be waiting on? SW: We could ask Ed Rice? DO: I will do so DC: Do we have any recent input from Security Context? SW: Not from the group, no DO: We did our best to address several individual comments SW: Any response to the publication announcement? DO: Not that I'm aware of <DanC> issue-52? <trackbot> ISSUE-52 -- Sending passwords in the clear -- RAISED <trackbot> <DanC> close issue-52 SW: Close the issue now? Wait for Ed? <DanC> issue-52? <trackbot> ISSUE-52 -- Sending passwords in the clear -- CLOSED <trackbot> TVR: Not necessary, close it and notify him as a courtesy SW: ISSUE-54 / three actions wrt TagSoup DC: Recent progress on validator, some of it public. . . <DanC> action-7: <trackbot> ACTION-7 draft a position regarding extensibility of HTML and the role of the validator for consideration by the TAG notes added DC: Blog posting by Olivier Théreaux, which has attracted favourable comment SW: Waiting for Tim on the other two <DanC> ACTION-188 due 20 Nov 2008 <trackbot> ACTION-188 Investigate the URL/IRI/Larry Masinter possible resolution of the URL/HTML5 issue. due date now 20 Nov 2008 HST: Wrt ACTION-145, I still hope TBL will produce a publication from the positive parts of his paper and his TPAC slides DC: I'm about to get going on ACTION-188 <DanC> (I'd like us to keep due dates in the future; if the chair expects tbl to continue work on 116, let's give it a due date in the future... e.g. the ftf agenda timeframe...) 
SW: ISSUE-57 / three actions <DanC> action-116 due 1 Dec 2008 <trackbot> ACTION-116 Align the tabulator internal vocabulary with the vocabulary in the rules, getting changes to either as needed. due date now 1 Dec 2008 JR: ACTION-184 is about to be done SW: We're expecting something on ACTION-178 for the F2F ... ISSUE-58 / ACTION-163 NW: I still hope to work with Ted Guild on this, it is important SW: ISSUE-60 / three actions NW: I have sent TVR a review SW: I will do ACTION-143 at some point <DanC> (possible ftf fodder: the iphone urls thread: ) SW: ACTION-106 NW: No progress, but I will try to get that ready for the f2f SW: I have done ACTION-190 trackbot, close ACTION-190 <trackbot> ACTION-190 Make the above resolution visible on www-tag closed <Stuart> close action-190 <trackbot> ACTION-190 Make the above resolution visible on www-tag closed ACTION-106: NW sent comments to TVR privately <trackbot> ACTION-106 Make a pass over the WebArch 2.0 doc't which adds a paragraph, and connects up to issues list notes added trackbot, close ACTION-106 <trackbot> ACTION-106 Make a pass over the WebArch 2.0 doc't which adds a paragraph, and connects up to issues list closed TVR: I'm not sure about how to take this forward ... I don't plan to pick it up, except to possibly add new uses ... I don't see how to get it to the right audience. . . SW, DC: Is there a blog article in there? <DanC> found it... TVR: Perhaps. . . DC: Maybe I'll try to adapt it TVR: I will help <scribe> ACTION: Dan to try to draft a blog posting adapted from, with help from TVR [recorded in] <trackbot> Created ACTION-193 - Try to draft a blog posting adapted from, with help from TVR [on Dan Connolly - due 2008-11-20]. <Stuart> issue-61? <trackbot> ISSUE-61 -- URI Based Access to Packaged Items -- OPEN <trackbot> SW: We will discuss that next week <DanC> DC: HTML5 and URLs, reread Doug Crockford's safe JavaScript ... He's added a mode to JSLINT which verifies this ... 
He's very critical of the work on cross-site access controls
... He has an alternative, namely JSON-request
... What could be the improvement, by using JSON instead of XML?
... We could study that space, perhaps
TVR: I tried to find the answer to that question, but didn't see it
SW: DC, could you assemble a reading list?
... If we scheduled this at the right time, could you join us by phone, TV?
TVR: No, sorry, I will be travelling on the 11, and preparing on the day before -- Tuesday might be possible.
SW: Sounds like a good idea in any case, DC, reading list please
<DanC> iphone urls thread:
DC: There there's the iphone: URL thread:
... I can't get this started yet
... MNot says [tongue in cheek?] "We need an Arch Group for this sort of thing"
... I like tel: . . . blog entry: ???
SW: We can talk about this on a call -- let's find a slot on one of the next two calls
<DanC> (blog entry that celebrates tel: support )
TVR: We should maybe write down URI schemes we know about
<DanC> (I try to garden somewhat actively)
TVR: [lists some]
... It does help to look at these
<HST> HST does review the registered and unregistered schemes lists with some regularity
<DanC> DC: p2p ones are not lookup + hierarchy
... I am bored by proposals which suggest replacing DNS
... but these don't do that
TVR: There are 4 parts to a URI: protocol, host, path and port
... But consider ??? -- doesn't change the host/DNS part, but changes the handler
... Or ado:, as a protocol identifier for local work [missed some]
<DanC> (ado isn't among the list in . hmm.)
<Norm> Yes, the fact that protocol handlers are easy to register is the interesting angle to me
<Stuart> kind of browser architecture stuff... maybe html5 should say something about plugin handlers...
DC: SchemeProtocols is a good area to wander around periodically, not necessarily to try to draw hard conclusions
SW: ADJOURNED
http://www.w3.org/2008/11/13-tagmem-minutes
Java is a booming technology across the world, and it is a simple, robust language to learn and code in. Code reusability is one of the prominent features of Java that is not available in the C language; inheritance is the key concept Java introduces for code reuse. Java is used everywhere, as it is open-source software and provides a platform on which many users can perform specific tasks effectively. According to Oracle, 3 billion devices run Java. Some of the applications where Java is used are access control systems, automobiles, IoT gateways, optical sensors, and many more.

If you are aspiring to start your career in the field of Java as a developer, then you are in the right place. These days, cracking a Java interview has become harder as interview processes have grown more complex. We have gathered a bunch of frequently asked Java interview questions here. These questions will help you stand apart from the crowd and crack the interview easily. So, let's get started with the interview questions.

Java is a popular object-oriented programming language; a Java program is organized as a collection of interacting objects. By using Java, we can develop lots of applications such as games, mobile apps, and websites. In 1991, a small group of engineers called the 'Green Team', led by James Gosling, began the work that became the programming language "Java". The language was designed to run on any device, and it went on to transform the computing world. Today, Java is not only pervasive on the internet, but it is also an invisible force behind many operations, devices, and applications.

The differences between C and Java are as follows: unlike C, Java code is compiled to bytecode that runs on a virtual machine, the JVM (Java Virtual Machine), rather than directly on the hardware. The JVM is a concept introduced with Java.
Java also supports code reusability through inheritance, which is not possible in C.

The following are the notable features in Java: platform independence, object orientation, simplicity, robustness, security, multithreading, and portability.

A class is defined as a template or blueprint that is used to create objects and to define their fields and methods. An instance of a class is called an object. Every object in Java has both state and behavior: the state of the object is stored in fields, and the behavior of the object is defined by methods.

    class Mindmajix {
        public static void main(String args[]) {
            System.out.println("Hello World");
        }
    }

JVM is the abbreviation for Java Virtual Machine. It is a virtual machine that provides a runtime environment in which Java bytecode is executed. The JVM is a part of the JRE (Java Runtime Environment) and converts the bytecode into machine-level instructions. It is also responsible for allocating memory.

There are in total five memory areas allocated by the JVM, and they are: the Class (Method) Area, the Heap, the Stack, the Program Counter (PC) Register, and the Native Method Stack.

ClassLoader: the class loader is a subsystem of the JVM that is used to load class files. Whenever we run a Java program, the classes are first loaded by the classloader. There are mainly three built-in classloaders in the JVM: the Bootstrap ClassLoader, the Extension ClassLoader, and the System/Application ClassLoader.

The Java Development Kit (JDK) is one of the three prominent technology packages used in Java programming. The JDK contains the tools needed to develop Java programs, including the compiler and class libraries, together with the JRE needed to run them; it implements the Java platform specification.

The Java Runtime Environment (JRE) is a collection of software tools designed for running Java applications. It is a part of the JDK, but it can also be downloaded separately. The JRE supplies the class libraries and other resources that the JVM needs to execute a Java program.
In short: the JDK contains the JRE, and the JRE contains the JVM.

The Just-In-Time (JIT) compiler is a component of the JRE that compiles the bytecode of frequently invoked methods into native machine code. The compiled code of such a method is then executed directly by the JVM without interpreting it.

Variables in Java can be defined as the basic storage units of a program. A variable is a storage location that holds a value during program execution, and every variable is declared with a data type. For example:

    int a = 10;

There are mainly three different types of variables available in Java, and they are:

Static variables: a variable declared with the static keyword is called a static variable. A static variable cannot be a local variable, and memory is allocated for it only once.

Local variables: a variable declared inside the body of a method within a class is called a local variable. A local variable cannot be declared using the static keyword.

Instance variables: a variable declared inside the class but outside the body of any method is called an instance variable. It cannot be declared static, and its value is instance-specific and is not shared among instances.

Example:

    class A {
        int num = 30;                    // instance variable
        static String name = "pranaya";  // static variable
        void method() {
            int n = 90;                  // local variable
        }
    } // end of class

Typecasting in Java is done explicitly by the programmer to convert one data type into another data type.

Widening (automatic) - conversion of a smaller data type to a larger data type:

    byte -> short -> int -> long -> float -> double (char widens to int)

Narrowing (manual) - conversion of a larger data type to a smaller one:

    double -> float -> long -> int -> short -> byte (and int -> char)

Type conversion can be defined as converting one data type to another data type automatically by the compiler.
There are two types of type conversions, and they are: implicit (widening, done automatically by the compiler) and explicit (narrowing, done manually with a cast).

Data types in Java specify the sizes and kinds of values that can be stored in variables. There are mainly two categories of data types:

Primitive data types in Java are the basic building blocks of data manipulation. They include int, char, byte, float, double, long, short, and boolean.

Non-primitive data types are everything other than the primitives; they include String, arrays, and classes.

This is because Java uses the Unicode system. The notation '\u0000' is the lowest value in the Unicode range, and it is the default value of the char data type.

The new keyword is used to create an object. For example:

    Mindmajix m1 = new Mindmajix(); // creates an object of Mindmajix

Unicode is a universal international character-encoding standard capable of representing most of the world's written languages. The Unicode system was introduced to overcome the problems present in earlier character encodings. Java uses the Unicode system because Unicode's default character size is 2 bytes, which matches the 2 bytes Java uses for a char.

Java is platform-independent because the Java compiler converts source code into bytecode, an intermediate language between source code and machine code. This bytecode is not platform-dependent, so it can be compiled once and executed on any platform that has a JVM.

No, there are no such keywords in Java.

Both the compilation and execution of such programs succeed, because in Java the order of specifiers doesn't matter.

No, local variables are not initialized with any default values.

Eight types of operators are available in Java: unary, arithmetic, shift, relational, bitwise, logical, ternary, and assignment.

Java provides a set of rules that specify the order in which operators are evaluated.
If an expression contains many operators, operator precedence comes into play: the operators in the expression are evaluated in priority order. For example, multiplication has higher precedence than addition and subtraction.

A unary operator has only one operand and is mainly used for operations such as negating an expression, incrementing or decrementing a value by one, and inverting boolean values. An example of the unary operators is given below:

    class UnaryExample {
        public static void main(String args[]) {
            int x = 15;
            System.out.println(x++); // 15 (x becomes 16)
            System.out.println(++x); // 17
            System.out.println(x--); // 17 (x becomes 16)
            System.out.println(--x); // 15
        }
    }

++a is a prefix increment and a++ is a postfix increment. The prefix increment returns the value after incrementing it, whereas the postfix increment returns the value before incrementing it.

Left shift: a bitwise operator that moves bits towards the left and fills the rightmost positions with zeros. Example:

    public class LeftShiftOperator {
        public static void main(String[] args) {
            int a = 2;
            int i = a << 1; // 4
            System.out.println("the value of a before left shift is: " + a);
            System.out.println("the value of a after applying left shift is: " + i);
        }
    }

Output: 4

Right shift: also a bitwise operator, in which bits are moved towards the right and zeros are placed at the leftmost positions (for non-negative values). Example:

    public class RightShiftOperator {
        public static void main(String[] args) {
            int a = 2;
            int i = a >> 1;
            System.out.println("the value of a before right shift is: " + a);
            System.out.println("the value of a after applying right shift is: " + i);
        }
    }

Output: 1

Bitwise operators work on individual bits, performing operations bit by bit.
The bitwise operators in Java are & (AND), | (OR), ^ (XOR), ~ (complement), and the shift operators <<, >>, and >>>.

The ternary operator in Java is a shorthand for an if-else statement. Its syntax is:

    variable = (expression) ? value_if_true : value_if_false;

Yes - as long as the class is not public, Java allows us to save the source file under any name ending in .java; we then compile it with javac filename.java and run it with java classname. For example:

    // save with any name ending in .java
    class A {
        public static void main(String args[]) {
            System.out.println("Hello Mindmajix");
        }
    }

Java keywords are also called "reserved words": words with a predefined meaning to the compiler that cannot be used as variable or object names. There are many keywords in Java; some of them are class, static, void, final, new, this, and super.

There are four access specifiers in Java: private, default (package-private), protected, and public.

The advantages of packages in Java are that they avoid naming conflicts, make access control easier, and group related classes and interfaces so they are easier to locate and reuse.

The program is as follows:

    class Java {
        public static void main(String args[]) {
            System.out.println(10 * 50 + "Mindmajix");
            System.out.println("Mindmajix" + 10 * 50);
        }
    }

Output:
500Mindmajix
Mindmajix500

In Java, control statements are divided into three types: selection, iteration, and jump statements.

A selection statement is mainly used to transfer program control to a specific flow based on whether a condition is true or false. Selection statements are also called conditional statements. Selection/conditional statements in Java include if, if-else, and switch.

Iterative statements in Java are also called looping statements; they are sets of statements that repeat until a termination condition is met. Looping/iterative statements in Java include for, while, and do-while.

In Java, jump statements are mainly used to transfer control to another part of the program depending on a condition; they jump directly to other statements. The jump statements are break and continue.

For-each is another array-traversal technique in Java, similar to the for loop and while loop.
It is most commonly used to iterate over a collection or an array such as an ArrayList. An example of the for-each loop:

    class ForEachPro {
        public static void main(String args[]) {
            // declaring an array
            int arr[] = {12, 13, 14, 44};
            // traversing the array with a for-each loop
            for (int i : arr) {
                System.out.println(i);
            }
        }
    }

Output:
12
13
14
44

In a while loop the condition is tested first, and the body executes only while it is true. In a do-while loop the body is executed first and the condition is tested at the end of each iteration, so the body always runs at least once. The syntax of the while loop is:

    while (condition) {
        // code to be executed
    }

Java comments are statements that are not executed by the compiler or interpreter. They are used to provide information about classes, variables, methods, and other statements, and can also be used to temporarily disable code. There are mainly three types of comments in Java: single-line (//), multi-line (/* */), and documentation (/** */) comments.

OOP is an abbreviation for Object-Oriented Programming, which deals entirely with objects. The main aim of OOP is to model real-world entities using concepts such as objects, classes, inheritance, polymorphism, and so on. Simula is considered the first object-oriented programming language. Popular object-oriented languages today include Java, Python, PHP, and C++.

The OOP concepts included in Java are inheritance, polymorphism, abstraction, and encapsulation.

In simple words, abstraction can be defined as hiding unnecessary details and exposing only what is needed. In technical terms, abstraction means hiding the internal implementation and showing only the functionality.

Binding data and the code that operates on it together into a single unit is called encapsulation; a capsule is the classic analogy.

When an object of a child class can acquire the properties of a parent class, it is called inheritance.
Inheritance is mainly used to achieve runtime polymorphism, and it also provides code reusability.

Polymorphism in Java provides a way to perform one task in different possible ways. To achieve polymorphism in Java we use method overloading and method overriding. For example, "draw a shape" is one task, and drawing a triangle, a rectangle, or a circle are different ways of performing it.

The following are the advantages of OOP:

An object-based programming language supports the features of OOP except inheritance; VBScript and JavaScript are examples of object-based languages. An object-oriented programming language supports all the features of OOP; Java and Python are examples.

All object references in Java are initialized to null.

The object-oriented paradigm is based entirely on objects, with methods defined in the classes to which they belong. It is mainly used to gain the advantages of reusability and modularity. Objects are instances of classes that interact with one another to build programs and applications.

The features of the object-oriented paradigm are as follows:

A Java naming convention is a rule to follow when naming identifiers such as packages, variables, methods, and constants. These conventions are supported by many Java communities, such as Netscape and Sun Microsystems, and most Java code is written according to them.

The rules we need to follow to declare a class are as follows:

An example of a class declaration:

    public class Thread {
        // code snippet
    }

A constructor is a special kind of method, with a block of code, used to initialize the state of an object. A constructor is invoked automatically whenever an instance of the class is created with the new keyword.
The rules we need to follow while creating a constructor are:

There are two kinds of constructors in Java:

Default constructor: also called a no-argument constructor, it is mainly used to initialize instance variables with default values, and it can also perform setup tasks on object creation. A default constructor is implicitly provided by the compiler if the class declares no constructor.

Parameterized constructor: a constructor that accepts arguments and uses them to initialize the instance variables.

An example of the default constructor:

    // Java program to create and call a default constructor
    class Mindmajix1 {
        // creating a default constructor
        Mindmajix1() {
            System.out.println("Welcome to Mindmajix");
        }
        // main method
        public static void main(String args[]) {
            // calling the default constructor
            Mindmajix1 m = new Mindmajix1();
        }
    }

Output:
Welcome to Mindmajix

An example of the parameterized constructor:

    // Java program to demonstrate the use of a parameterized constructor
    class Training {
        int id;
        String name;
        // creating a parameterized constructor
        Training(int i, String n) {
            id = i;
            name = n;
        }
        // method to display the values
        void display() {
            System.out.println(id + " " + name);
        }
        public static void main(String args[]) {
            // creating objects and passing values
            Training t1 = new Training(111, "DevOps");
            Training t2 = new Training(222, "Oracle");
            // calling the method to display the values of the objects
            t1.display();
            t2.display();
        }
    }

Output:
111 DevOps
222 Oracle

Yes, a constructor implicitly returns the current instance of the class.

No, constructors cannot be inherited.
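To illustrate the point that constructors are not inherited, here is a minimal sketch (the Parent and Child class names are made up for illustration): a subclass must declare its own constructor, but it can chain to the parent's constructor with super().

```java
// Constructors are not inherited, but a subclass constructor can chain
// to the parent constructor with super(). Class names are illustrative.
class Parent {
    int id;
    Parent(int id) {
        this.id = id;
    }
}

class Child extends Parent {
    // Child does NOT inherit Parent(int); it must declare its own
    // constructor and chain to the parent explicitly.
    Child(int id) {
        super(id); // invokes Parent(int)
    }
}

class ConstructorDemo {
    public static void main(String[] args) {
        Child c = new Child(7);
        System.out.println(c.id); // prints 7
    }
}
```

If Child declared no constructor at all, the compiler's generated no-argument constructor would fail to compile here, because Parent has no no-argument constructor to chain to.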
Yes, it is possible to overload a constructor, either by changing the number of arguments of each constructor or by changing the parameter data types.

No, we cannot declare a constructor as final; if we do, the compiler reports a "modifier final not allowed" error.

Yes, there is a class called Constructor in Java, found in the java.lang.reflect package. Its purpose is to provide reflective information about a class's constructors.

There is no copy constructor in Java as such, but we can copy the values from one object to another, just as a copy constructor does in C++. There are several ways to copy the values of one object into another:

In Java, a method is defined as a block of code identified by a name that can be invoked at any point in a program using that name. By convention, a method's name should not be the same as the class name, since that form is reserved for constructors.

The differences between a constructor and a method are as follows:

In Java, a method signature consists of the method name together with the type and order of its parameters. Exceptions are not considered part of the method signature.

    return-type methodName(parameter list) {
        // code
    }

The keyword static in Java is mainly used for memory management; we can declare blocks, variables, methods, and nested classes as static.
In Java, we can declare a variable as static; if we do, the following applies:

Example of a static variable:

    // Program with a static variable
    class Mindmajix1 {
        int id;
        String name;
        static String office = "Appmajix";
        Mindmajix1(int i, String n) {
            id = i;
            name = n;
        }
        void display() {
            System.out.println(id + " " + name + " " + office);
        }
        public static void main(String args[]) {
            Mindmajix1 m1 = new Mindmajix1(346, "Pranaya");
            Mindmajix1 m2 = new Mindmajix1(222, "Lilly");
            m1.display();
            m2.display();
        }
    }

Output:
346 Pranaya Appmajix
222 Lilly Appmajix

If we declare a method as static, the following applies:

The restrictions we face when declaring a method as static are:

The main reason is that no object is required to call a static method; if main() were non-static, the JVM would first need to create an object and then call the main() method. Declaring the main method static also saves memory.

No, we can't override a static method in Java.

A static block in Java is mainly used to initialize static data members. Its specialty is that it is executed at class-loading time, before the main method. An example of a static block:

    class Mindmajix {
        static {
            System.out.println("static block");
        }
        public static void main(String args[]) {
            System.out.println("Hello World");
        }
    }

Output:
static block
Hello World

Yes, we can execute a program without a main method using a static block - but only up to JDK 1.6. From JDK 1.7 onwards it is not possible: the program compiles, but at launch it fails with an error because no main method is found.

The static context applies to classes, variables, and methods, not to objects.
Since constructors are invoked only when an object is created, and the static context belongs to the class rather than to any object, it is not possible to declare a constructor as static in Java.

No; if we could declare an abstract method as static, it would become part of the class and could be called directly, yet an abstract method has no body to execute. Calling an undefined method makes no sense, so declaring an abstract method static is not allowed.

Yes: since no object is needed to access static members, we can access static methods and variables declared inside an abstract class by using the abstract class name. Consider the following example:

    abstract class Check {
        static int i = 113;
        static void checkMethod() {
            System.out.println("hi, how are you");
        }
    }

    public class CheckClass extends Check {
        public static void main(String args[]) {
            Check.checkMethod();
            System.out.println("i = " + Check.i);
        }
    }

Output:
hi, how are you
i = 113

The main use of the this keyword is to refer to the current object. The usages of this in Java are as follows:

Yes, the this keyword can refer to the current class's instance variables, and it can also invoke the current class's constructor.

The Object class is the superclass of all classes in Java.

A singleton class is a class of which only one instance can exist; all its methods and variables belong to that single instance. Singletons are used where we need to limit a class to one object.

In Java, we use inheritance mainly for two purposes:

The syntax for inheritance in Java is:

    class Subclass extends Superclass {
        // code
    }

Using the Math.random() method we can generate random numbers in Java in the range 0.0 (inclusive) to 1.0 (exclusive). Random numbers can also be generated using the Random class in the java.util package.

The main() method in Java doesn't return any data because it is declared with a void return type.
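A minimal sketch of the singleton class described above (the class name Singleton and the getInstance method are conventional names, not from the article):

```java
// A classic lazy singleton: private constructor, one shared instance.
class Singleton {
    private static Singleton instance;

    // private constructor prevents instantiation from outside the class
    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {      // created on first use only
            instance = new Singleton();
        }
        return instance;             // every caller gets the same object
    }
}
```

Note that this lazy version is not thread-safe; in multithreaded code the instance is usually created eagerly in a static field, or getInstance() is synchronized.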
In Java, a package is a collection of related classes and interfaces bundled together. Grouping code into packages helps developers organize it for reuse. Packages are used by importing them into other classes.

Java doesn't support multiple inheritance of classes, in order to reduce complexity and keep the language simple. An example illustrating why:

    class X {
        void msg() { System.out.println("Hello"); }
    }

    class Y {
        void msg() { System.out.println("Welcome"); }
    }

    class Z extends X, Y { // suppose this were allowed
        public static void main(String args[]) {
            Z obj = new Z();
            obj.msg(); // which msg() method would be invoked?
        }
    }

Output:
Compile-time error

So the above example shows that Java doesn't support multiple inheritance of classes.

In Java, the main() method must always be public static for an application to run correctly. If the main method is declared private, the program still compiles, but the JVM cannot invoke it and reports an error at runtime.

Yes, a Java class can have multiple constructors with different parameter lists.

If a class in Java has multiple methods with the same name but different parameters, this is called method overloading. Its main advantage is that it increases the readability of a program.
There are two different ways to overload a method:

By changing the number of arguments:

    class Addition {
        static int add(int x, int y) {
            return x + y;
        }
        static int add(int x, int y, int z) {
            return x + y + z;
        }
    }

    class TestOverloading1 {
        public static void main(String[] args) {
            System.out.println(Addition.add(10, 20));
            System.out.println(Addition.add(10, 20, 30));
        }
    }

Output:
30
60

By changing the parameter data types:

    class Addition {
        static int add(int x, int y) {
            return x + y;
        }
        static double add(double x, double y) {
            return x + y;
        }
    }

    class TestOverloading2 {
        public static void main(String[] args) {
            System.out.println(Addition.add(22, 22));
            System.out.println(Addition.add(12.5, 12.5));
        }
    }

Output:
44
25.0

Ambiguity is the reason why a method cannot be overloaded by changing only its return type.

Yes, we can overload the main() method in Java, but the JVM only calls the main() method that receives a String array as its argument.

If a subclass has the same method as declared in its superclass, this is known as method overriding. Method overriding is used to achieve runtime polymorphism and to provide a specific implementation of a method already defined by the superclass.

The rules are as follows:

    // Java program to illustrate method overriding
    // creating a parent class
    class Shape {
        // defining a method
        void run() {
            System.out.println("Shape is ready");
        }
    }

    // creating a child class
    class Rectangle extends Shape {
        // defining the same method as in the parent class
        void run() {
            System.out.println("Rectangle is drawn");
        }
        public static void main(String args[]) {
            Rectangle obj = new Rectangle(); // creating an object
            obj.run(); // calling the method
        }
    }

Output:
Rectangle is drawn

No, a static method cannot be overridden, because a static method is bound to the class while an instance method is bound to an object: the static method belongs to the class area, whereas the instance method belongs to the heap area.

No, because the main() method is static.
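The claim above that static methods cannot be overridden can be demonstrated with a short sketch (the Base and Derived class names are illustrative): a static method redeclared in a subclass merely hides the parent's version, so calls through a reference are resolved by the reference's compile-time type, while overridden instance methods are dispatched by the object's runtime type.

```java
class Base {
    static String who() { return "Base"; }  // will be hidden, not overridden
    String name() { return "Base"; }        // will be overridden
}

class Derived extends Base {
    static String who() { return "Derived"; } // hides Base.who()
    @Override
    String name() { return "Derived"; }       // overrides Base.name()
}

class HidingDemo {
    public static void main(String[] args) {
        Base b = new Derived();
        // static call resolved by the reference type (Base), not the object:
        System.out.println(b.who());   // prints "Base"
        // instance call dispatched by the runtime type (Derived):
        System.out.println(b.name());  // prints "Derived"
    }
}
```

Calling a static method through an instance reference, as in b.who(), is legal but discouraged for exactly this reason: the result depends on the declared type, not the actual object.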
Aggregation represents a weak (has-a) relationship, whereas composition represents a strong (part-of) relationship.

A pointer is a variable that refers to a memory address. Java doesn't support pointers because they are complex to understand and insecure.

The super keyword in Java is used to refer to the immediate parent class object; it is a reference variable. The uses of the super keyword in Java are as follows:

Yes, in Java all non-static methods are virtual by default.

Pranaya is working as a Content Writer at Mindmajix. She is a technology enthusiast and loves to write about various technologies, including Java, MongoDB, Automation Anywhere, SQL, Artificial Intelligence, and Big Data. You can connect with her via LinkedIn and Twitter.
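The uses of super mentioned in the article can be sketched as follows (the Vehicle and Car classes are invented for illustration): super() invokes the parent constructor, super.field reads the parent's field, and super.method() calls the parent's version of an overridden method.

```java
class Vehicle {
    String type = "vehicle";
    Vehicle() { System.out.println("Vehicle constructor"); }
    String describe() { return "a generic vehicle"; }
}

class Car extends Vehicle {
    String type = "car";
    Car() {
        super();  // 1. invoke the parent constructor (inserted implicitly if omitted)
        System.out.println("Car constructor");
    }
    String describe() {
        // 2. super.type reads the parent's field (the child's field shadows it)
        // 3. super.describe() calls the parent's version of the overridden method
        return super.describe() + " (parent type: " + super.type + ")";
    }
}

class SuperDemo {
    public static void main(String[] args) {
        System.out.println(new Car().describe());
    }
}
```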
https://mindmajix.com/java-interview-questions
Details
- Type: Bug
- Status: Closed
- Priority: P2: Important
- Resolution: Done
- Affects Version/s: 6.2.2
- Fix Version/s: 6.2.5, 6.3.1, 6.4.0 Beta1
- Component/s: Quick: Core Declarative QML
- Labels:
- Platform/s:
- Commits: 8068bde891 (qt/qtdeclarative/dev) 8068bde891 (qt/tqtc-qtdeclarative/dev) 71660c9700 (qt/qtdeclarative/6.3) 71660c9700 (qt/tqtc-qtdeclarative/6.3) 71660c9700 (qt/tqtc-qtdeclarative/6.3.1) e9f1881c0d (qt/tqtc-qtdeclarative/6.2)

Description

We have a touch-screen embedded device where the screen is installed upside down at manufacturing. To work around this, we do a rotation transform in our main QML file. The code below worked fine in Qt 5, but in Qt 6.2.2 it seems that the deceleration of a Flickable occurs in the untransformed rotation, opposite to how the Flickable was dragged. I have seen this behavior on both macOS and our Yocto distro. An example is shown below. If you slowly scroll the window it behaves as expected, but if you drag quickly and let go, you will bounce back to where you started the drag. We have been using the technique below since Qt 5.6. Is this even the correct way we should be working around the screen orientation, or is there a better way in Qt to indicate that the screen is inverted?

import QtQuick 2.15
import QtQuick.Window 2.15
import QtQuick.Layouts 1.15

Window {
    width: 640
    height: 480
    visible: true
    title: qsTr("Hello World")

    Item {
        anchors.fill: parent
        rotation: 180

        Flickable {
            anchors.fill: parent
            contentHeight: column.height

            ColumnLayout {
                id: column
                width: parent.width

                Repeater {
                    model: 255
                    delegate: Rectangle {
                        Layout.fillWidth: true
                        Layout.preferredHeight: 50
                        color: "gray"
                        width: parent.width

                        Text {
                            anchors.centerIn: parent
                            text: index
                        }
                    }
                }
            }
        }
    }
}

Attachments

Issue Links
- relates to QTBUG-104471 tst_QQuickListView2::tapDelegateDuringFlicking fails on Android - Reported
https://bugreports.qt.io/browse/QTBUG-99639?gerritReviewStatus=All
reading notes on Making Java Groovy

- Chapter 1, Why add Groovy to Java?
- Chapter 2, Groovy by example
- Chapter 3, Code-level integration
- Chapter 4, Using Groovy features in Java
- Chapter 5, Build processes
- Chapter 6, Testing Groovy and Java projects
- Chapter 7, The Spring framework
- Chapter 8, Database access
- Chapter 9, RESTful web services
- Chapter 10, Building and testing web applications

Chapter 1, Why add Groovy to Java?

(no notes)

Chapter 2, Groovy by example

Hello, Groovy

two additional differences in syntax between groovy and java:
- Semicolons are optional.
- Parentheses are often optional.

Accessing Google Chart Tools using

In a groovy script you don't actually have to declare any types at all. If you declare a type the variable becomes local to the script. If not, it becomes part of the "binding"

String base = '?'

groovy is optionally typed, you can specify a type or you can use the def keyword if you don't know or care. "If you think of a type, type it"

def params = [cht:'p3',chs:'250x100',chd:'t:60,40',chl:'Hello|World']

in groovy you create a map with [], and each entry consists of keys and values separated by :

// init a map
Map somemap = [:]

the keys are assumed to be strings by default. the values can be anything. by default params is an instance of `java.util.LinkedHashMap`

single-quoted strings are instances of java.lang.String; double-quoted strings are "interpolated" strings (known as GStrings)

params.collect { k,v -> "$k=$v" }

collect method takes a closure as an argument, applies the closure to each element of the collection, and returns a new collection containing the results.

closure is a block of code, delimited by {}, which can be treated as an object. if the closure takes one argument, the argument represents a Map.Entry; with two arguments, the first is the key and the second is the value for each entry. if the last argument to any method is a closure you can put the closure outside the parentheses.
// the above is actually: params.collect( { k,v -> "$k=$v" } )

the join method takes a single argument that's used as the separator when assembling the elements into a String

params.collect { k,v -> "$k=$v" }.join('&')

note that there are () on the join method above. in groovy, if you leave off the parentheses when calling a method with no arguments, the compiler assumes you are asking for the corresponding getter or setter method.

the groovy assert keyword takes a boolean expression as an argument. if the expression is true, nothing is returned; if not, you get the error printed to the console.

the groovy jdk adds

params.each { k,v -> assert qs.contains("$k=$v") }

a toURL() method to the String class, which converts an instance of java.lang.String into a java.net.URL

url.toURL().text

accessing properties in groovy automatically invokes the associated getter or setter method. above, .text is actually invoking the getText() method

In groovy every class has a metaclass. A metaclass is another class that manages the actual invocation process. if you invoke a method on a class that doesn't exist, the call is ultimately intercepted by a method in the metaclass called methodMissing. likewise, accessing a property that doesn't exist eventually calls propertyMissing in the metaclass. customizing the behavior of methodMissing and propertyMissing is the heart of groovy runtime metaprogramming.

Groovy automatically imports:
java.lang
java.util
java.io
java.net
groovy.lang
groovy.util
java.math.BigInteger
java.math.BigDecimal

the as keyword has several uses, one of which is to provide an alias for imported classes.

in groovy, if you don't specify an access modifier, attributes are assumed to be private, and methods are assumed to be public.

if you don't add a constructor, you get not only the default, but also a map-based constructor that allows you to set any combination of attribute values by supplying them as key-value pairs.
[stadium.name, stadium.city, stadium.state].collect { URLEncoder.encode(it, 'UTF-8') }.join(',')

if you use a closure without specifying a dummy parameter, each element of the list is assigned to a variable called it. the last expression in a closure is returned automatically.

parse xml

def response = new XmlSlurper().parse(url)
def homeName = response.result[0].boxscore.@home_name

groovy has two classes for parsing XML. One is XmlParser and the other is XmlSlurper. both convert XML into a DOM tree. from a practical point of view the slurper is more efficient and takes less memory. whether you use an XmlParser or an XmlSlurper, extracting data from XML means just walking the DOM tree. (slurping JSON is just as easy) Dots . traverse from parent elements to children, and @ signs represent attribute values.

work with regular expressions

def pitchers = boxscore.pitching.pitchers
pitchers.each { p ->
    if (p.@note && p.@note =~ /W|L|S/) {
        println " ${p.@name} ${p.@note}"
    }
}

the =~ operator in groovy returns an instance of java.util.regex.Matcher

for parsing HTML, use a third-party library to do it, like the NekoHTML parser

generating XML

the standard groovy library includes a class groovy.xml.MarkupBuilder

MarkupBuilder builder = new MarkupBuilder()
builder.games {
    results.each { g ->
        game(
            outcome:"...",
            lat:g.stadium.latitude,
            lng:g.stadium.longitude
        )
    }
}

there is no game method in MarkupBuilder; the builder intercepts that method call and creates a node out of it.

query database

Sql db = Sql.newInstance(
    'jdbc:mysql://localhost:3306/baseball',
    '...username...',
    '...password...',
    'com.mysql.jdbc.Driver'
)

The Sql class has a static newInstance method, whose arguments are the JDBC URL, username, password, and the driver class.

db.execute "drop table if exists stadium;"
db.execute '''
    create table stadium(
        ...
    );
'''

the three single quotes ''' represent a multiline string.
three double quotes """ would be a multiline GString

stadium.each { s ->
    db.execute """
        insert into stadium (name, ...)
        values (${s.name}, ...);
    """
}

use the ${...} notation to substitute values.

assert db.rows('select * from stadium').size() == stadium.size()

db.eachRow('select latitude, longitude from stadium') { row ->
    assert row.latitude ...
    assert row.longitude ...
}

groovlets

a groovlet is a script that is executed by a class called groovy.servlet.GroovyServlet. groovlets executed this way are deployed as source code rather than as compiled classes under WEB-INF. more about groovlets in chapter 10

Chapter 3, Code-level integration

Groovy and Java together at runtime

at runtime, compiled groovy and compiled java both result in bytecodes for the JVM. to execute code that combines them, all that's necessary is to add a single JAR file to the system. compiling and testing your code requires the Groovy compiler and libraries, but at runtime all you need is one JAR.

> groovyc hello_world.groovy
> java -cp .:$GROOVY_HOME/embeddable/groovy-all-2.1.5.jar hello_world

using JSR223, the Scripting for the Java Platform API

built into Java SE 6 and above, the API for JSR223, Scripting for the Java Platform, is a standard mechanism you can use to call scripts written in other languages. JSR223 allows you to call Groovy scripts using purely Java classes

public class ExecuteGroovyFromJSR223 {
    public static void main(String[] args) {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("groovy");
        try {
            engine.eval("println 'Hello, Groovy!'");
            engine.eval(new FileReader("src/hello_world.groovy"));
        } catch (ScriptException e) {
            e.printStackTrace();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }
}

supplying parameters to a groovy script

a binding is a collection of variables at a scope that makes them visible inside a script.
String address = [street,city,state].collect { URLEncoder.encode(it, 'UTF-8') }.join(',')

street, city, and state are not declared in the script; this adds them to the binding, making them available to the caller. (something like global variables)

// set binding variables
engine.put("street", "...")
engine.put("city", "...")
engine.put("state", "...")

(Groovy Eval and GroovyShell class sections are skipped)

so use the ScriptEngine class from Java, or the Eval and GroovyShell classes from Groovy, along with a Binding if necessary, to call Groovy scripts from Java. and this is the hard way

Calling Groovy from Java the easy way

the easy way: put the Groovy code in a class, compile it as usual, and then instantiate it and invoke methods as though it was Java.

Calling Java from Groovy

String address = [street,city,state].collect { URLEncoder.encode(it, 'UTF-8') }.join(',')

here, URLEncoder is Java library code. whenever you mix Java and Groovy, compile everything with groovyc; let groovyc handle all the cross-compiler issues. any compiler flags you would normally send to javac work just fine in groovyc as well.

Chapter 4, Using Groovy features in Java

def slayers = [buffy, faith]
assert ['Buffy', 'Faith'] == slayers*.name

the spread-dot operator extracts the name property from each instance and returns a list of the results (same as collect does)

in groovy all operators are represented by methods, so you can overload any operator by implementing the appropriate method in your groovy class.

public class Department {
    // overriding +
    public Department plus(Employee e) {
        hire(e);
        return this;
    }
    // overriding -
    public Department minus(Employee e) {
        layOff(e);
        return this;
    }
}

check the docs for the list of operators that can be overridden in groovy.

the groovy jdk

every groovy class contains a metaclass. the metaclass contains methods that come into play if a method or property that doesn't exist is accessed through an instance.
for example, groovy adds many methods to the java.util.Collection interface, including collect, count, find, findAll, leftShift, max, min, sort, sum. check the docs for more of the groovy jdk API

AST transformations

groovy 1.6 introduced Abstract Syntax Tree (AST) transformations.

@Delegate

public class Camera {
    public String takePicture() {
        return "taking picture";
    }
}

public class Phone {
    public String dial(String number) {
        return "dialing " + number;
    }
}

class SmartPhone {
    @Delegate Camera camera = new Camera()
    @Delegate Phone phone = new Phone()
}

@Immutable

making a class support immutability requires that
- all mutable methods (setters) must be removed
- the class should be marked final
- any contained fields should be private and final
- mutable components like arrays should defensively be copied on the way in (through constructors) and the way out (through getters)
- equals, hashCode, toString should all be implemented through fields

the @Immutable AST transformation does everything for you. the @Immutable transformation can only be applied to Groovy classes, but those classes can then be used in Java applications.

@Immutable
class ImmutablePoint {
    double x
    double y
    String toString() { "($x,$y)" }
}

it allows the properties to be set through a constructor, but once set the properties can no longer be modified. the annotation has its limitations: you can only apply it to classes that contain primitives or certain library classes, like String or Date. It also works on classes that contain properties that are also immutable.

@Singleton

@Singleton
class ImmutablePointFactory {
    ImmutablePoint newImmutablePoint(xval, yval) {
        return new ImmutablePoint(x:xval, y:yval)
    }
}

the class now contains a static property called instance which contains the instance of the class.
ImmutablePoint p = ImmutablePointFactory.instance.newImmutablePoint(3,4)

Working with XML

(already covered above)

Working with JSON data

String url = '[nerdy]'
String jsonTxt = url.toURL().text
def json = new JsonSlurper().parseText(jsonTxt)
def joke = json?.value?.joke
println joke

the JsonBuilder class generates JSON strings using the same mechanism as XmlSlurper

Chapter 5, Build processes
- The build challenge
- The Java approach, part 1: Ant
- Making Ant Groovy
- The Java approach, part 2: Maven
- Grapes and @Grab
- The Gradle build system

Chapter 6, Testing Groovy and Java projects
- Working with JUnit
- Testing scripts written in Groovy
- Testing classes in isolation
- The future of testing: Spock

Chapter 7, The Spring framework
- A Spring application
- Refreshable beans
- Spring AOP with Groovy beans
- Inline scripted beans
- Groovy with JavaConfig
- Building beans with the Grails BeanBuilder

Chapter 8, Database access
- The Java approach, part 1: JDBC
- The Groovy approach, part 1: groovy.sql.Sql
- The Java approach, part 2: Hibernate and JPA
- The Groovy approach, part 2: Groovy and GORM
- Groovy and NoSQL databases

Chapter 9, RESTful web services
- The REST architecture
- The Java approach: JAX-RS
- Implementing JAX-RS with Groovy
- RESTful Clients
- Hypermedia
- Other Groovy approaches

Chapter 10, Building and testing web applications
- Groovy servlets and ServletCategory
- Easy server-side development with groovlets
- Unit- and integration-testing web components
- Grails: the Groovy "killer app"
When automating a task, it is sensible to test it first manually. It would be helpful, though, if any data going to stderr was immediately recognizable as such, and distinguishable from the data going to stdout, and to have all the output together so it is obvious what the sequence of events is. One last touch that would be nice is if, at program exit, it printed its return code. All of these things would aid in automating.

Yes, I can echo the return code when a program finishes, and yes, I can redirect stdout and stderr; what I'd really like is some shell, script, or easy-to-use redirector that shows stdout in black, shows stderr interleaved with it in red, and prints the exit code at the end. Is there such a beast? [If it matters, I'm using Bash 3.2 on Mac OS X.]

Update: Sorry it has been months since I've looked at this. I've come up with a simple test script:

#!/usr/bin/env python
import sys
print "this is stdout"
print >> sys.stderr, "this is stderr"
print "this is stdout again"

In my testing (and probably due to the way things are buffered), rse and hilite display everything from stdout and then everything from stderr. The fifo method gets the order right but appears to colourize everything following the stderr line. ind complained about my stdout and stderr lines, and then put the output from stderr last. Most of these solutions are workable, as it is not atypical for only the last output to go to stderr, but still, it'd be nice to have something that worked slightly better.
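The kind of wrapper being asked about can be sketched in a few lines of Python. This is my own sketch, not one of the tools tried above: it runs a command, wraps stderr lines in ANSI red, and prints the exit code at the end. Like those tools, the interleaving of the two streams is only best-effort.

```python
#!/usr/bin/env python3
"""Run a command, print stderr lines in red, and report the exit code."""
import subprocess
import sys
import threading

RED, RESET = "\033[31m", "\033[0m"

def run(cmd):
    # stdout and stderr are pumped by separate threads, so ordering
    # between the two streams is best-effort, as with rse/hilite/ind.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, text=True)

    def pump(stream, color):
        for line in stream:
            sys.stdout.write(color + line + (RESET if color else ""))

    threads = [threading.Thread(target=pump, args=(proc.stdout, "")),
               threading.Thread(target=pump, args=(proc.stderr, RED))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    rc = proc.wait()
    print("exit code:", rc)
    return rc

if __name__ == "__main__" and len(sys.argv) > 1:
    run(sys.argv[1:])
```

Saved as, say, redwrap, it would be used as `redwrap some-command args...`.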
0.9 Changelog

0.9.10
Released: July 22, 2015

engine

- Added the string value "none" to those accepted by the Pool.reset_on_return parameter as a synonym for None, so that string values can be used for all settings, allowing utilities like engine_from_config() to be usable without issue.
- Fixed issue where a MetaData object that used a naming convention would not properly work with pickle. The attribute was skipped, leading to inconsistencies and failures if the unpickled MetaData object were used to base additional tables from.

postgresql

- Fixed the BIT type on Py3K which was not using the ord() function correctly. Pull request courtesy David Marin.

sqlite

- Fixed bug in SQLite dialect where reflection of UNIQUE constraints that included non-alphabetic characters in the names, like dots or spaces, would not be reflected with their name.

tests

- Fixed an import that prevented "pypy setup.py test" from working correctly.

misc

- Fixed bug where when using the extended attribute instrumentation system, the correct exception would not be raised when class_mapper() were called with an invalid input that also happened to not be weak referencable, such as an integer.
- Fixed regression from 0.9.9 where the as_declarative() symbol was removed from the sqlalchemy.ext.declarative namespace.

0.9.9
Released: March 10, 2015

orm

- Fixed bug where internal assertion would fail in the case where an after_rollback() handler for a Session incorrectly adds state to that Session within the handler, and the task to warn and remove this state (established by #2389) attempts to proceed.
- Fixed bug where TypeError raised when Query.join() called with unknown kw arguments would raise its own TypeError due to broken formatting.
  Pull request courtesy Malthe Borch.
- Fixed bug regarding expression mutations which could express itself as a "Could not locate column" error when using Query to select from multiple, anonymous column entities when querying against SQLite, as a side effect of the "join rewriting" feature used by the SQLite dialect.
- Fixed bug where the ON clause for Query.join() and Query.outerjoin() to a single-inheritance subclass using of_type() would not render the "single table criteria" in the ON clause if the from_joinpoint=True flag were set.

examples

- Fixed a bug in the examples/generic_associations/discriminator_on_association.py example, where the subclasses of AddressAssociation were not being mapped as "single table inheritance", leading to problems when trying to use the mappings further.

engine

- Added new user-space accessors for viewing transaction isolation levels: Connection.get_isolation_level(), Connection.default_isolation_level.

sql

- Added the native_enum flag to the __repr__() output of Enum, which is mostly important when using it with Alembic autogenerate. Pull request courtesy Dimitris Theodorou.

schema

postgresql

- Added support for the CONCURRENTLY keyword with PostgreSQL indexes, established using postgresql_concurrently. Pull request courtesy Iuri de Silvio. See also Indexes with CONCURRENTLY.

mysql

- The gaerdbms dialect is no longer necessary, and emits a deprecation warning. Google now recommends using the MySQLdb dialect directly.
- Added a version check to the MySQLdb dialect surrounding the check for 'utf8_bin' collation, as this fails on MySQL server < 5.0.

sqlite

- Added support for partial indexes (e.g. with a WHERE clause) on SQLite. Pull request courtesy Kai Groner.
- Added a new SQLite backend for the SQLCipher backend. This backend provides for encrypted SQLite databases using the pysqlcipher Python driver, which is very similar to the pysqlite driver.
misc

- Fixed bug where the association proxy list class would not interpret slices correctly under Py3K. Pull request courtesy Gilles Dartiguelongue.

0.9.8
Released: October 13, 2014

orm

- Fixed warning that would emit when a complex self-referential primaryjoin contained functions, while at the same time remote_side was specified; the warning would suggest setting "remote side". It now only emits if remote_side isn't present.

orm declarative

- Fixed "'NoneType' object has no attribute 'concrete'" error when using AbstractConcreteBase in conjunction with a subclass that declares __abstract__.

engine

sql

- Fixed bug where a fair number of SQL elements within the sql package would fail to __repr__() successfully, due to a missing description attribute that would then invoke a recursion overflow when an internal AttributeError would then re-invoke __repr__().
- An adjustment to table/index reflection such that if an index reports a column that isn't found to be present in the table, a warning is emitted and the column is skipped. This can occur for some special system column situations as has been observed with Oracle.
- Fixed bug in CTE where the literal_binds compiler argument would not always be correctly propagated when one CTE referred to another aliased CTE in a statement.
- Fixed 0.9.7 regression caused by #3067 in conjunction with a mis-named unit test such that so-called "schema" types like Boolean and Enum could no longer be pickled.

postgresql

- Support is added for "sane multi row count" with the pg8000 driver, which applies mostly to when using versioning with the ORM. The feature is version-detected based on pg8000 1.9.14 or greater in use. Pull request courtesy Tony Locke.
- Fixed bug in array object where comparison to a plain Python list would fail to use the correct array constructor. Pull request courtesy Andrew.

mysql

- Unicode SQL is now passed for MySQLconnector version 2.0 and above; for Py2k and MySQL < 2.0, strings are encoded.

sqlite
mssql

- Fixed the version string detection in the pymssql dialect to work with Microsoft SQL Azure, which changes the word "SQL Server" to "SQL Azure".

oracle

- Fixed long-standing bug in Oracle dialect where bound parameter names that started with numbers would not be quoted, as Oracle doesn't like numerics in bound parameter names.

misc

- Fixed bug in ordering list where the order of items would be thrown off during a collection replace event, if the reorder_on_append flag were set to True. The fix ensures that the ordering list only impacts the list that is explicitly associated with the object.
- Fixed bug where MutableDict failed to implement the update() dictionary method, thus not catching changes. Pull request courtesy Matt Chisholm.
- Fixed bug where a custom subclass of MutableDict would not show up in a "coerce" operation, and would instead return a plain MutableDict. Pull request courtesy Matt Chisholm.

0.9.7
Released: July 22, 2014

orm

- The "evaluator" for query.update()/delete() won't work with multi-table updates, and needs to be set to synchronize_session=False or synchronize_session='fetch'; a warning is now emitted. In 1.0 this will be promoted to a full exception.
- Fixed bug where items that were persisted, deleted, or had a primary key change within a savepoint block would not participate in being restored to their former state (not in session, in session, previous PK) after the outer transaction were rolled back.
- Fixed bug in subquery eager loading in conjunction with with_polymorphic(); the targeting of entities and columns in the subquery load has been made more accurate with respect to this type of entity and others.
- Fixed bug involving dynamic attributes, that was again a regression of #3060 from version 0.9.5. A self-referential relationship with lazy='dynamic' would raise a TypeError within a flush operation.
engine

- Added new event ConnectionEvents.handle_error(), a more fully featured and comprehensive replacement for ConnectionEvents.dbapi_error().

sql

- Fixed bug in Enum and other SchemaType subclasses where direct association of the type with a MetaData would lead to a hang when events (like create events) were emitted on the MetaData. This change is also backported to: 0.8.7
- Fixed a bug within the custom operator plus TypeEngine.with_variant() system, whereby using a TypeDecorator in conjunction with variant would fail with an MRO error when a comparison operator was used. This change is also backported to: 0.8.7
- Fixed bug in common table expressions whereby positional bound parameters could be expressed in the wrong final order when CTEs were nested in certain ways.
- Fixed bug where multi-valued Insert construct would fail to check subsequent values entries beyond the first one given for literal SQL expressions.
- Added a "str()" step to the dialect_kwargs iteration for Python version < 2.6.5, working around the "no unicode keyword arg" bug as these args are passed along as keyword args within some reflection processes.
- The TypeEngine.with_variant() method will now accept a type class as an argument, which is internally converted to an instance, using the same convention long established by other constructs such as Column.

postgresql

- Added kw argument postgresql_regconfig to the ColumnOperators.match() operator, which allows the "reg config" argument to be specified to the to_tsquery() function emitted. Pull request courtesy Jonathan Vanasco.
- Added support for PostgreSQL JSONB via JSONB. Pull request courtesy Damian Dimmich.
- Fixed bug introduced in 0.9.5 by new pg8000 isolation level feature where engine-level isolation level parameter would raise an error on connect.

mysql

sqlite

mssql

- Enabled "multivalues insert" for SQL Server 2008. Pull request courtesy Albert Cervin.
  Also expanded the checks for "IDENTITY INSERT" mode to include when the identity key is present in the VALUES clause of the statement. This change is also backported to: 0.8.7

oracle

- Fixed bug in oracle dialect test suite where in one test, 'username' was assumed to be in the database URL, even though this might not be the case.

tests

- Fixed bug where "python setup.py test" wasn't calling into distutils appropriately, and errors would be emitted at the end of the test suite.

misc

- Fixed bug when the declarative __abstract__ flag was not being distinguished for when it was actually the value False. The __abstract__ flag needs to actually evaluate to a True value at the level being tested.

0.9.6
Released: June 23, 2014

orm

0.9.5
Released: June 23, 2014

orm

- Fixed bug in SQLite join rewriting where anonymized column names due to repeats would not correctly be rewritten in subqueries. This would affect SELECT queries with any kind of subquery + join.

examples

- Added a new example illustrating materialized paths, using the latest relationship features. Example courtesy Jack Zhou.

engine

sql

- Fixed bug in INSERT..FROM SELECT construct where selecting from a UNION would wrap the union in an anonymous (e.g. unlabeled) subquery. This change is also backported to: 0.8.7
- Fixed bug where Table.update() and Table.delete() would produce an empty WHERE clause when an empty and_() or or_() or other blank expression were applied.
- Fixed bug where the Operators.__and__(), Operators.__or__() and Operators.__invert__() operator overload methods could not be overridden within a custom Comparator implementation.
- Fixed bug in new DialectKWArgs.argument_for() method where adding an argument for a construct not previously included for any special arguments would fail.
- Fixed regression introduced in 0.9 where new "ORDER BY <labelname>" feature from #1068 would not apply quoting rules to the label name as rendered in the ORDER BY.
- Restored the import for Function to the sqlalchemy.sql.expression import namespace, which was removed at the beginning of 0.9.

postgresql

- Added support for AUTOCOMMIT isolation level when using the pg8000 DBAPI. Pull request courtesy Tony Locke.
- Added a new "disconnect" message "connection has been closed unexpectedly". This appears to be related to newer versions of SSL. Pull request courtesy Antti Haapala.

mysql

- Added support for reflecting tables where an index includes KEY_BLOCK_SIZE using an equal sign. Pull request courtesy Sean McGivern. This change is also backported to: 0.8.7

mssql

- Revised the query used to determine the current default schema name to use the database_principal_id() function in conjunction with the sys.database_principals view so that we can determine the default schema independently of the type of login in progress (e.g., SQL Server, Windows, etc).

tests

- Corrected for some deprecation warnings involving the imp module and Python 3.3 or greater, when running tests. Pull request courtesy Matt Chisholm.

misc

- Fixed bug in mutable extension where MutableDict did not report change events for the setdefault() dictionary operation. This change is also backported to: 0.8.7
- Fixed bug where MutableDict.setdefault() didn't return the existing or new value (this bug was not released in any 0.8 version). Pull request courtesy Thomas Hervé. This change is also backported to: 0.8.7
- In public test suite, changed to use of String(40) from less-supported Text in StringTest.test_literal_backslashes. Pullreq courtesy Jan.

0.9.4
Released: March 28, 2014

general

- Fixed some test/feature failures occurring in Python 3.4, in particular the logic used to wrap "column default" callables wouldn't work properly for Python built-ins.

orm

- Added new parameter ... cascaded deletes of this nature. See also #2403 for background on the original change.
- A warning is emitted if the MapperEvents.before_configured() or MapperEvents.after_configured() events are applied to a specific mapper or mapped class, as the events are only invoked for the Mapper target at the general level.
- Added a new keyword argument once=True to listen() and listens_for(). This is a convenience feature which will wrap the given listener such that it is only invoked once.
- Fixed ORM bug where changing the primary key of an object, then marking it for DELETE, would fail to target the correct row for DELETE. This change is also backported to: 0.8.6
- Fixed regression from 0.8.3 as a result of #2818 where Query.exists() wouldn't work on a query that only had a Query.select_from() entry but no other entities. This change is also backported to: 0.8.6
- Removed stale names from sqlalchemy.orm.interfaces.__all__ and refreshed with current names, so that an import * from this module again works. This change is also backported to: 0.8.6
- Added support for the not-quite-yet-documented insert=True flag for listen() to work with mapper / instance events.
- Fixed regression from 0.8 where using an option like lazyload() with the "wildcard" expression would fail.

examples

- Fixed bug in the versioned_history example where column-level INSERT defaults would prevent history values of NULL from being written.

engine

sql

- Added support for literal rendering of boolean values, e.g. "true" / "false" or "1" / "0".
- Fixed an 0.9 regression where a Table that failed to reflect correctly wouldn't be removed from the parent MetaData, even though in an invalid state. Pullreq courtesy Roman Podoliaka.
- The MetaData.naming_convention feature will now also apply to CheckConstraint objects that are associated directly with a Column instead of just on the Table.
- Adjusted the logic which applies names to the .c collection when a no-name BindParameter is received, e.g. via text().
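The naming-convention behavior referenced in these entries can be sketched as follows (table, column, and constraint names here are invented for illustration); the convention token %(constraint_name)s picks up the name given on the constraint, and the generated name appears when the DDL is compiled:

```python
from sqlalchemy import CheckConstraint, Column, Integer, MetaData, Table
from sqlalchemy.schema import CreateTable

# convention for CHECK constraints: ck_<table>_<given name>
metadata = MetaData(naming_convention={
    "ck": "ck_%(table_name)s_%(constraint_name)s",
})

# the CHECK constraint is attached directly to the Column, not the Table
t = Table(
    "user", metadata,
    Column("id", Integer, primary_key=True),
    Column("age", Integer, CheckConstraint("age > 0", name="age_positive")),
)

ddl = str(CreateTable(t))
print(ddl)  # the emitted constraint is named ck_user_age_positive
```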
- Fixed issue in new TextClause.columns() method where the ordering of columns given positionally would not be preserved. This could have potential impact in positional situations such as applying the resulting TextAsFrom object to a union.

postgresql

- Enabled "sane multi-row count" checking for the psycopg2 DBAPI, as this seems to be supported as of psycopg2 2.0.9. This change is also backported to: 0.8.6

mysql

oracle

- coercion is unconditional for all string values, despite performance concerns. Pull request courtesy Christoph Zwerschke.
- Added new datatype ...

tests

- Fixed a few errant u'' strings that would prevent tests from passing in Py3.2. Patch courtesy Arfrever Frehtes Taifersar Arahesis.

misc

- Fixed bug in mutable extension as well as flag_modified() where the change event would not be propagated if the attribute had been reassigned to itself. This change is also backported to: 0.8.6
- Added support to automap for the case where a relationship should not be created between two classes that are in a joined inheritance relationship, for those foreign keys that link the subclass back to the superclass.
- Fixed small issue in SingletonThreadPool where the current connection to be returned might get inadvertently cleaned out during the "cleanup" process. Patch courtesy jd23.
- Fixed bug in association proxy where assigning an empty slice (e.g. x[:] = [...]) would fail on Py3k.

0.9.3
Released: February 19, 2014

orm

- Added new MapperEvents.before_configured() event which allows an event at the start of configure_mappers(), as well as a __declare_first__() hook within declarative to complement __declare_last__().
- Fixed bug where Query.get() would fail to consistently raise the InvalidRequestError that invokes when called on a query with existing criterion, when the given identity is already present in the identity map. This change is also backported to: 0.8
- Improved the initialization logic of composite attributes such that calling MyClass.attribute will not require that the configure mappers step has occurred; e.g. it will just work without throwing any error.

orm declarative

examples

- Added optional "changed" column to the versioned rows example, as well as support for when the versioned Table has an explicit Table.schema argument. Pull request courtesy jplaverdure.

engine

- Fixed a critical regression caused by #2880 where the newly concurrent ability to return connections from the pool means that the "first_connect" event is now no longer synchronized either, thus leading to dialect mis-configurations under even minimal concurrency situations. This change is also backported to: 0.8.5

sql

- Fixed bug where calling Insert.values() with an empty list or tuple would raise an IndexError. It now produces an empty insert construct as would be the case with an empty dictionary. This change is also backported to: 0.8.5
- Fixed bug where ColumnOperators.in_() would go into an endless loop if erroneously passed a column expression whose comparator included the __getitem__() method, such as a column that uses the ARRAY type. This change is also backported to: 0.8.5
- Fixed regression in new "naming convention" feature where conventions would fail if the referred table in a foreign key contained a schema name. Pull request courtesy Thomas Farvour.
- Fixed bug where so-called "literal render" of bindparam() constructs would fail if the bind were constructed with a callable, rather than a direct value. This prevented ORM expressions from being rendered with the "literal_binds" compiler flag.

postgresql

- Added the TypeEngine.python_type convenience accessor onto the ARRAY type. Pull request courtesy Alexey Terentev.
- Added an additional message to psycopg2 disconnect detection, "could not send data to server", which complements the existing "could not receive data from server" and has been observed by users.
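The "literal_binds" compiler flag that several of these entries mention can be exercised like this (a sketch using the SQLAlchemy 1.4+ select() calling style; table and column names are invented):

```python
from sqlalchemy import column, select, table

users = table("users", column("id"), column("name"))

# a normal compile renders a bound parameter placeholder
stmt = select(users).where(users.c.id == 7)
print(stmt.compile())  # ... WHERE users.id = :id_1

# literal_binds renders the bound value inline instead
print(stmt.compile(compile_kwargs={"literal_binds": True}))
# ... WHERE users.id = 7
```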
  This change is also backported to: 0.8.5
- Added support for the PARTITION BY and PARTITIONS MySQL table keywords, specified as mysql_partition_by='value' and mysql_partitions='value' to Table. Pull request courtesy Marcus McCurdy. This change is also backported to: 0.8.5
- Fixed bug in cymysql dialect where a version string such as '33a-MariaDB' would fail to parse properly. Pull request courtesy Matt Schmidt.

sqlite

- The SQLite dialect will now skip unsupported arguments when reflecting types; such as if it encounters a string like INTEGER(5), the INTEGER type will be instantiated without the "5" being included, based on detecting a TypeError on the first attempt.
- Fixed bug where the AutomapBase class of the new automap extension would fail if classes were pre-arranged in single or potentially joined inheritance patterns. The repaired joined inheritance issue could also potentially apply when using DeferredReflection as well.

0.9.2
Released: February 2, 2014

orm

- Added a new parameter Operators.op.is_comparison. This flag allows a custom op from Operators.op() to be considered as a "comparison" operator, thus usable for custom relationship.primaryjoin conditions.
- Fixed error message when an iterator object is passed to class_mapper() or similar, where the error would fail to render on string formatting. Pullreq courtesy Kyle Stark. This change is also backported to: 0.8.5

examples

- Added a tweak to the "history_meta" example where the check for "history" on a relationship-bound attribute will now no longer emit any SQL if the relationship is unloaded.

engine

- Added a new pool event PoolEvents.invalidate(). Called when a DBAPI connection is to be marked as "invalidated" and discarded from the pool.
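The mysql_partition_by / mysql_partitions table keywords mentioned in the 0.9.4 mysql notes above can be sketched like this (table and column names are invented; the DDL is compiled against the MySQL dialect without needing a live database):

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import mysql
from sqlalchemy.schema import CreateTable

metadata = MetaData()

# partition options are passed as dialect-specific Table keywords
t = Table(
    "metrics", metadata,
    Column("id", Integer, primary_key=True),
    mysql_partition_by="HASH(id)",
    mysql_partitions="4",
)

ddl = str(CreateTable(t).compile(dialect=mysql.dialect()))
print(ddl)  # CREATE TABLE ... PARTITION BY HASH(id) PARTITIONS 4
```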
sql¶ Added MetaData.reflect.**dialect_kwargs to support dialect-level reflection options for all Table objects reflected.

Added a new feature which allows automated naming conventions to be applied to constraints and indexes.

Fixed bug whereby binary type would fail in some cases if used with a “test” dialect, such as a DefaultDialect or other dialect with no DBAPI.

Fixed bug where “literal binds” wouldn’t work with a bound parameter that’s a binary type. A similar, but different, issue is fixed in 0.8.

A UniqueConstraint created inline with a Table that has no columns within it will be skipped. Pullreq courtesy Derek Harland.

schema¶ Restored sqlalchemy.schema.SchemaVisitor to the .schema module. Pullreq courtesy Sean Dague.

mysql¶ Some missing methods added to the cymysql dialect, including _get_server_version_info() and _detect_charset(). Pullreq courtesy Hajime Nakagami. This change is also backported to: 0.8.5

Fixed an issue of the MySQL ENUM being downcast into a generic Enum, and that of SQLite date types being cast into generic date types. The adapt() method needed to become more specific here to counteract the removal of a “catch all” **kwargs collection on the base TypeEngine class that was removed in 0.9.

sqlite¶ Fixed bug whereby SQLite compiler failed to propagate compiler arguments such as “literal binds” into a CAST expression.

mssql¶ Added an option mssql_clustered to the UniqueConstraint and PrimaryKeyConstraint constructs; on SQL Server, this adds the CLUSTERED keyword to the constraint construct within DDL. Pullreq courtesy Derek Harland.

0.9.1¶ Released: January 5, 2014

orm¶ A new, experimental extension sqlalchemy.ext.automap is added. This extension expands upon the functionality of Declarative as well as the DeferredReflection class to produce a base class which automatically generates mapped classes and relationships based on table metadata.
Fixed bug where using new Session.info attribute would fail if the .info argument were only passed to the sessionmaker creation call but not to the object itself. Courtesy Robin Schoonover.

Fixed regression where we don’t check the given name against the correct string class when setting up a backref based on a name, therefore causing the error “too many values to unpack”. This was related to the Py3k conversion.

schema¶ The Table.extend_existing and Table.autoload_replace parameters are now available on the MetaData.reflect() method.

0.9.0¶ Released: December 30, 2013

orm¶ Added new argument include_backrefs=True to the validates() function; when set to False, a validation event will not be triggered if the event was initiated as a backref to an attribute operation from the other side.

A new API for specifying the FOR UPDATE clause of a SELECT is added with the new Query.with_for_update() method, to complement the new GenerativeSelect.with_for_update() method. Pull request courtesy Mario Lassnig. This change is also backported to: 0.8.5

Fixed bug when using joined table inheritance from a table to a select/alias on the base, where the PK columns were also not same named; the persistence system would fail to copy primary key values from the base table to the inherited table upon INSERT. This change is also backported to: 0.8.5

composite() will raise an informative error message when the columns/attribute (names) passed don’t resolve to a Column or mapped attribute (such as an erroneous tuple); previously raised an unbound local. This change is also backported to: 0.8.5

Fixed a regression introduced by #2818 where the EXISTS query being generated would produce a “columns being replaced” warning for a statement with two same-named columns, as the internal SELECT wouldn’t have use_labels set.
This change is also backported to: 0.8.4

Added support for the Python 3 method list.clear() within the ORM collection instrumentation system; pull request courtesy Eduardo Schettino.

Added support for new Session.info attribute to scoped_session.

Fixed bug where usage of new Bundle object would cause the Query.column_descriptions attribute to fail.

orm declarative¶ Declarative does an extra check to detect if the same Column is mapped multiple times under different properties (which typically should be a synonym() instead) or if two or more Column objects are given the same name, raising a warning if this condition is detected.

Fixed bug where in Py2K a unicode literal would not be accepted as the string name of a class or other argument within declarative using relationship().

examples¶ Fixed bug which prevented history_meta recipe from working with joined inheritance schemes more than one level deep.

engine¶ Fixed bug where SQL statement would be improperly ASCII-encoded when a pre-DBAPI StatementError were raised within Connection.execute(), causing encoding errors for non-ASCII statements. The stringification now remains within Python unicode thus avoiding encoding errors. This change is also backported to: 0.8.4

The create_engine() routine and the related make_url() function no longer considers the + sign as an encoded space within the username and password portions of the URL. See also The “password” portion of a create_engine() no longer considers the + sign as an encoded space

The RowProxy object is now sortable in Python as a regular tuple is; this is accomplished via ensuring tuple() conversion on both sides within the __eq__() method as well as the addition of a __lt__() method.
sql¶ The exception raised when a BindParameter is present in a compiled statement without a value now includes the key name of the bound parameter in the error message. This change is also backported to: 0.8.5

postgresql¶ Support for PostgreSQL JSON has been added, using the new JSON type. Huge thanks to Nathan Rice for implementing and testing this.

Added support for PostgreSQL TSVECTOR via the TSVECTOR type. Pull request courtesy Noufal Ibrahim.

Fixed bug where index reflection would mis-interpret indkey values when using the pypostgresql adapter, which returns these values as lists vs. psycopg2’s return type of string. This change is also backported to: 0.8.4

Now using psycopg2 UNICODEARRAY extension for handling unicode arrays with psycopg2 + normal “native unicode” mode, in the same way the UNICODE extension is used.

Fixed bug where values within an ENUM weren’t escaped for single quote signs. Note that this is backwards-incompatible for existing workarounds that manually escape the single quotes.

mssql¶ The “asdecimal” flag used with the Float type will now work with Firebird as well as the mssql+pyodbc dialects; previously the decimal conversion was not occurring. This change is also backported to: 0.8.5

Added “Net-Lib error during Connection reset by peer” message to the list of messages checked for “disconnect” within the pymssql dialect. Courtesy John Anderson. This change is also backported to: 0.8.5

oracle¶ Added ORA-02396 “maximum idle time” error code to list of “is disconnect” codes with cx_oracle. This change is also backported to: 0.8.4

Fixed bug where Oracle VARCHAR types given with no length (e.g. for a CAST or similar) would incorrectly render None CHAR or similar. This change is also backported to: 0.8.4

misc¶ The firebird dialect will quote identifiers which begin with an underscore. Courtesy Treeve Jelbert.
This change is also backported to: 0.8.5

Fixed bug in Firebird index reflection where the columns within the index were not sorted correctly; they are now sorted in order of RDB$FIELD_POSITION. This change is also backported to: 0.8.5

Error message when a string arg sent to relationship() which doesn’t resolve to a class or mapper has been corrected to work the same way as when a non-string arg is received, which indicates the name of the relationship which had the configurational error. This change is also backported to: 0.8.5

Fixed bug which prevented the serializer extension from working correctly with table or column names that contain non-ASCII characters. This change is also backported to: 0.8.4

The “informix” and “informixdb” dialects have been removed; the code is now available as a separate repository on Bitbucket. The IBM-DB project has provided production-level Informix support since the informixdb dialect was first added.

0.9.0b1¶ Released: October 26, 2013

general¶ The C extensions are ported to Python 3 and will build under any supported CPython 2 or 3 environment.

The codebase is now “in-place” for Python 2 and 3, the need to run 2to3 has been removed. Compatibility is now against Python 2.6 on forward. This change is also backported to: 0.8.3

orm¶ The association proxy now returns None when fetching a scalar attribute off of a scalar relationship, where the scalar relationship itself points to None, instead of raising an AttributeError.

Added new method AttributeState.load_history(), works like AttributeState.history but also fires loader callables.

Added a new load option load_only(). This allows a series of column names to be specified as loading “only” those attributes, deferring the rest.
The system of loader options has been entirely rearchitected to build upon a much more comprehensive base.

The version_id_generator parameter of Mapper can now be specified to rely upon server generated version identifiers, using triggers or other database-provided versioning features, or via an optional programmatic value, by setting version_id_generator=False. When using a server-generated version identifier, the ORM will use RETURNING when available to immediately load the new version value, else it will emit a second SELECT.

The eager_defaults flag of Mapper will now allow the newly generated default values to be fetched using an inline RETURNING clause, rather than a second SELECT statement, for backends that support RETURNING.

Added a new attribute Session.info to Session; this is a dictionary where applications can store arbitrary data local to a Session. The contents of Session.info can be also be initialized using the info argument of Session or sessionmaker.

Removal of event listeners is now implemented. The feature is provided via the remove() function.

The mechanism by which attribute events pass along an AttributeImpl as an “initiator” token has been changed; the object is now an event-specific object.

Fixed bug where using an annotation such as remote() or foreign() on a Column before association with a parent Table could produce issues related to the parent table not rendering within joins, due to the inherent copy operation performed by an annotation. This change is also backported to: 0.8.3

Fixed bug where Query.exists() failed to work correctly without any WHERE criterion. Courtesy Vladimir Magamedov. This change is also backported to: 0.8.3

Fixed a potential issue in an ordered sequence implementation used by the ORM to iterate mapper hierarchies; under the Jython interpreter this implementation wasn’t ordered, even though cPython and PyPy maintained ordering.
This change is also backported to: 0.8.3

Fixed bug in ORM-level event registration where the “raw” or “propagate” flags could potentially be mis-configured in some “unmapped base class” configurations. This change is also backported to: 0.8.3

Fixed bug whereby attribute history functions would fail when an object was moved from “persistent” to “pending” using the make_transient() function, for operations involving collection-based backrefs. This change is also backported to: 0.8.3

A warning is emitted when trying to flush an object of an inherited class where the polymorphic discriminator has been assigned to a value that is invalid for the class. This change is also backported to: 0.8.2

Fixed bug in polymorphic SQL generation where multiple joined-inheritance entities against the same base class joined to each other as well would not track columns on the base table independently of each other if the string of joins were more than two entities long. This change is also backported to: 0.8.2

Fixed bug where sending a composite attribute into Query.order_by() would produce a parenthesized expression not accepted by some databases. This change is also backported to: 0.8.2

Fixed the interaction between composite attributes and the aliased() function. Previously, composite attributes wouldn’t work correctly in comparison operations when aliasing was applied. This change is also backported to: 0.8.2

Fixed bug where MutableDict didn’t report a change event when clear() was called. This change is also backported to: 0.8.2

get_history() when used with a scalar column-mapped attribute will now honor the “passive” flag passed to it; as this defaults to PASSIVE_OFF, the function will by default query the database if the value is not present. This is a behavioral change vs. 0.8.
Cls.scalar.has() with no arguments, when Cls.scalar is a column-based value - this returns whether or not Cls.associated has any rows present, regardless of whether or not Cls.associated.scalar is NULL or not.

Added a convenience class decorator as_declarative(), a wrapper for declarative_base() which allows an existing base class to be applied using a nifty class-decorated approach. This change is also backported to: 0.8.3

ORM descriptors such as hybrid properties can now be referenced by name in a string argument used with order_by, primaryjoin, or similar in relationship(), in addition to column-bound attributes. This change is also backported to: 0.8.2

Added “autoincrement=False” to the history table created in the versioning example, as this table shouldn’t have autoinc on it in any case, courtesy Patrick Schmid. This change is also backported to: 0.8.3

Fixed an issue with the “versioning” recipe whereby a many-to-one reference could produce a meaningless version for the target, even though it was not changed, when backrefs were present. Patch courtesy Matt Chisholm. This change is also backported to: 0.8.2

engine¶ repr() for the URL of an Engine will now conceal the password using asterisks. Courtesy Gunnlaugur Þór Briem. This change is also backported to: 0.8.3

New events added to ConnectionEvents.

Dialect.initialize() is not called a second time if an Engine is recreated, due to a disconnect error. This fixes a particular issue in the Oracle 8 dialect, but in general the dialect.initialize() phase should only be once per dialect. This change is also backported to: 0.8.3

Fixed bug where QueuePool would lose the correct checked out count if an existing pooled connection failed to reconnect after an invalidate or recycle event.
This change is also backported to: 0.8.3

Fixed bug where the reset_on_return argument to various Pool implementations would not be propagated when the pool was regenerated. Courtesy Eevee. This change is also backported to: 0.8.2

The regexp used by the make_url() function now parses ipv6 addresses, e.g. surrounded by brackets. This change is also backported to: 0.8.3

The method signature of Dialect.reflecttable(), which in all known cases is provided by DefaultDialect, has been tightened to expect include_columns and exclude_columns arguments without any kw option, reducing ambiguity - previously exclude_columns was missing.

sql¶ Added support for “unique constraint” reflection, via the Inspector.get_unique_constraints() method. Thanks to Roman Podolyaka for the patch. This change is also backported to: 0.8.4

The PostgreSQL and MySQL dialects now support reflection/inspection of foreign key options, including ON UPDATE, ON DELETE. PostgreSQL also reflects MATCH, DEFERRABLE, and INITIALLY. Courtesy ijl.

An overhaul of expression handling for special symbols particularly with conjunctions, e.g. None, null(), true().

TypeEngine.literal_processor() serves as the base, and TypeDecorator.process_literal_param() is added to allow wrapping of a native literal rendering method.

The default argument of Column now accepts a class or object method as an argument, in addition to a standalone function; will properly detect if the “context” argument is accepted or not.

Added new method to the insert() construct Insert.from_select(). Given a list of columns and a selectable, renders INSERT INTO (table) (columns) SELECT ... While this feature is highlighted as part of 0.9 it is also backported to 0.8.3.
Provided a new attribute for TypeDecorator called TypeDecorator.coerce_to_is_types, to make it easier to control how comparisons using None and boolean types go about producing an IS expression, or a plain equality expression with a bound parameter.

Fixed bug where type_coerce() would not interpret ORM elements with a __clause_element__() method properly. This change is also backported to: 0.8.3

The .unique flag on Index could be produced as None if it was generated from a Column that didn’t specify unique (where it defaults to None). The flag will now always be True or False. This change is also backported to: 0.8.3

A select() that is made to refer to itself in its FROM clause, typically via in-place mutation, will raise an informative error message rather than causing a recursion overflow. This change is also backported to: 0.8.3

Fixed bug where using the column_reflect event to change the .key of the incoming Column would prevent primary key constraints, indexes, and foreign key constraints from being correctly reflected. This change is also backported to: 0.8.3

The ColumnOperators.notin_() operator added in 0.8 now properly produces the negation of the expression “IN” returns when used against an empty collection. This change is also backported to: 0.8.3

Fixed bug whereby using MetaData.reflect() across a remote schema as well as a local schema could produce wrong results in the case where both schemas had a table of the same name. This change is also backported to: 0.8.2

Fixed regression dating back to 0.7.9 whereby the name of a CTE might not be properly quoted if it was referred to in multiple FROM clauses.
This change is also backported to: 0.8.3, 0.8.0b1

Fixed bug in common table expression system where if the CTE were used only as an alias() construct, it would not render using the WITH keyword. This change is also backported to: 0.8.3, 0.8.0b1

Fixed bug in CheckConstraint DDL where the “quote” flag from a Column object would not be propagated. This change is also backported to: 0.8.3, 0.8.0b1

The “name” attribute is set on Index before the “attach” events are called, so that attachment events can be used to dynamically generate a name for the index based on the parent table and/or columns.

The erroneous kw arg “schema” has been removed from the ForeignKey object. This was an accidental commit that did nothing; a warning is raised in 0.8.3 when this kw arg is used.

A rework to the way that “quoted” identifiers are handled.

postgresql¶ Support for PostgreSQL 9.2 range types has been added. Currently, no type translation is provided, so works directly with strings or psycopg2 2.5 range extension types at the moment. Patch courtesy Chris Withers. This change is also backported to: 0.8.2

Added support for “AUTOCOMMIT” isolation when using the psycopg2 DBAPI. The keyword is available via the isolation_level execution option. Patch courtesy Roman Podolyaka. This change is also backported to: 0.8.2

Added support for rendering SMALLSERIAL when a SmallInteger type is used on a primary key autoincrement column, based on server version detection of PostgreSQL version 9.2 or greater.

Removed a 128-character truncation from the reflection of the server default for a column; this code was original from PG system views which truncated the string for readability. This change is also backported to: 0.8.3

Parenthesis will be applied to a compound SQL expression as rendered in the column list of a CREATE INDEX statement.
This change is also backported to: 0.8.3

Fixed bug where PostgreSQL version strings that had a prefix preceding the words “PostgreSQL” or “EnterpriseDB” would not parse. Courtesy Scott Schaefer. This change is also backported to: 0.8.3

Fixed bug in HSTORE type where keys/values that contained backslashed quotes would not be escaped correctly when using the “non native” (i.e. non-psycopg2) means of translating HSTORE data. Patch courtesy Ryan Kelly. This change is also backported to: 0.8.2

Fixed bug where the order of columns in a multi-column PostgreSQL index would be reflected in the wrong order. Courtesy Roman Podolyaka. This change is also backported to: 0.8.2

mysql¶ The mysql_length parameter used with Index can now be passed as a dictionary of column names/lengths, for use with composite indexes. Big thanks to Roman Podolyaka for the patch. This change is also backported to: 0.8.2

The MySQL SET type now features the same auto-quoting behavior as that of ENUM. Quotes are not required when setting up the value, but quotes that are present will be auto-detected along with a warning. This also helps with Alembic where the SET type doesn’t render with quotes. This change is also backported to: 0.8.3

MySQL-connector dialect now allows options in the create_engine query string to override those defaults set up in the connect, including “buffered” and “raise_on_warnings”. This change is also backported to: 0.8.3

Fixed bug when using multi-table UPDATE where a supplemental table is a SELECT with its own bound parameters, where the positioning of the bound parameters would be reversed versus the statement itself when using MySQL’s special syntax. This change is also backported to: 0.8.2

Added another conditional to the mysql+gaerdbms dialect to detect so-called “development” mode, where we should use the rdbms_mysqldb DBAPI. Patch courtesy Brett Slatkin. This change is also backported to: 0.8.2
This change is also backported to: 0.8.2

Updates to MySQL reserved words for versions 5.5, 5.6, courtesy Hanno Schlichting. This change is also backported to: 0.8.3, 0.8.0b1

Fix and test parsing of MySQL foreign key options within reflection; this complements the work in #2183 where we begin to support reflection of foreign key options such as ON UPDATE/ON DELETE cascade.

Improved support for the cymysql driver, supporting version 0.6.5, courtesy Hajime Nakagami.

sqlite¶ The newly added SQLite DATETIME arguments storage_format and regexp apparently were not fully implemented correctly; while the arguments were accepted, in practice they would have no effect; this has been fixed. This change is also backported to: 0.8.3

Added sqlalchemy.types.BIGINT to the list of type names that can be reflected by the SQLite dialect; courtesy Russell Stuart. This change is also backported to: 0.8.2

mssql¶ When querying the information schema on SQL Server 2000, removed a CAST call that was added in 0.8.1 to help with driver issues, which apparently is not compatible on 2000. The CAST remains in place for SQL Server 2005 and greater. This change is also backported to: 0.8.2

Fixes to MSSQL with Python 3 + pyodbc, including that statements are passed correctly.

oracle¶ The Oracle unit tests with cx_oracle now pass fully under Python 3.

Fixed bug where Oracle table reflection using synonyms would fail if the synonym and the table were in different remote schemas. Patch to fix courtesy Kyle Derr. This change is also backported to: 0.8.3

Added pool logging for “rollback-on-return” and the less used “commit-on-return”. This is enabled with the rest of pool “debug” logging.

The fdb dialect is now the default dialect when specified without a dialect qualifier, i.e. firebird://, per the Firebird project publishing fdb as their official Python driver.
Type lookup when reflecting the Firebird types LONG and INT64 has been fixed so that LONG is treated as INTEGER, INT64 treated as BIGINT, unless the type has a “precision” in which case it’s treated as NUMERIC. Patch courtesy Russell Stuart. This change is also backported to: 0.8.2

Fixed bug whereby if a composite type were set up with a function instead of a class, the mutable extension would trip up when it tried to check that column for being a MutableComposite (which it isn’t). Courtesy asldevi. This change is also backported to: 0.8.2
by Zoran Horvat
Sep 12, 2013

Given an unsorted array of integer numbers, write a function which returns the number that appears more times in the array than any other number (the mode of the array). If there are multiple solutions, i.e. two or more most frequent numbers occur equally many times, the function should return any of them.

Example: Let the array be 1, 2, 2, 3, 1, 3, 2. The mode of this array is 2, and the function should return the value 2. Should an additional value 1 appear in the array, so that it becomes 1, 2, 2, 3, 1, 3, 2, 1, the function should return either 2 or 1, because these are the numbers with the most appearances - three times each.

Keywords: Array mode, mode of the array, relative majority, plurality, array.

In this exercise we are dealing with simple majority, meaning that the winning element is not required to populate more than half of the array. It is sufficient for the winner to just appear more often than the second most frequent element. This is a different requirement compared to the Majority Element exercise, but this loose requirement actually makes the problem harder to solve. In the case of a proper (absolute) majority element, it is sufficient to find the median of the array, and that element would either be the majority element, or the array would not contain a majority element at all. There is an even simpler and more efficient algorithm for finding a majority element, which is covered in the exercise Finding a Majority Element in an Array.

With the array mode, the difficulty comes from the fact that many numbers can share the winning position. Suppose that we try to count occurrences of each number in the array in order to find which one occurs more times than the others. In the worst case, all numbers could be distinct, and then all of them would share the same place with exactly one occurrence. Of course, that would mean that there is no single winner, but that conclusion can only be made after the very last element of the array has been visited.
Suppose that the last number in the array is not unique, but instead is a repetition of one of the preceding elements. In that case, that particular number would be the winner with a total of two occurrences - just enough to beat all other competing elements by one.

Following the analysis, the first solution could be to just count occurrences of each distinct element in the array. We could use the hash table data structure, which has O(1) search time. The hash table would be used to map an element value to the total number of occurrences of that value. Whenever an element of the array is visited, we search for that value in the hash table. If the value is there, then it maps to the total number of occurrences of that value in the array so far. Therefore, we just increment the count in the hash table and move forward. Otherwise, if this is the first occurrence of the value in the array, we just add it to the hash table with an associated count equal to one.

Here is the pseudocode of the counting solution:

function Mode(a, n)
    a - array of integers
    n - length of the array
begin
    index = new hashtable
    mode = 0
    modeCount = 0
    for i = 1 to n
    begin
        if index contains a[i] then
        begin
            index[a[i]] = index[a[i]] + 1
            curCount = index[a[i]]
        end
        else
        begin
            index[a[i]] = 1
            curCount = 1
        end
        if curCount > modeCount then
        begin
            mode = a[i]
            modeCount = curCount
        end
    end
    return mode
end

This function takes O(N) time to calculate the array mode, provided that the hash table it uses exhibits O(1) access time. But on the other hand, this function takes O(N) memory to build the index of all distinct elements in the array. In the worst case, when all elements are distinct, the index size will be proportional to the array size.

In some cases we could simplify the indexing solution. If we knew that the range of numbers appearing in the array is bounded to a reasonable range, then we could apply the counting sort algorithm and just count occurrences of each value in the array.
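The hash-table counting approach above can be sketched in plain Python (an illustrative sketch using a dict as the hash table; the function name `mode` is chosen here, and this is not the article's C# implementation):

```python
def mode(values):
    """Return the element that occurs most often (any one of them on a tie)."""
    counts = {}                 # maps value -> occurrences seen so far
    best, best_count = None, 0
    for v in values:
        counts[v] = counts.get(v, 0) + 1
        if counts[v] > best_count:
            best, best_count = v, counts[v]
    return best

print(mode([1, 2, 2, 3, 1, 3, 2]))  # -> 2
```

Because the running best candidate is updated on every step, the function can return as soon as the loop ends, without a second pass over the counts.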
Here is the pseudocode of the function which calculates the array mode by counting:

function Mode(a, n)
    a - array of integers
    n - length of the array
begin
    min = minimum value in a
    max = maximum value in a
    range = max - min + 1
    counters = new int array [0..range - 1]

    for i = 0 to range - 1
        counters[i] = 0

    for i = 1 to n
    begin
        offset = a[i] - min
        counters[offset] = counters[offset] + 1
    end

    modeOffset = 0
    for i = 1 to range - 1
        if counters[i] > counters[modeOffset] then
            modeOffset = i

    mode = min + modeOffset
    return mode
end

This function takes O(N) time to count occurrences in the array. It also takes O(range) time to initialize the counters and then again O(range) time to find the maximum counter. Overall, the function runs in O(N + range) time and requires O(range) additional space.

Now that we have two solutions based on counting array elements, we could ask whether there is another way to solve this problem without taking additional space outside the array. That will be the goal of the next solution that we will present.

We could easily find the array mode by first sorting the array and then counting successive elements that are equal. The sorting step would take O(NlogN) time to complete. The subsequent counting step would take O(N) time to find the most frequent element. On the other hand, no additional memory would be required, because both sorting and counting take O(1) space. This is the compromise compared to the previous solution: we are spending somewhat more time, but we are preserving memory.
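The counting variant transcribes directly to Python (an illustrative sketch; `mode_by_counting` is a name chosen here, and it is only practical when the value range is small):

```python
def mode_by_counting(values):
    """Mode via one counter per value in [min, max]; O(N + range) time."""
    lo, hi = min(values), max(values)
    counters = [0] * (hi - lo + 1)    # one slot per possible value
    for v in values:
        counters[v - lo] += 1
    # the index of the largest counter is the mode's offset from lo
    mode_offset = max(range(len(counters)), key=lambda i: counters[i])
    return lo + mode_offset
```

Note that negative values are handled naturally, since every value is shifted by the minimum before indexing.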
Here is the pseudocode of the function which relies on an external sorting algorithm:

function Mode(a, n)
    a - array of integers
    n - length of the array
begin
    sort a -- use algorithm with O(NlogN) time

    mode = 0
    modeCount = 0
    curValue = 0
    curCount = 0

    for i = 1 to n
    begin
        if a[i] = curValue then
        begin
            curCount = curCount + 1
        end
        else
        begin
            if curCount > modeCount then
            begin
                mode = curValue
                modeCount = curCount
            end
            curValue = a[i]
            curCount = 1
        end
    end

    if curCount > modeCount then
        mode = curValue

    return mode
end

The counting loop in this function is a little bit complicated. At every step, we are keeping a record of the previous value in the array, as well as the total number of its appearances so far. If the current array element is the same as the previous one, then we just increase the counter. Otherwise, we have reached the end of counting for the previous value. In that case we have to test whether a better candidate has been found and reset the counter for the next value in the array. Of course, when all the elements are exhausted, the last one can still be the best one. We have to test whether it has beaten the previously most frequent element.

This solution looks like a step in the right direction, but we can also come to a better solution. Do we have to sort the array completely? Probably not - sections of the array that are shorter than the best candidate this far can freely be left unsorted. That idea will be discussed in the next section.

One possible improvement on the sorting solution would be to partition the array in a way similar to what Quicksort does. Just pick an element (the pivot) and divide the array by comparing all elements against the pivot. Elements that are strictly less than the pivot will go to the beginning of the array. Elements strictly greater than the pivot will be moved to the end of the array. And in between we shall put all the elements equal to the pivot. In this way we have made one step forward in the search for the mode.
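The sort-then-count solution can be sketched in Python as well (illustrative; `mode_by_sorting` is a name chosen here). The run-counting loop mirrors the pseudocode, including the final check for the last run:

```python
def mode_by_sorting(values):
    """Mode via full sort (O(N log N)) followed by one pass over equal runs."""
    data = sorted(values)
    best, best_count = data[0], 0
    cur, cur_count = data[0], 0
    for v in data:
        if v == cur:
            cur_count += 1
        else:
            # a run just ended: see if it beats the best run so far
            if cur_count > best_count:
                best, best_count = cur, cur_count
            cur, cur_count = v, 1
    if cur_count > best_count:  # the last run may be the winner
        best = cur
    return best
```

Initializing `cur` to the first sorted element (rather than an arbitrary constant) avoids any special case when the array happens to contain that constant.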
Namely, we have isolated one element of the array (the pivot from the previous step) and found the total number of its occurrences in the whole array. Now we apply the process recursively, first to the left partition of the array, and then to the right partition. But not the same in all cases: there are circumstances under which it makes no sense to process a partition further, because it cannot provide a candidate better than the one found so far. Any partition is processed recursively only if it is larger than the count of the best candidate found so far. Otherwise there would be no point in spending time counting elements of the partition, because none of them could ever reach the frequency of the current candidate. Here is the pseudocode of the mode function which partially sorts the array:

function Mode(a, n)
    a - array of integers
    n - length of the array
begin
    mode = 0
    modeCount = 0
    ModeRecursive(a, 1, n, in/out mode, in/out modeCount)
    return mode
end

function ModeRecursive(a, lower, upper, in/out mode, in/out modeCount)
    a - array of integers
    lower - lower inclusive index of the segment that should be processed
    upper - upper exclusive index of the segment that should be processed
    mode - on input the current best candidate; on output the new best candidate
    modeCount - on input the count of the current best candidate; on output the new best count
begin
    pivot = a[lower]  -- we could use a better pivot selection
    left = lower
    right = upper - 1
    pos = left

    while pos <= right
    begin
        if a[pos] < pivot then
        begin
            a[left] = a[pos]
            left = left + 1
            pos = pos + 1
        end
        else if a[pos] > pivot then
        begin
            swap a[right] and a[pos]
            right = right - 1
        end
        else
            pos = pos + 1
    end

    pivotCount = right - left + 1
    for i = left to right
        a[i] = pivot

    if pivotCount > modeCount then
    begin
        mode = pivot
        modeCount = pivotCount
    end

    leftCount = left - lower
    if leftCount > modeCount then
        ModeRecursive(a, lower, left, in/out mode, in/out modeCount)

    rightCount = upper - right - 1
    if rightCount > modeCount then
        ModeRecursive(a, right + 1, upper, in/out mode, in/out modeCount)
end

The recursive function is the one which does all the work. It first partitions the array and counts the pivot's occurrences. Should the pivot be more frequent than the previously discovered mode candidate, the pivot becomes the new candidate. After this step, the operation is recursively performed on the left and right partitions. Observe that both partitions exclude the pivot in this solution. This lets us include the optimization: we do not process a partition unless it is longer than the number of occurrences of the current mode candidate. Once again, this is because such a partition could not contain any candidate better than the one we already have. Finally, when the recursion unfolds, the candidate it returns is the overall best element, which is the mode of the array. The non-recursive function at the beginning only delegates the call to the recursive implementation and then returns its result. Below is the full implementation of a console application in C# which allows the user to select the size of a randomly generated array. The application then searches for the mode of the array and prints its value on the console. Although only the partial sort function is called in this solution, you can find all four mode functions in the code.
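Independently of the C# listing that follows, the partition-and-prune idea can be condensed into a compact Python sketch (a convenience illustration; like the pseudocode, it rearranges the input list in place):

```python
def mode_partial(a):
    """Mode via Quicksort-style three-way partitioning with pruning.

    Each call partitions a segment around a pivot, counts the pivot's
    occurrences, and recurses only into partitions longer than the best
    count found so far. Assumes a non-empty list; mutates it.
    """
    def rec(lo, hi, best_val, best_cnt):
        pivot = a[lo]
        left, right, pos = lo, hi - 1, lo
        while pos <= right:                    # three-way partition
            if a[pos] < pivot:
                a[left] = a[pos]
                left += 1
                pos += 1
            elif a[pos] > pivot:
                a[right], a[pos] = a[pos], a[right]
                right -= 1
            else:
                pos += 1
        for i in range(left, right + 1):       # restore the pivot copies
            a[i] = pivot
        if right - left + 1 > best_cnt:
            best_val, best_cnt = pivot, right - left + 1
        if left - lo > best_cnt:               # prune a short left partition
            best_val, best_cnt = rec(lo, left, best_val, best_cnt)
        if hi - right - 1 > best_cnt:          # prune a short right partition
            best_val, best_cnt = rec(right + 1, hi, best_val, best_cnt)
        return best_val, best_cnt

    return rec(0, len(a), a[0], 0)[0]
```

The pruning tests mirror the pseudocode: a partition shorter than the current best count cannot contain a more frequent element, so it is never visited.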
using System;
using System.Collections.Generic;

namespace ArrayMode
{

    struct Mode
    {
        public int Value;
        public int Count;
    }

    class Program
    {

        static void Print(int[] a)
        {
            for (int i = 0; i < a.Length; i++)
            {
                Console.Write("{0,3}", a[i]);
                if (i < a.Length - 1 && (i + 1) % 10 == 0)
                    Console.WriteLine();
            }
            Console.WriteLine();
            Console.WriteLine();

            Array.Sort(a);

            int count = 0;
            for (int i = 0; i < a.Length; i++)
            {
                if (i == 0)
                    count = 1;
                else if (a[i] == a[i - 1])
                    count++;
                else
                {
                    Console.WriteLine("{0,3} x {1}", a[i - 1], count);
                    count = 1;
                }
            }
            Console.WriteLine("{0,3} x {1}", a[a.Length - 1], count);
        }

        static Mode FindModeHash(int[] array)
        {
            Mode best = new Mode();
            Dictionary<int, int> hashtable = new Dictionary<int, int>();

            foreach (int value in array)
            {
                int curCount;
                int count = 1;
                if (hashtable.TryGetValue(value, out curCount))
                    count = curCount + 1;
                hashtable[value] = count;

                if (count > best.Count)
                {
                    best.Value = value;
                    best.Count = count;
                }
            }

            return best;
        }

        static Mode FindModeCountingSort(int[] array)
        {
            int min = array[0];
            int max = array[0];

            for (int i = 1; i < array.Length; i++)
            {
                if (array[i] < min) min = array[i];
                if (array[i] > max) max = array[i];
            }

            int range = max - min + 1;
            // All elements automatically reset to 0
            int[] counters = new int[range];

            for (int i = 0; i < array.Length; i++)
            {
                int offset = array[i] - min;
                counters[offset]++;
            }

            int modeOffset = 0;
            for (int i = 1; i < range; i++)
                if (counters[i] > counters[modeOffset])
                    modeOffset = i;

            Mode mode = new Mode()
            {
                Value = min + modeOffset,
                Count = counters[modeOffset]
            };

            return mode;
        }

        static Mode FindModeSort(int[] array)
        {
            Array.Sort(array);

            Mode current = new Mode() { Value = array[0], Count = 1 };
            Mode best = new Mode();

            for (int i = 1; i < array.Length; i++)
            {
                if (array[i] != current.Value)
                {
                    if (current.Count > best.Count)
                        best = current;
                    current.Value = array[i];
                    current.Count = 1;
                }
                else
                {
                    current.Count++;
                }
            }

            if (current.Count > best.Count)
                best = current;

            return best;
        }

        static Mode FindModePartialSort(int[] array)
        {
            return FindModePartialSortRecursive(array, 0, array.Length, new Mode());
        }

        static Mode FindModePartialSortRecursive(int[] array, int begin, int end, Mode best)
        {
            Mode mode = best;

            int pivot = array[begin]; // Use better pivot selection
            int left = begin;
            int right = end - 1;
            int pos = left;

            while (pos <= right)
            {
                if (array[pos] < pivot)
                {
                    array[left++] = array[pos];
                    pos++;
                }
                else if (array[pos] > pivot)
                {
                    int tmp = array[right];
                    array[right--] = array[pos];
                    array[pos] = tmp;
                }
                else
                {
                    pos++;
                }
            }

            int pivotCount = right - left + 1;
            for (int i = left; i <= right; i++)
            {
                array[i] = pivot;
            }

            if (pivotCount > mode.Count)
            {
                mode.Value = pivot;
                mode.Count = pivotCount;
            }

            int leftCount = left - begin;
            if (leftCount > mode.Count)
            {
                mode = FindModePartialSortRecursive(array, begin, left, mode);
            }

            int rightCount = end - right - 1;
            if (rightCount > mode.Count)
            {
                mode = FindModePartialSortRecursive(array, right + 1, end, mode);
            }

            return mode;
        }

        static void Main(string[] args)
        {
            Random rnd = new Random();

            int n = 0;
            while (true)
            {
                Console.Write("Array length (0 to exit): ");
                n = int.Parse(Console.ReadLine());
                if (n <= 0)
                    break;

                int[] a = new int[n];
                for (int i = 0; i < a.Length; i++)
                    a[i] = rnd.Next(9) + 1;

                Print(a);

                Mode mode = FindModePartialSort(a);

                Console.WriteLine("Mode of the array is {0}; " +
                                  "it occurs {1} times.", mode.Value, mode.Count);
                Console.WriteLine();
            }
        }
    }
}

When the application above is run, its output may look something like this:

Array length (0 to exit): 10
  6  7  1  1  8  5  4  6  4  4

  1 x 2
  4 x 3
  5 x 1
  6 x 2
  7 x 1
  8 x 1
Mode of the array is 4; it occurs 3 times.

Array length (0 to exit): 17
  3  3  6  9  8  1  6  5  8  9
  2  6  9  2  9  6  8

  1 x 1
  2 x 2
  3 x 2
  5 x 1
  6 x 4
  8 x 3
  9 x 4
Mode of the array is 6; it occurs 4 times.

Array length (0 to exit): 20
  7  4  2  2  5  7  9  4  5  9
  3  2  7  9  9  2  6  5  6  3

  2 x 4
  3 x 2
  4 x 2
  5 x 3
  6 x 2
  7 x 3
  9 x 4
Mode of the array is 2; it occurs 4 times.
Array length (0 to exit): 30
  9  8  8  6  2  6  2  2  6  2
  3  4  7  4  6  2  2  5  1  5
  3  2  1  5  3  1  8  1  6  4

  1 x 4
  2 x 7
  3 x 3
  4 x 3
  5 x 3
  6 x 5
  7 x 1
  8 x 3
  9 x 1
Mode of the array is 2; it occurs 7 times.

Array length (0 to exit): 50
  9  9  3  4  5  5  2  5  7  4
  8  9  1  8  5  2  2  7  3  8
  6  5  7  7  3  3  2  4  5  7
  6  9  6  8  3  8  8  3  7  5
  8  6  8  8  2  1  7  7  3  1

  1 x 3
  2 x 5
  3 x 7
  4 x 3
  5 x 7
  6 x 4
  7 x 8
  8 x 9
  9 x 4
Mode of the array is 8; it occurs 9 times.

Array length (0 to exit): 0
Press ENTER to continue...

In this exercise we have seen four solutions to the same problem: hash table, counting sort, Quicksort and partial sorting. Each of the methods exhibits its own strengths and weaknesses. The graph below shows how much time each of the algorithms takes on an array with one million elements. The varying factor is the total number of occurrences of the most frequent element in the array. We can see that the indexing methods behave quite differently from the sorting methods. Hash table and counting sort both exhibit constant performance regardless of the data. The sorting solutions gradually improve as the total number of distinct values in the array goes down. The Quicksort solution improves because it is somewhat easier to sort an array with many repeating elements. The partial sort solution, on the other hand, exhibits significantly better performance as the array becomes more uniform, because then it can cut the recursion much higher in the call hierarchy. This further leads to significant savings in execution time. To truly compare these four methods, we have to remember that hash table and counting sort require additional memory. Even more, counting sort requires that the range from which the numbers in the array are drawn be quite limited. In the experiment whose results are shown in the diagram, the range of the values was smaller than the array length. Under such ideal circumstances, we should not be surprised that the counting sort method was by far better than any other.
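Measurements like the ones described above can be reproduced with a rough harness such as this Python sketch (the function and parameter names are my own assumptions, not the article's benchmark setup):

```python
import random
import timeit

def benchmark(mode_fn, n=10_000, value_range=100, repeats=3):
    """Time a mode function on one randomly generated array.

    `value_range` controls how often the most frequent element repeats:
    a narrower range yields more duplicates of each value, which is the
    varying factor on the x-axis of a comparison like the one above.
    """
    data = [random.randint(1, value_range) for _ in range(n)]
    # copy the data for every run so that in-place methods
    # (sorting, partitioning) always start from the same input
    return min(timeit.repeat(lambda: mode_fn(list(data)),
                             number=1, repeat=repeats))
```

Sweeping `value_range` while timing each of the four mode functions would trace out curves analogous to the graph discussed above; `benchmark` returns the best wall-clock time in seconds over `repeats` runs.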
The next best performing method is the hash table, followed by the custom algorithm based on partial application of the Quicksort algorithm. The worst performance was exhibited by the full Quicksort solution. This is clearly because totally sorting the array is not really necessary to calculate the array mode, so this solution was doing more work than necessary. The overall conclusion is that the algorithm we should pick depends on the situation. This concludes the analysis of algorithms for finding the array mode.
http://codinghelmet.com/exercises/array-mode
Generating BoxColliders for PicaVoxel-imported objects in Unity3D

(Download the script from or copy from below)

using UnityEngine;
using System.Collections;
using PicaVoxel;

The script has two properties:
- thickness: how thick the colliders should be (0.4 works well)
- staticColliders: if true, all the colliders will be static

Even if you don't, please check out my portfolio and follow me! Games: Twitch: Twitter: YouTube:
https://www.gamedev.net/blogs/entry/2262286-generating-boxcolliders-for-picavoxel-imported-objects-in-unity3d/
This review is really outdated. Now instead of using a dictionary of terms, it pulls terms from RSS feeds that are constantly updating and changing. You get to pick the RSS feeds. I don't use any that are news feeds precisely because I don't want search terms associated with crimes. And now it searches in "burst mode" to mimic the way people actually search for things, which is a bunch of searches close together, not at intervals. Yes, if you are searching for illegal things or searching for advice on how to become a terrorist, it's not as if this makes it go away. No one ever said it did. The point is diluting your searches enough that someone can't build a data profile of you and violate your privacy, like when AOL stupidly released its search data and some users were easily attached to their real names. I've searched for my own name -- it'd be easy to figure out I am me. But now with TrackMeNot, it should be a lot harder since I'll have a lot of other search queries of people's names.

The analogy to being pulled over by a police officer is moronic, to say the least. Whoever made it probably felt very smug and smart; however, they are foolish. The name of the extension is TrackMeNot. See that first word? It's "TRACK". It's not "ARREST" or "INVESTIGATE". This extension does not claim to protect you from the NSA or the FBI. It is intended to obfuscate your actual searches, from a privacy perspective. It is not intended to hide your searches so you can find information on carrying out illegal activities. The extension is buggy and poorly designed, but the concept is sound. Schneier's comments are disappointing, and the aforementioned commentator is an idiot.

The additional bandwidth is negligible compared to flash, img, ads...

Don't be ridiculous: Google can make tons of money just fine while thousands of people use this add-on to protect their privacy.
I have been using this add-on every day for two years now, in order to make sure that no one can build a consistent profile about me. I refuse to let them. And in that time, TrackMeNot has improved a great deal. Meanwhile Google search, Gmail and all the other Google services are working just fine; I have not been blocked or banned even once.

The bandwidth problem affects not only users. These fake queries must be parsed by the search engine, and that's bad. If everyone starts using the extension, Google will be as good as dead.

A new upgraded version of TrackMeNot is now available. It uses words from your choice of RSS feeds to do queries on your choice of Internet search engines.

...send you to the page sorry.google.com. Who knows, Google may even start to launch lawsuits against people who use this, claiming malicious intent to harm the browser.

And where does one find a search engine that does not place search words in the URL, that has an SSL certificate, and that does not keep logs?

The program is a very good thing. Most personal profiles of people are built by AI programs according to their queries. The analysis in this blog is unsatisfactory, even infantile, considering the importance of this topic. Surprising? I want to suggest some improvements to the authors:
- Choose the language of the Google search engine.
- Random times for words.
- Using one's own list of words.
Actually, I'm rewriting the JS code to get some of them, but it would be nice if they could be implemented in the near future.

Keep up the good work and ignore weak critics. You are pointing in the right direction!!!

Entry about echelon jamming list as self extending search dictionary: *** censored by schneier ***

I have read in the Nov/Dec issue of Mother Jones about this very topic. It is quite freaky, to say the least, that Google is more interested in preserving profit than preserving privacy. There are a couple of remedies they suggested:

1. Use another search engine....
is a European search engine that adheres to EU privacy rules that prohibit search engines from stashing user data.

2. Clear/delete your cookies when you are done browsing.

3. Check out Anonymizer, which issues the user a temporary IP address, thus making it difficult for the IP pack rats to trace you.

4. Shut off your DSL modem at night. If you're not running anything on your computer at night, then when you shut down your computer, unplug your modem. You will be assigned a new IP address (unless you have a permanent IP address), and thus tracking you is nearly impossible.

Hope this helps...

Hi again. The comment above was initially filtered out by this site's anti-spam protection. Correctly! I emailed this website to explain the purpose of posting this spam. Now my post has appeared. Thank you. This word list counters the second and third reasons given in the initial thread on this list as to why TrackMeNot fails. I happen to agree with the first and more important reason, "it doesn't hide your searches". That's why I suggest Scroogle. But TrackMeNot may yet have some effect on countering profiling, and this extension's existence is a good thing, because search profiling should be more widely discussed, and this extension has created more debate than any other Firefox extension.

Hi, I have been using TrackMeNot 4.5.4 for a few weeks now. It seems to work fine as a spam tool. I have no use for its supposed anti-profiling capabilities because I use the Scroogle search plugin. Unlike the author above, I have no qualms about filling the Google data centre with useless search requests. In fact, running TrackMeNot is quite interesting, to see the list of randomly generated word searches that it creates. It would be nice to have such a tool that filled Gmail accounts with similar garbage. I know genuine spam email comes close to fulfilling this desire, but Google is probably getting better at filtering those out. In this spirit of open-minded spamming, here is my current list.
l am happy that all the web spiders currently indexing this site have more to chew on. **************************** movie memorabilia from,with several common peripherals,Local Deals Before,lifestyle brand that incorporates every aspect,Right Online Graduate School,Your Hamster Question, Custom tribal dragon,Obsession Required Viewing,CHAT General Chat,backstreet boys,narrated,understands that successful case management, Austrian documentarist Nikolaus Geyrhalter which looks,European Union have increased,commercial credit card,White House Briefing,Provides positive solutions,Implementing Quarantine Services with Microsoft Virtual, products from furniture,Anonymous Home Page,Network Your home,have been conceived,summer programs abroad,Jewish Index with, Read eBay Review,including Heresheis birth announcements,MailEnable provides robust,come into existence,growing storage needs with scalable PowerVault,host Keith Olbermann says Bush will, Official Home Page,online game rental,Adobe Solution Partner Program,SAXOTECH Join Forces,conveniently located within walking distance,plus Full warranty, Hosted with predictive dialer,PINK FRONT DOOR,upcoming Texas Chainsaw Massacre,Ferrari believe Michael Schumacher will,Reviewed Publication Authored,fence wall forming, home page serving,massively successful Bakuretsu Hunter,ultimate broadband video channel,place where high,Find your favorite DVDs from,higher learning known, Official Aerobic Striptease Strip Workout,your videos into your computer,estate cowgirl rutherford kilstein chronicle,gather information from hundreds,scary culinary adventure looking,transforming their business, Learn Environmental Project Management,movies from every genre,Captain Jack Sparrow,Free Remote Computer,News service Agence France Presse,Alaska Appellate Courts, language with programs that inspire,never been more important,leading professional association,best online photo management,Properties providing professional services,Cancels Qatar Trip, 
Sills Cummis Epstein,most widely recognized regional dialect,Salt Lake City,time text headlines,patch leaves users locked,Oxford Home page, Strictly Come Dancing,General Services Administration,providing quality timely,June Buying Guide,Pretty much everybody,online career training, Produces solid wood,early stories were,other horror flicks Check,less than four months after,Small businesses often face,accredited Business courses offered, Directional Control Valves,provide real professional lenticular production trainings,Eighth Systems Administration Conference,Live music reviews,compasses binoculars altimeters heightmeters pedometers,date cancer information from, Sukkot celebration lends itself,were around during,Imagine Music Group,Scarlett Johansson posters,United States Court,World Series front pages from, Folk Music Index,Interior Color Combos,free real estate,Hitler Consolidated Power,VIDEOS FILMOGRAPHY DISCOGRAPHY PHOTOS TALK,enormous turd that editorial page editor, processing segment that uses Data Processing,easy online source,play free online,Also includes film reviews,Belper Town Football Club,World Trade Center, takes over Nickelodeon,Listen Almost Anywhere,Massachusetts based documentary photographer,hotel commits itself,Fulbright Senior Specialists Program,Hong Kong shares close higher, Find exactly what,location voiture Paris,lingering associations with,Work Zone Safety,State judge strikes down Arkansas,Degree From Online Colleges, treat common chemo side effects,English Spanish Dictionary,Annual Rhody Bike,that sells Pennsylvania,Create your space,seeing this page, Public service cooperatively provided,filters collected from,Visit Jeep life,This limited edition,compliant multiple virtual desktop window,junior United States, Also offers many financial,having effective meetings,best comparison shopping information,more infectious than,Peoples State Bank,Whales have been seen, Discuss this movie with other users,first board games,think they were,admits 
altering Beirut photo,Qualified orders over,Fishing Port Alberni, little higher learning facility just down,juridically pinning them,prospect that this could,mountain bike tires,monitoring more posts,presents shopping with, Positron Emission Tomography,Online Learning Modules,Tokyo shares outlook,most exciting trips,Neal defends against,which began with, that they were,Business,Laure Edwige Djoukam,also tracked down,years merits special recognition,Enterprise Collaboration Platform, mcse training course,advertising sales division,samples were bagged,ticket prices from BuySellTix,International,that mate preference evolves once selection, their book fails,Homeland Security National,Keeping Tupac Amaru Shakur Alive Makaveli,Bush signed into,World Wide Colleges,mixed martial arts, Dietary Supplements Nutrition,offers public classroom training,answers from real people,combined surface area,Work Like Elevator Buttons,Afghanistan probe into killings, Launch present personalized Internet radio,Northwest Rural Public Power District,readers have given,Playboy Miss Teen,mona lisa,Attackers have found, Webster Online Dictionary with audio pronunciations,politically forbidden relationship with,Forest Whitaker Voices AmericanDad Character,Distributing news headlines,Harsh Interrogation Techniques,Call Schemes from, Super Columbine Massacre,Resource with free,Project Plans vary from simple,start your home,Erie Canal Cruises,does make clear, Premier Travel Inns,core issues involved,line LISNs with country specific power,children supports West,Raise Money with Email,Normandale Japanese Garden, evolved into much more,Class Clown Just,Offers educational courses,Michigan Home Builders,Some Engineering Aspects,Drew Barrymore Quiz, excerpts from meditations given this,Reserve your rental,Announcements Frequently Asked Questions Does PHPlist,NOAA News Online,Wire Forms Manufacturer,Based External Exit Exam Systems, School Lesson Plans,money while setting your,from specialized gifts, 
*******************************

Nice idea, bad implementation. Though the author may have been well intentioned, that is still open to debate. Let us assume he is sincere for now, since this isn't even my point, and the tool in my opinion is still a failure, as there are much better alternatives out there. My real point is: has anyone even taken into account the fact that this search-engine spam method of TrackMeNot will surely screw up many people's rankings? This is certainly going to start having a real, measurable impact, with search results shifting about more and more as the number of people using the experimental widget goes up. I personally am not some SEO or a guy trying to make a buck on my site from Google PR or any search engine. This really has little effect on me, since I have no ads or anything to sell (not even a donation beggar-ware button), so I have nothing to lose at all on that front. This still burns me up a bit, since it could screw with many people's livelihoods, especially those with nice clean sites without nasty pop-ups. As for the gambling sites and spammers, I hope they rot in hell. For the legit guys, on the other hand, this seems hardly fair. Just my thoughts about it. If I missed a post mentioning this previously, I apologize in advance, as I skimmed the comments but saw nothing in reference to this at all. I think for TrackMeNot's author it should be back to the drawing board. - Azag

It still scares me; go ahead and use it if you like, I won't. As an extension developer myself, IMO it should be banned from Mozilla. Oh... wait: "TrackMeNot" + "Terrorist" + "Profiling" - hopefully, some visitors will see this through Google while searching to ditch their trails.

TrackMeNot has been upgraded in response to some of these criticisms, including the use of a dynamic, evolving word list unique to each client. More here:

"And there was also a comment about images being requested from your browser?
If your request is done through the proxy and you never visit google.com with your IP then how are you being tracked?" You have to disable Java and JavaScript too, possibly Flash or any other plugin that executes code. It is pretty easy to bypass proxy settings through Java.

The problem of profiling by search engines, or the usage of that data by governments (and not only in view of this war-on-terror hype, but how about regimes where minorities based on sexual preference, faith or political ideals have to fear for their lives), can also be attacked from the other side: don't use centralised services. To this end, I'm currently starting up a project, getting some money, to develop a peer-to-peer search engine. Anyone interested in contributing (it will be an open project), check out. It is all still quite preliminary, but slowly we've been securing funds to hire some people to spend some serious time on this.

I saw a post that said black box has problems because your search term is in the URL. That's silly. You can't search a search engine without sending it a URL. The point is the search came from the proxy's IP address, not YOURS! And there was also a comment about images being requested from your browser? If your request is done through the proxy and you never visit google.com with your IP, then how are you being tracked?

has several options to preserve your privacy:
- Remove click tracking
- Anonymize the Google cookie UID
- Don't send any cookies to Google Analytics

TrackMeNot searches for incoherent phrases, such as "food+gas". Also, not quite sure about this, it searches at fixed time intervals. Then, you can guess what a user searched by eliminating incoherent searches or removing searches done at those specific intervals. It searches for random but coherent phrases on the subject you want at random intervals. It works with any browser and you can fake HTTP headers.
Unfortunately the database is too small; user contributions welcomed.

A larger dictionary will mostly mitigate the risk of conspicuous search log entries, and algorithms randomizing real personal query terms into it even further. But of course, no matter how clueful the algorithm, the search log pollution can only reduce the symptoms (a program and later a police officer looking through it), not heal the underlying problem (that logs are created at all). But it's important to remember that ISPs are a much worse problem than search engines, because without elaborate proxy tricks they track where we really went. So I'd say TrackMeNot & Co. can really be just a first step. But btw, give this a look:.

Now that I think about it, a better way would be to inject random searches (again at random: "Turn It on Again: The Hits", "Davis Kamoga", "Gridiron football", "Paul of Narbonne") into the stream when you're doing Google, MSN, et al. searches. I'd pick that every-12-seconds traffic pretty damn quick if I was looking at usage patterns. If I was a detective looking at Google searches, I don't know what I would come up with. -c

One solution to the word list problem is to use Wikipedia to grab random subjects to search on (to hit 'random' a few times: "List of Istanbulites", "Bookfinder4u", "Cavia", "Ethanol fermentation", "Salisbury National Cemetery"). While that doesn't address the other problems, it ought to be much better than trying to provide a dictionary. It makes the bandwidth problem much worse though, not to mention being a burden on the poor Wikipedia servers :/ I suppose you could just grab whatever URL returns... -c

Hi Bruce, regarding your comments about TrackMeNot: some of the problems you identified with TrackMeNot are that there are a limited number of search terms and that it conducts an automatic search every 12 seconds. Both these issues make it easier for a monitoring system to separate the signal from the noise.
You made some recommendations to improve the program: ." I understand the reasoning behind your recommendations - you want to make the automated TrackMeNot search terms more similar to the individual's search patterns so that it is harder to identify the signal from the noise. However, in trying to make the search terms more realistic, I think you lost sight of the purpose of the program. You wrote, "And I would make it monitor the web pages the user looks at, and send queries based on keywords it finds on those pages." This would certainly make the searches more realistic. However, if the user of the program is attempting to conceal his search terms and the sites which he visits, then having TrackMeNot monitor which sites the user visits and then generating search terms from those sites would just generate more evidence of the activity the user wishes to conceal. For example, if a user is using the internet to search for information on how to build a bomb, TrackMeNot would be generating search terms from the bomb-making sites the user visits. This would tend to produce search terms such as "uranium-235 or plutonium-239", "harvest magnesium oxide", "purified aspirin in sulfuric", etc. Certainly, TrackMeNot should not generate search terms based on the sites the user visits.

Keep up the great work Bruce!!

Regards,
Noam

"The ultimate measure of a man is not where he stands in moments of comfort and convenience, but where he stands at times of challenge and controversy."

I wrote:
> to accept Google cookies but delete them often, .... In Firefox this is trivial (Ctrl + Shift + Del).
I should qualify that. This will only delete cookies if the "cookies" option is set under the "Clear Private Data" tool. By default, it isn't checked.
You can change this under Options --> Privacy --> Settings. Also, note that this deletes _all_ your cookies, plus anything else you have selected as being "private data".

@Jojo:
> Most of what you searched for is stored on their system linked to your IP addr.
Um, no. Linking queries to IP addresses is a bad idea for several reasons. Firstly, around half the individual PCs around the world are behind a web proxy or NAT proxy already, so it wouldn't work. And even if the machine isn't behind a proxy, there may still be more than one person using the PC, so it fails to map queries to identities. Secondly, well over half the world's home computers are still on dial-up, which means they get a different address every time they connect. Thirdly, many people (perhaps most) connect from more than one machine (e.g. home, work, mobile device) and consequently from different addresses. For all these reasons, queries are actually aggregated by session IDs, which are kept in your cookie.

> I doubt that Google, AOL, etc. keeps more than a few searches in your cookies.
They don't keep ANY queries in your cookies; they keep session IDs [1]. Whenever a machine connects to the query engine without a current session ID, a random session ID is generated and handed out in the form of a cookie. This ID number is then used to link all subsequent queries from the same user. If the user logs in (e.g. to get personalised settings, to access Gmail, or to access Google Groups), the login ID can then be used to link session IDs from multiple browsing sessions, and possibly also to a real world ID. If you accept and never delete search engine cookies, this works whether you are behind a proxy or not, because the cookie is sent to them as part of your query.
If you don't accept cookies, then: a) you can do normal queries just fine, but can't use personalised services like Gmail; and b) if you're ultraparanoid, note that queries which refuse cookies are unusual, and thus actually tend to stand out from the crowd... Thus the best way to do this is to accept your 1st party cookies [2] like a good citizen, but delete them before logging in to a personalised session (e.g. Gmail), delete them again after logging out but before resuming browsing, and from time to time between. If you are behind a proxy and do this, there is no practical way for the search engine to link sets of queries from different sessions, nor to link any of them to your login ID. It may be _theoretically_ possible with header fingerprinting, timing analysis etc. but it would be very difficult and just not worth the effort (they are an advertising distributor, not a spy agency). If you aren't behind a proxy and they definitely know this to be the case then it would be relatively easy to approximately link the sessions (by assuming the next query from the same IP is the same user), but since there is no easy way to tell that the query wasn't proxied, it once again won't be worth their bother so I strongly doubt they would attempt to do so. Thus my policy is simply to accept Google cookies but delete them often, especially before and after all logged-in sessions. In Firefox this is trivial (Ctrl + Shift + Del). A proxy is an additional assurance but not essential. ____ 1. To be precise, a Google cookie holds a randomised 64 bit session ID, two Unix timestamps, and a mysterious 96 bit value believed to be a checksum. It is perfectly possible to load someone else's cookie (if they send it to you, or you somehow steal it) to see what their session looks like; beta testers sometimes do this to demonstrate new features to their friends. 
AOL and Yahoo both give more cookies, and the contents of their cookies are much more opaque; but they are still too short to store actual queries.

2. Third party cookies -- i.e. cookies sent from a server which hosts some of the content (banner ads) on the page but not the page itself -- should always be refused. To do this in Firefox, check the "for the originating site only" box in Options-->Privacy-->Cookies.

@Steve
It's been a while since I've seen the movie, but as I recall, the "I am Spartacus" defense resulted in the Romans deciding to crucify *everybody*. Perhaps not a good example to follow...

It seems to me that the imagined purpose for this tool is not to fool anyone actively surveilling your last mile, but rather:

1) To introduce "plausible deniability" - the computer did it, not me.
2) To deal with the possibility of search data being released; insert noise.

It isn't perfect on either count, but the analogy that commentator used - that of telling a police officer that you're breaking the law in N ways - shows he doesn't quite grasp it either. For ordinary, law-abiding people, with some minor tweaks this extension could deal effectively with both goals above. Someone actually up to no good might not want to use it because it might draw attention to them.

@Mike Sherwood:
What you're describing is an open proxy, and the Internet has a lot of them. Even if it didn't log accesses, it could, and if you used one over and over again then the authorities could wiretap it and connect inbound to outbound queries. And don't you think Tor might have some of the same drawbacks? I mean, suppose someone searches for a villainous term, and its last hop through Tor is your system - how would that differ from TrackMeNot sending the same query from your browser? Do you think that there's anything you could do about your ISP saving your emails? That you would have any control over them? They almost certainly log the sender/receiver, time and size in their mail logs.
If you use their proxy they're almost certainly logging every query. If you don't use their proxy, they could record outbound HTTP requests anyway. If everyone were using postcards, and someone suggested using an envelope, I can see similar arguments against the envelope. It might draw attention to you. However, if everyone uses an envelope... the game's a bit different, isn't it?

1) ticktock mentioned delaying your requests to the 12 sec intervals. Another idea would be to generate the intervals from your own interval patterns. Not too easy, but of course normal users don't query Google at random intervals. Sometimes I make an atomic search; more often it's multiple steps of widening, narrowing and switching keywords. Adopting very different search patterns, including very dumb ones like trying 'alice bob' after 'bob alice'...

2) Interesting to see whether - if at all - to use harmless patterns, or harmful ones. My first idea was using harmless words too. But of course a pin is better covered in a pinstack than in a haystack.

3) Remembering the NSA inspecting or logging all traffic, you don't need Google to leech information. Of course, organisations other than the NSA might be interested in the information too.

4) Instead of a keyword list from the vendor, you could use a personal dictionary extension as often found with spellcheckers and thesauruses. Thesaurus access is needed anyhow, or better, something more sophisticated, because after searching for 'sony screen' I might try 'sony monitor', 'sony display', 'graphics sony' and so on.

5) Do I need it? Would I use Google preparing a serious malignance? From my regular account? But I guess an investigation of regular, harmless searches might sometimes raise the attention of a filter, given the quality of TrackMeNot.

6) People using such a tool might be suspicious, while their searches aren't.

"And set your browser to delete search engine cookies regularly."

Or set your browser to specifically not accept Google cookies at all.
For Mac OS X Safari, e.g. using PithHelmet. Of course, they still have your IP address...

Right. Storage of 'words' seems a little weird; one can generate words with a simple algorithm. Who cares if it makes sense, if you're planning to waste their bandwidth anyway. :) But I wonder about the possible memory leaks within that extension: if it is open all the time querying search engines, that would mean a slower browser. So I don't see the advantage.

This poor little tool doesn't reach its aspired goal of anonymizing search queries. That can't be reached fully; it can come very close by using some more serious obfuscators like Tor, or war-driving through the inner city, swinging from one randomly chosen open access point to the other, using every single one only for some seconds. Beware: the latter is illegal in certain jurisdictions, but that's valid for the former too, of course. But TrackMeNot can reach another goal: put a lot of noise into the databases of the companies well known for poor data security (I'm tempted to add: "aka all publicly traded companies", but that would be unfair. Hopefully). When--not if, when!--these databases get public, the chance for a blackmailer to extort the male teacher because of his searching for "dating young blonde boys" is a bit lower. Yes, I've read Cicero too: "semper aliquid haeret"; the exaggeration was used to show that it isn't always a question of life or death. It is several orders of magnitude smaller in most of the cases (e.g. looking for "jobs at $COMPETITOR") but can still cost money and even ruin lives nevertheless. But I don't think that a simple browser plugin will suffice; it's a lot of work. Especially the word list is way too small. The average language has about 100,000 words. That doesn't include special (technical, medical and so on) terms, dialects, idioms and accents. The special terms alone can double that number quite easily.
It neither generates typos[1]--homines sumus!--nor includes different spellings (e.g. AEBEIE), and I won't even mention the fun you'll get with transliterations. It doesn't "refresh" names (there's always a "celebrity" du jour). I use a wordfile with some three million entries to check automatically generated passwords, and that thing is over 28MiB large! I can use a bloom filter to handle that mess easily and securely, but you can't do that here. Now imagine the necessary overhead for Javascript and you'll see that you have to localize the app. It will still be around 1-2 MiB, probably more in memory. The computing is negligible in these times of cheap multi-GHz processors, but the localization and sampling of new names and terms will most probably have to be done by hand; a simple parsing of news.google.com won't do it[2]. Apropos Google: I don't think that a company with an income mainly based on offering space for advertising is very enthusiastic about automated searching if no one, especially no human, is interested in the result. It can be seen as a DDoS if TrackMeNot gets a larger userbase. So, TrackMeNot can't even reach the lower fruits.

CZ

[1] There's always a perl script, no exception here ;-) (for an english keyboard) I've ported these lines to javascript (for a german keyboard) in the unlikely case someone needs it.
[2] Yes, it can be done. Not perfect, but it's possible. But in Javascript? I guess there are a lot of funnier ways to waste processor cycles ;-)

Does it actually BROWSE the search results? If not, it is pretty easy to separate the noise from the real searches, and the program is a waste of resources.

Ok, that's worked out I guess? Back on topic.

And: doh! OK, I clicked your name, and it shows a real name on the page ...
-> It does, because I'm the author :)
Well, my point I think is still valid.
-> That you have a serious problem and are in need of some counseling?
I doubt you post your real details on everything you do, online and offline, and if you do, you either live a very risky life or a very boring one, I'm not sure which.
-> I do.
But Google knows.
-> No she doesn't.

@0987654321 who said: So .. uh, I guess 1234567890 is your *real* name. Or is it your phone number. Or address. Perhaps I can google you on that string?
-> Why the fuss about a few digits? They might have a higher meaning to you than for me.
-> You seem to know much of me; well, I'm not one of them. What I made was a Firefox extension, made in my spare time. If you don't like it, don't use it. But don't confuse yourself, my friend.
-> What's up your ass anyway?

"And set your browser to delete search engine cookies regularly"
======================
What exactly does this do? I doubt that Google, AOL, etc. keep more than a few searches in your cookies. Most of what you searched for is stored on their system linked to your IP addr. Why do you think Google is building a 3-football-field data center in Oregon? A proxy connection is the only good way to protect yourself. And if too many people start doing this, watch for some laws to be passed making it illegal to use a proxy in the good 'ol USA.

"I didn't search for this, must have been TrackMeNot that did it!" It may not be much better at that, but at least it's a more realistic objective.
=======================
And it doesn't matter one bit when TPTB break down your front door, take your computer(s) and incarcerate you w/o charges for years while they "investigate".

"God doesn't know, but the company probably knows the amount of shrink from theft." The company being exactly whom? And how does he/she know the person who did the audit did not fudge the books? And that the people making the items did not make more of them than they said? And somehow got the materials at below cost (I wonder how?!)? Etc., etc. Watch Catch-22 if you want to know how the real world works.
Security is fine in theory; in real life, like God, it doesn't exist. It's an abstraction, an ideal. Get over it and enjoy what you do have; don't fret away your life worrying about what you could use. And then you will find you make clearer decisions about security matters as well, because your brain is working more efficiently. Stress is a killer, not just binary 1/0 death-style killing, but incremental eating away of our joy and our ability.

And re: anonymisers to defeat totalitarian states: of course, they can always take you in for obstructing the police in their duty, or whatever the equivalent crime is in your jurisdiction. Try disproving that one, even in "the west", or any of a hundred thousand made-up charges that are kept on the books so that police in "western democracies" all have arbitrary power of arrest. And when you resist, see if they are not violent and animal-like, just like the ones we are told about in Iraq and Afghanistan.

Enjoy what you have and understand that most of your joy comes about through some irrational happenstance, not through endless juggling of security parameters and respecifying of jargon. Lip service to authority is the best thing that was ever invented. All heil the king! People are the same everywhere. They're not that bad. And they're not that good. Better take sensible precautions, but don't go overboard, and don't waste time on something if it doesn't work, and above all, don't piss people off who are all set to exact retribution. Stress is the killer; it's also the aggravator. And something like this TrackMeNot will get better with refinements based on feedback. Yum.

The problem with aggregation sites like Scroogle is that we have to trust the aggregator, so the trusted party is just shifted, and perhaps the amount of information each of them can aggregate is reduced, if there is more than one of them.
AT&T, interestingly enough, apart from helping the NSA to snoop recently on most/all Americans, had published a useful tool that in theory would solve this problem. However, they never published the full source IIRC, or perhaps the license was just restrictive, and the tool, an aggregating anonymising proxy, was in any event removed from their site some years ago. See for a sketch of how it worked. Archived site:

And, yes, all the posts above, including the one at the top by 'bruce', were by me. NSA can verify this. But they will never know if it's because I, being Bruce Schnier, hacked their snooper to make it look that way. :P

doh! OK, I clicked your name, and it shows a real name on the page ... Well, my point I think is still valid. I doubt you post your real details on everything you do, online and offline, and if you do, you either live a very risky life or a very boring one, I'm not sure which. But Google knows.

@1234567890 "I don't get the fuss about getting profiled by the engines"
So .. uh, I guess 1234567890 is your *real* name. Or is it your phone number. Or address. Perhaps I can google you on that string?

"It seems like it would be much easier to just have sites for those people who don't want to be tracked that aggregate all of their queries. With a user base of more than 1, such a site would offer plausible deniability."

As well as the encrypting tunnel proxy sites, there is at least one site dedicated to exactly what you propose, for exactly this purpose:

To the people, including bruce, who criticize TrackMeNot because it searches for things other than 'kittens' and 'daisies': I think the idea behind these things is not that a single install protects a single user, but that if a lot of people use it, it becomes unreasonable to investigate people based on their search queries, since large numbers of people are performing queries which are 'just as bad'.
Coming up with a statistical filter that weeds out tools such as this is one thing; proving to a judge that your filter works accurately and that he should grant a warrant based on the fact that some moron searched for 'dirty cheese' three days before a food poisoning outbreak is another. Of course, the accurate criticism which applies is simply: do you think the cops give a shit whether you really are a bad person? Of course not, they just want to make arrests. It's why they became police. Give them a break.

For some protection, it might be useful to avoid the built-in search interface for an online service. Using an external browser and a third-party search engine might well reduce the chance of search queries being associated with a specific user over time. With a built-in search interface, it is easy to record the precise query and the identity of the user. If IP addresses are dynamically reassigned, knowing the IP address that is associated with a query URL is of less significance. When using a search engine separate from your ISP company, there are less likely to be problems if one of the companies does something wrong. Of course, new and/or inexperienced users are likely to favor the built-in search interface for their ISP. It is familiar in terms of branding and it is convenient. Incidentally, users of the Mozilla Firefox browser can disable the sending of referer headers. For more information, see

Instead of randomizing the times for fake queries, synchronize real queries to the fake-query timer. So if a new fake query occurs every 12 secs, and the user has a real query, then the next query sent will be a real one, not a fake one. Max latency is 12 secs. Also, I think it'd be better to use the AOL data to create the word lists, and also to create the number-of-word query patterns, at least until the program learns the patterns for your own queries. Adaptable heuristics.

@TimH "When Schneier proves that a certain security technique won't work...
is that what's known as a Bruce Force attack?"

Not sure, but if creating such proofs were among Bruce's strongest skills, they'd be Bruce Forte attacks.

@Bruce,
"""Good point. This is one of the reasons I do not use GMail. The fact that they save all my e-mail -- and I have no protections against them doing whatever they want with it -- is another."""

With various data retention acts (see "Ten Worst Privacy Debacles of All Time"), and a typical ISP's contract entitling them to do whatever they want with your traffic, how does your current email differ (in privacy terms) from GMail?
https://www.schneier.com/blog/archives/2006/08/trackmenot_1.xml
Patching Libraries to Instrument Downstream Calls

To instrument downstream calls, use the X-Ray SDK for Python to patch the libraries that your application uses. The X-Ray SDK for Python can patch supported libraries such as botocore, requests, sqlite3, and mysql-connector-python. When you use a patched library, the X-Ray SDK for Python creates a subsegment for the call and records information from the request and response. A segment must be available for the SDK to create the subsegment, either from the SDK middleware or from AWS Lambda.

Note: If you use SQLAlchemy ORM, you can instrument your SQL queries by importing the SDK's version of SQLAlchemy's session and query classes. See Use SQLAlchemy ORM for instructions.

To patch all available libraries, use the patch_all function in aws_xray_sdk.core. Some libraries, such as httplib and urllib, may need double patching enabled by calling patch_all(double_patch=True).

Example main.py – patch all supported libraries

import boto3
import botocore
import requests
import sqlite3

from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all

patch_all()

To patch individual libraries, call patch with a tuple of library names.

Example main.py – patch specific libraries

import boto3
import botocore
import requests
import mysql.connector

from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch

libraries = ('botocore', 'mysql')
patch(libraries)

Note: In some cases, the key that you use to patch a library does not match the library name. Some keys serve as aliases for one or more libraries:

- httplib – patches httplib and http.client
- mysql – patches mysql-connector-python

Tracing Context for Asynchronous Work

For asyncio integrated libraries, or to create subsegments for asynchronous functions, you must also configure the X-Ray SDK for Python with an async context. Import the AsyncContext class and pass an instance of it to the X-Ray recorder.

Note: Web framework support libraries, such as AIOHTTP, are not handled through the aws_xray_sdk.core.patcher module.
They will not appear in the patcher catalog of supported libraries.

Example main.py – patch aioboto3

import asyncio
import aioboto3
import requests

from aws_xray_sdk.core.async_context import AsyncContext
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch

xray_recorder.configure(service='my_service', context=AsyncContext())

libraries = ('aioboto3',)
patch(libraries)
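To see the general idea behind what patch() does, here is a minimal sketch that does not use the X-Ray SDK at all: it wraps a stand-in library function (the name fetch and the recorded dict fields are invented for the example) so that every call is recorded with its timing, the way a patched library emits a subsegment per call. The real SDK rebinds the library's own functions for you; this only illustrates the concept.

```python
import functools
import time

# Hypothetical stand-in for a library function we want to trace.
def fetch(url):
    return "response for " + url

recorded = []  # stand-in for emitted subsegments

def patch_with_tracing(fn):
    """Wrap a callable so each invocation is recorded with its name,
    arguments, and duration -- the essence of call-site patching."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return fn(*args, **kwargs)
        finally:
            recorded.append({"name": fn.__name__,
                             "args": args,
                             "duration": time.time() - start})
    return wrapper

fetch = patch_with_tracing(fetch)   # "patching" the function
print(fetch("http://example.com"))  # → response for http://example.com
print(recorded[0]["name"])          # → fetch
```

The try/finally ensures a record is emitted even when the wrapped call raises, which mirrors how tracers still close a subsegment on error.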
https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-python-patching.html
The QNetworkCookie class holds one network cookie. More...

#include <QNetworkCookie>

This class was introduced in Qt 4.4.

The QNetworkCookie class holds one such cookie as received from the network. A cookie has a name and a value, but those are opaque to the application (that is, the information stored in them has no meaning to the application). A cookie has an associated path name and domain, which indicate when the cookie should be sent again to the server. A cookie can also have an expiration date, indicating its validity. If the expiration date is not present, the cookie is considered a "session cookie" and should be discarded when the application exits (or when its concept of session is over).

QNetworkCookie provides a way of parsing a cookie from the HTTP header format using the QNetworkCookie::parseCookies() function. However, when received in a QNetworkReply, the cookie is already parsed.

This class implements cookies as described by the initial cookie specification by Netscape, which is somewhat similar to the RFC 2109 specification, plus the "HttpOnly" extension. The more recent RFC 2965 specification (which uses the Set-Cookie2 header) is not supported.

See also QNetworkCookieJar, QNetworkRequest, and QNetworkReply.

parseCookies(): Parses the cookie string cookieString as received from a server response in the "Set-Cookie:" header. If there's a parsing error, this function returns an empty list. Since the HTTP header can set more than one cookie at the same time, this function returns a QList<QNetworkCookie>.

operator=(): Copies the contents of the QNetworkCookie object other to this object.

operator==(): Returns true if this cookie is equal to other. This function only returns true if all fields of the cookie are the same. However, in some contexts, two cookies of the same name could be considered equal. See also operator!=().
http://doc.qt.nokia.com/main-snapshot/qnetworkcookie.html#domain
CURLOPT_RESOLVE explained

NAME
CURLOPT_RESOLVE - provide custom host name to IP address resolves

SYNOPSIS
#include <curl/curl.h>

CURLcode curl_easy_setopt(CURL *handle, CURLOPT_RESOLVE, struct curl_slist *hosts);

DESCRIPTION
Pass a pointer to a linked list of strings with host name resolve information to use for requests with this handle. The linked list should be a fully valid list of struct curl_slist structs properly filled in. Use curl_slist_append to create the list and curl_slist_free_all to clean up an entire list.

Support for providing the ADDRESS within [brackets] was added in 7.57.0. Support for providing multiple IP addresses per entry was added in 7.59.0.

AVAILABILITY
Added in 7.21.3. Removal support added in 7.42.0.

RETURN VALUE
Returns CURLE_OK if the option is supported, and CURLE_UNKNOWN_OPTION if not.

SEE ALSO
CURLOPT_IPRESOLVE, CURLOPT_DNS_CACHE_TIMEOUT, CURLOPT_CONNECT_TO
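Each string in the CURLOPT_RESOLVE list follows the form HOST:PORT:ADDRESS (with multiple comma-separated addresses allowed since 7.59.0, and bracketed IPv6 addresses since 7.57.0). As a minimal sketch, assuming that entry format, here is a small Python helper that builds such strings; the helper name resolve_entry is invented for the example:

```python
def resolve_entry(host, port, *addresses):
    """Build one CURLOPT_RESOLVE-style entry string:
    HOST:PORT:ADDRESS[,ADDRESS...]."""
    return "%s:%d:%s" % (host, port, ",".join(addresses))

entries = [
    resolve_entry("example.com", 443, "127.0.0.1"),
    resolve_entry("example.com", 443, "[::1]"),  # bracketed IPv6 form
]
print(entries[0])  # → example.com:443:127.0.0.1
```

In C, each such string would be appended to the list with curl_slist_append before being passed via curl_easy_setopt.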
https://curl.haxx.se/libcurl/c/CURLOPT_RESOLVE.html
Printing report from the form view

I want to print a report from my form view. For that I have to fetch data from a wizard. Can anyone tell me how to browse a record of the wizard in my main Python file?

Hi,

You can read the data from the wizard as follows:

data = self.read(cr, uid, ids)[0]

and then you can return those data when you return for printing the report. Ex:

def check_report(self, cr, uid, ids, context=None):
    if context is None:
        context = {}
    data = self.read(cr, uid, ids)[0]
    datas = {
        'ids': context.get('active_ids', []),
        'model': 'account.analytic.journal',
        'form': data
    }
    return {
        'type': 'ir.actions.report.xml',
        'report_name': 'account.analytic.journal',
        'datas': datas,
    }

Then you can access this data from your report as follows:

[[ data['form']['filter'] ]]

Why would you need wizard data for this? Please elaborate.

I have to give the print report button on the main form, so I need to define the method in my main Python file, and for that I need to fetch data from the wizard, like the price and quantity of the product. I am unable to get the price data from the wizard into my main Python file.

What specific report are you trying to print? That might help me understand what you're trying to do. Once you're done using a wizard, you should never need to access the wizard data again; the relevant info that came out of it should already be written to normal ERP objects.
I am trying to print a PDF report using SXW-to-RML conversion. The report is printing, but I am unable to see the wizard's data in the report. I am browsing the wizard's record as a normal ERP object in my main .py file, but the data is not showing. Even when I print that wizard data in my main file, it gives an error that the browse record list has no attribute 'price'.
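The shape of the report action returned by the answer's check_report can be inspected outside Odoo. The sketch below stubs out the ORM call (check_report_data is a hypothetical helper name; in Odoo the data would come from self.read(cr, uid, ids)[0] and active_ids from the context) and just builds the same dict structure:

```python
def check_report_data(read_result, active_ids):
    """Build the report-action dict in the shape the Odoo answer returns,
    with the ORM read stubbed out for illustration."""
    return {
        "type": "ir.actions.report.xml",
        "report_name": "account.analytic.journal",
        "datas": {
            "ids": active_ids,               # context.get('active_ids', []) in Odoo
            "model": "account.analytic.journal",
            "form": read_result,             # self.read(cr, uid, ids)[0] in Odoo
        },
    }

action = check_report_data({"filter": "all"}, [1, 2])
print(action["datas"]["form"]["filter"])  # → all
```

This makes the report-template access path explicit: `[[ data['form']['filter'] ]]` in the report corresponds to `action["datas"]["form"]["filter"]` here.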
https://www.odoo.com/forum/help-1/question/printing-report-from-the-form-view-34020
In the second part of this series, you saw how to collect the commit information from the git logs and send review requests to random developers selected from the project members list. In this part, you'll see how to save the code review information for follow-up each time the scheduler is run. You'll also see how to read emails to check whether the reviewer has responded to the review request.

Getting Started

Start by cloning the source code from the second part of the tutorial series.

git clone CodeReviewer

Modify the config.json file to include some relevant email addresses, keeping the royagasthyan@gmail.com email address. This is because the repository has commits associated with that particular email address, which the code needs in order to execute as expected. Modify the SMTP credentials in the schedule.py file:

FROM_EMAIL = "your_email_address@gmail.com"
FROM_PWD = "your_password"

Navigate to the project directory CodeReviewer and try to execute the following command in the terminal:

python scheduler.py -n 20 -p "project_x"

It should send the code review request to random developers for review.

Keeping the Review Request Information

To follow up on the review requests, you need to keep the information somewhere for reference. You can choose where to keep the code review request information: it can be a database, or it may be a file. For the sake of this tutorial, we'll keep the review request information inside a reviewer.json file. Each time the scheduler is run, it'll check the info file to follow up on the review requests that haven't been responded to.

Create a method called save_review_info which will save the review request information inside a file. Inside the save_review_info method, create an info object with the reviewer, subject, and a unique ID.

def save_review_info(reviewer, subject):
    info = {'reviewer': reviewer, 'subject': subject, 'id': str(uuid.uuid4()), 'sendDate': str(datetime.date.today())}

For a unique ID, import the uuid Python module.
import uuid

You also need the datetime Python module to get the current date. Import the datetime Python module.

import datetime

You need to initialize the reviewer.json file when the program starts, if it doesn't already exist.

if not os.path.exists('reviewer.json'):
    with open('reviewer.json', 'w+') as outfile:
        json.dump([], outfile)

If the file doesn't exist, you need to create a file called reviewer.json and fill it with an empty JSON array, as seen in the above code.

The save_review_info method will be called each time a review request is sent. So, inside the save_review_info method, open the reviewer.json file in read mode and read the contents. Append the new info to the existing content and write it back to the reviewer.json file. Here is how the code would look:

def save_review_info(reviewer, subject):
    info = {'reviewer': reviewer, 'subject': subject, 'id': str(uuid.uuid4()), 'sendDate': str(datetime.date.today())}
    with open('reviewer.json', 'r') as infile:
        review_data = json.load(infile)
    review_data.append(info)
    with open('reviewer.json', 'w') as outfile:
        json.dump(review_data, outfile)

Inside the schedule_review_request method, before sending the code review request mail, call the save_review_info method to save the review information:

save_review_info(reviewer, subject)
send_email(reviewer, subject, body)

Save the above changes and execute the scheduler program. Once the scheduler has been run, you should be able to view the reviewer.json file inside the project directory with the code review request information. Here is how it would look:
Here is how it would look: [{ "reviewer": "samson1987@gmail.com", "id": "8ca7da84-9da7-4a17-9843-be293ea8202c", "sendDate": "2017-02-24", "subject": "2017-02-24 Code Review [commit:16393106c944981f57b2b48a9180a33e217faacc]" }, { "reviewer": "roshanjames@gmail.com", "id": "68765291-1891-4b50-886e-e30ab41a8810", "sendDate": "2017-02-24", "subject": "2017-02-24 Code Review [commit:04d11e21fb625215c5e672a93d955f4a176e16e4]" }] Reading the Email Data You have collected all the code review request information and saved it in the reviewer.json file. Now, each time the scheduler is run, you need to check your mail inbox to see if the reviewer has responded to the code review request. So first you need to define a method to read your Gmail inbox. Create a method called read_email which takes the number of days to check the inbox as a parameter. You'll make use of the imaplib Python module to read the email inbox. Import the imaplib Python module: import imaplib To read the email using the imaplib module, you first need to create the server. Log in to the server using the email address and password: Once logged in, select the inbox to read the emails: You'll be reading the emails for the past n number of days since the code review request was sent. Import the timedelta Python module. import timedelta Create the email date as shown: Using the formatted_date, search the email server for emails. typ, data = email_server.search(None, '(SINCE "' + formatted_date + '")') It will return the unique IDs for each email, and using the unique IDs you can get the email details. ids = data[0] id_list = ids.split() first_email_id = int(id_list[0]) last_email_id = int(id_list[-1]) Now you'll make use of the first_email_id and the last_email_id to iterate through the emails and fetch the subject and the "from" address of the emails. 
for i in range(last_email_id, first_email_id, -1):
    typ, data = email_server.fetch(i, '(RFC822)')

data will contain the email content, so iterate over the data parts and check for a tuple. You'll be making use of the email Python module to extract the details, so import it:

import email

You can extract the email subject and the "from" address as shown:

for response_part in data:
    if isinstance(response_part, tuple):
        msg = email.message_from_string(response_part[1])
        print 'From: ' + msg['from']
        print '\n'
        print 'Subject: ' + msg['subject']
        print '\n'
        print '------------------------------------------------'

Here is the complete read_email method:

def read_email(num_days):
    try:
        # ... create the email server, log in, select the inbox,
        # and search for messages since the cutoff date, as shown above ...
        for i in range(last_email_id, first_email_id, -1):
            typ, data = email_server.fetch(i, '(RFC822)')
            for response_part in data:
                if isinstance(response_part, tuple):
                    msg = email.message_from_string(response_part[1])
                    print 'From: ' + msg['from']
                    print '\n'
                    print 'Subject: ' + msg['subject']
                    print '\n'
                    print '------------------------------------------------'
    except Exception, e:
        print str(e)

Save the above changes and try running the read_email method:

read_email(1)

It should print the email subject and "from" address on the terminal. Now let's collect the "from" address and subject into an email_info list. Instead of printing the subject and the "from" address, append the data to the email_info list. Here is the modified read_email method:

def read_email(num_days):
    try:
        email_info = []
        # ... same server setup, search, and fetch loop as above ...
        for response_part in data:
            if isinstance(response_part, tuple):
                msg = email.message_from_string(response_part[1])
                email_info.append({'From': msg['from'], 'Subject': msg['subject'].replace("\r\n", "")})
    except Exception, e:
        print str(e)
    return email_info

Adding Logging for Error Handling

Error handling is an important aspect of software development. It's really useful during the debugging phase to trace bugs. Without error handling, it gets really difficult to track down errors. Since the program is growing with a couple of new methods, I think it's the right time to add error handling to the scheduler code.

To get started with error handling, you'll need the logging Python module and the RotatingFileHandler class.
Import them as shown:

import logging
from logging.handlers import RotatingFileHandler

Once you have the required imports, initialize the logger as shown:

logger = logging.getLogger("Code Review Log")
logger.setLevel(logging.INFO)

In the above code, you initialized the logger and set the log level to INFO. Create a rotating file log handler, which will create a new file each time the log file reaches a maximum size.

logHandler = RotatingFileHandler('app.log', maxBytes=3000, backupCount=2)

Attach the logHandler to the logger object.

logger.addHandler(logHandler)

Let's add the error logger to log errors when an exception is caught. In the read_email method's exception part, add the following code:

logger.error(str(datetime.datetime.now()) + " - Error while reading mail : " + str(e) + "\n")
logger.exception(str(e))

The first line logs the error message with the current date and time to the log file. The second line logs the stack trace for the error.

Similarly, you can add error handling to the main part of the code. Here is how the code with error handling would look:

try:
    commits = process_commits()
    if len(commits) == 0:
        print 'No commits found'
    else:
        schedule_review_request(commits)
except Exception, e:
    print 'Error occurred. Check log for details.'
    logger.error(str(datetime.datetime.now()) + " - Error while reading mail : " + str(e) + "\n")
    logger.exception(str(e))

Wrapping It Up

In this part of the series, you saved the review request information in the reviewer.json file. You also created a method to read the emails. You'll be using both of these functions to follow up on the code review requests in the final part of this series. Source code from this tutorial is available on GitHub. Do let us know your thoughts and suggestions in the comments below.
https://code.tutsplus.com/tutorials/building-a-python-code-review-scheduler-keeping-the-review-info--cms-28316
Apr 26, 2011 03:05 AM|alexwieder|LINK Hi Everybody, I'm fairly new to MVC and am creating my first application. Last night I learned that when the framework generates html for an edit form, the textboxes lack the maxlength attribute. If this is information that's readily available from the database, why doesn't the framework generate the textboxes in what I'd assume would be the correct way? Took me a very long time to figure out that the reason why I was getting "String or binary data would be truncated" errors was due to an entry error as I was entering random data in the form rather than to a problem with my code. Is it possible to somehow have the framework generate the html with this information so that this [very hard to track-down] problem could be easily averted? (Not to mention that a simple "maxlength" attribute added to the html would save a lot of coding.) Thanks! Alex All-Star 47060 Points Moderator MVP Apr 26, 2011 06:40 AM|HeartattacK|LINK You can add a StringLength Data Annotations Attribute to the property in question. public class Person { [StringLength(...)] public string Name{get;set;} } The reason it doesn't read it from the database if coz it doesn't know where the data's coming from. It could be coming from a web service, created manually in the controller etc. Also, if you're new to MVC and aren't maintaining a legacy MVC 1 app, I'd recommend you use MVC 3. Apr 26, 2011 06:56 AM|alexwieder|LINK Hi Ashic! Makes a lot of sense for it not to make assumptions. I stuck with mvc 1 because that's what I had installed in my computer (now you know how long I've been trying to make time for this) and the tutorials I collected are also for that version. Is mvc 3.0 compatible with visual studio 2008? I tried to look into this, but microsoft's web site mentions something about needing vs2010 to take full advantage of it (I get paranoid when ms-talk is vague). Would mvc 2 be a better option in my case? Thanks for your quick reply! 
Alex All-Star 47060 Points Moderator MVP Apr 26, 2011 07:41 AM|HeartattacK|LINK MVC 3 needs 2010. MVC 2 is better than MVC1, but I'd recommend even getting the free Visual Web Developer 2010 to use MVC 3. MVC 3 is miles ahead of MVC 2 - and you get to use Razor as well. MVC 3 is more consise and easier to use than previous versions. 4 replies Last post Apr 28, 2011 06:26 AM by alexwieder
http://forums.asp.net/t/1675773.aspx?TextBox+MaxLength+in+MVC+1
Apache OpenOffice (AOO) Bugzilla – Issue 121754 Build HSQLDB with JDK 7 Last modified: 2013-08-03 08:05:03 UTC Patches attached to this email post on the Dev mailing list: (In reply to comment #0) > Patches attached to this email post on the Dev mailing list: > > That mail has no attachment. The patch should attached to this bug, and should be buildable (the patch copied in that mail is made from the unpacked sources; the developer should make a patch, put it on trunk/main/hsqldb/patches and add it to trunk/main/hsqldb/makefile.mk, *then* generate the patch). Created attachment 80307 [details] Buildable patch Based on. (In reply to comment #3) >. I've raised this concern several times (for example ) but people seems to misunderstand it, and still talk about "dropping java6" (whatever that could mean). Adapted the bug title. >. Unfortunately it seems impossible to compile classes that implement java.sql package interfaces to force a previous JRE target. Therefore a baseline JDK 5 is required for release builds. But there is no harm in committing the patch (after some mods) just to make it simpler for developers. Developers can already compile with JDK 6 as support is built into the existing HSQLDB sources. The patch needs changes to //#ifdef and //#endif lines as they must start at the beginning of the line (example below) + public class jdbcConnection implements Connection { ++//#ifdef JAVA7 The build.xml must also be updated to include a target for JDK7, something like: <target name="switchtojdk17" depends="switchtojdk16" ... You can look at build.xml for hsqldb 2.2.x for an example. I confirm that this patch allows building hsqldb with a Java 7 compiler. As clarified by Pedro on the dev list, we don't have a more comprehensive Java 7 patch. So what's preventing this from being committed? That the generated binaries would not run with Java 5/6? Would Fred's latest comment address this? If it is an elaborate issue, feel free to raise it on the dev list. 
> That the generated binaries would not run with Java 5/6? Would Fred's latest comment address this?

With the patch applied in its present form, the sources would not compile with JDK 5/6. HSQLDB has a preprocessor that comments in/out #ifdef tagged code that is valid/not valid for the JDK used for compile. The tags must be in the right place and the build.xml must be upgraded for JDK7. These changes are quite simple.

(In reply to comment #7)
> I confirm that this patch allows building hsqldb with a Java 7 compiler. As
> clarified by Pedro on the dev list, we don't have a more comprehensive Java
> 7 patch.
>
> So what's preventing this from being committed?

The patch is incomplete and has some errors, see comment 6 from Fred (who happens to be the HSQLDB developer, so he knows what he is talking about ;) ). Pedro pointed to the FreeBSD solution: which seems to be correct, as it also modifies build.xml, but the "debian" in the name of the patches suggests they took the patches from Debian, so the provenance/license is unclear. Note that I was/am/will not be working on this; I simply made the patch from the copy&paste in the mailing list because they pinged a developer to look at it.

@folling: are you working on this? I posted detailed instructions how to generate a patch that works in the build environment:

An updated build.xml has been committed to HSQLDB SVN. The patch for jdbcDataSource is incorrect. The inserted code stub must be moved further down (outside the existing #ifdef /#else tags). When I have time, I will add the Java 7 method stubs to the HSQLDB hsqldb_1_8_0_11 SVN repository, which can then be used to patch hsqldb_1_8_0_10.

Updated sources and build.xml file for HSQLDB 1.8.0.11 are available from the HSQLDB project SVN. See here: These files compile correctly with JDK7.
So, to make sure I understood this right, all relevant patches are now incorporated into HSQLDB 1.8.0.11 and it would be enough to update HSQLDB from 1.8.0.10 to 1.8.0.11 in the OpenOffice external source dependencies? Answering Rob: yes, this bug should be fixed in version 4. The original patch by follinge, as modified by Ariel, works for me. But from the comment above I understand that updating HSQLDB to 1.8.0.11 (which might imply rebasing some patches, I haven't checked) should be the best solution. Anyway, this issue should be fixed in 4.0 by integrating the patch or (probably better) by updating to HSQLDB 1.8.0.11. [Setting target 4.0, feel free to reset if we haven't agreed how to set it yet] I have not been working on this. I just wanted to get things working with a new version of java. I have been going back and forth with people on which versions of java are "supported". I'm still not clear on this. I just figured that it made since to build on newer versions of java so people with modern distros could build with a clean install of a newer linux distro. I can go back and put ifdefs in this, but I think that the correct decision is to NOT apply this patch and instead to update hsqldb if it will work on all versions of java. Plus, it should actually have the functions implemented instead of stubs which I planned on implementing if and when things became an issue. I think it is vital to do a build that will enable acceptable java 1.7+ usage by end users. We simply can not tell them to use java 1.6- with the security problems that exist in that old environment. If we can build with 1.6 but users can run acceptably with 1.7 and above that would be ideal. Apache buildbots are still at 1.6, with linux buildbot at openJDK 6. We appreciate continued assistance in this area. 
Created attachment 80961 [details] Update HSQLDB to version 1.8.0.11, released specifically to address this issue The attached patch updates HSQLDB to version 1.8.0.11, thus enabling to build the "hsqldb" module with both Java 6 and Java 7 (I don't have older versions at the moment). System requirements for users are not expected to be affected by this patch. Note that we download the package from OOo Extras, and that the package there is manually obtained from since does not offer version 1.8.0.11 at the moment. The package contents with respect to 1.8.0.10 are significantly different in size, so I will replace this 1.8.0.11 package with a more "official" version if that appears in the HSQLDB files area at SourceForge. Also note that two out of the four patches currently at get removed since they are already included in HSQLDB 1.8.0.11. Andrea -- Ok, I see the move to: Will you be updating: /main/external-deps.lst as well? (In reply to Kay from comment #18) > Will you be updating: > /main/external-deps.lst > as well? The patch already does it, see the first chunk at It is not committed yet, but complete. OK, sorry for the confusion on my part. I see this now. Time to apply and see what happens. Thanks. why is this not already checked in and tested for a while? If it is not critical and don't break the build with older Java versions. What's the current status exactly, doe sit build on all our major platforms, Linux, Mac, Windows, ...? I'm assuming at this point, we should just use Andrea's patches only? otherwise this doesn't make sense as Ariel's patch included i121754.patch on hslqdb and this is not mentioned in Andrea's makefile.mk. What I can say is that the patched code did build hsqldb on both Java 6 and Java 7 (Linux, 64 and 32 bit respectively). @fredt: I'm perplexed by comparing your comment here (where you mention that 'target name="switchtojdk17"' must be added) to the revision log of (where you added it in but you then removed it in ). 
Is this as expected? (In reply to Andrea Pescetti from comment #23) > @fredt: I'm perplexed by comparing your comment here I simplified the added code in the SVN to avoid using new #ifdef blocks. The SVN head you are looking at will compile correctly on Java 5 as well as later versions. I would like to incorporate the remaining patches at but I will probably use slightly different code. If this will not cause confusion then I can do it today. Then you can retire those patches and use a diff between 1.8.0.10 release code and 1.8.0.11 SVN for the patch. Andrea, please get in touch direct to iron this out and and get it done quickly. in response to Andrea #23: Yes, I also incorporated the IFDEF statements for a build in April, but it seems there were execution issues with java 7. However, the other patches you now have here were not in place. So, I'll see. "pescetti" committed SVN revision 1500167 into trunk: #i121754# Patch HSQLDB to align it to version 1.8.0.11, enable building on Ja... Created attachment 81010 [details] Patch committed in r1500167 Some more details: - Fred Toussi (thanks) made a newer 1.8.0.11 release (newer than the previous 1_8_0_11 tag) available at ; but this was not released in ZIP form, so the previous idea of simply updating the ZIP file is not the best. - The 1_8_0_11 tag supersedes the four patches we used to apply and enables building with Java 7. - The attached patch, as agreed with Fred, takes care of updating the current 1.8.0.10 with all code changes done in the 1_8_0_11 tag. To create it, I started with the ZIP version of 1.8.0.10 and compared all files to the 1_8_0_11 tag. There were dozens of false positives due to the "$Id$" tags, whitespace changes and other similar differences. I only kept files that were significantly different, plus one new file (src/org/hsqldb/lib/StringComparator.java) that is in 1_8_0_11 but not in the 1.8.0.10 ZIP file. So the final result should be that all non-trivial (whitespace, comments...) 
changes from 1_8_0_11 are backported. - This was tested by building with Java 6 and Java 7; the hsqldb module built successfully on both systems. OK, I'm having problems building -- have junit enabled for QA build -- and stops in the same place -- complex/connectivity/HsqlDriverTest.java:34: error: package org.hsqldb.lib does not exist import org.hsqldb.lib.StopWatch; ^ complex/connectivity/hsqldb/TestCacheSize.java:36: error: package org.hsqldb.lib does not exist import org.hsqldb.lib.StopWatch; The current deliverd hslqdb with latest snapshot contains lib/Stopwatch.class So, some questions -- I have your latest patches in my working copy. And, it seems current main/external_deps.lst has this: if (SOLAR_JAVA == TRUE) MD5 = 17410483b5b5f267aa18b7e00b65e6e0 name = hsqldb_1_8_0.zip URL1 = URL2 = $(OOO_EXTRAS)$(MD5)-$(name) If this is correct, then something is wrong -- where is org.hsqldb.lib.StopWatch.class? (In reply to Kay from comment #28) > where is org.hsqldb.lib.StopWatch.class? The class is in hsqldb.jar. The hsqldb.jar must be built before the test classes. This jar must be included in the classpath when the test classes are built re Fred's comment -- Yes. My conclusion is that in my recent build, hsqldb was either not built at all (likely since I don't see it anywhere in the build tree), or my build sequence is faulty. Will investigate by trying manual build with the provided ant build.xml. what is the current status of this issue now? It seems to be really a blocker at the moment. Created attachment 81042 [details] remove the failiing part The part, where the readme file is patched, does not apply on Windows but produces a build break. The attached patch removes that part. I wonder, that the file i121754.patch has DOS line ends (and so has this patch). In addition I currently struggle with git. So please have a look and help fixing it on master. All-clear for line ends, trunk is OK, but I have to repair my local copy. 
Looks like the root cause of the non-applying patch is a DOS line start, ie a carriage return at the start of a line. This is not handled by the patch command. I played around with the patch (helped by the Emacs Diff mode) and when the offending line is left untouched then the whole patch applies without further problems. With this idea (not changing the line with the ^M at the beginning) the resulting readme file would look like this: Readme File June 2013 ^MThis package contains HSQLDB 1.8.0.10 This package contains HSQLDB 1.8.0.11 (please ignore the line above). ... When I examine i121754.patch with a hex editor, I see mixed line ends in the file. It starts with 0A but later on a lot of 0D0A exists. So my previous statement "trunk is OK regarding line ends" might be wrong. Good detective work, Regina -- hex editors come in VERY handy for this type of thing. No wonder patch is having problems. At the very least, I think we should change the patch to consistent line ends. I wonder how the buildbot is getting past this issue. Maybe setup to just try ANY kind of line end, I don't know. "pescetti" committed SVN revision 1501409 into trunk: #i121754# Simplify the patch to avoid line-end problems. Created attachment 81043 [details] Patch committed in r1501409 Status: this new patch removes all ^M characters (line-ends problems) and only keeps the code differences, thus removing readme and HTML files. If you examine the options given to diff, you'll see that I had to fiddle with line-ends due to the fact that the two versions have different line-end conventions. So the patch was not mixing line-ends per se, it was a diff between files with different line-ends conventions. Tested like the other time: hsqldb module built successfully under Java 6 and 7 (Linux). Regina, thanks for your feedback and if you now can build on Windows too then this should be fine. I have build successfully on Windows 7 with Java 6. 
is it possible to update the status of this issue MacOS and Windows builds with "release build options" are ok Still not building hsqldb for me on rev 1502186 with openjdk7 in Linux. I will investigate further. @Kay: Can you describe you current build problem? Basically, the build gets down to building "connectivity", which depends on hsqldb being built, but hsqldb is NOT built, that is for certain! No errors or warnings occur for the hsqldb build -- if it'd even invoked correctly -- so that's all I can tell you at the moment. I was going to do a build --from hsqldb to see what would happen. At this point, due to recent changes on my system, I don't know if it's an ant problem -- maybe? -- or the new java (openJDK 7 instead of Oracle java 7 <--- this one worked ok on older build in April) or what. So, more information when I know more. remove showstopper request because we currently build our release not with Java 7 For 4.1 I propose to switch completely to Java 7 and change our build dependencies accordingly. But for now this is no stopper issue for me. This issue was fixed on trunk before the AOO400 release branch was created: So it is fixed in 4.0 too: we can build OpenOffice 4 with Java 7 (with HSQLDB 1.8.0.10 patched to incorporate all the few code changes from 1.8.0.11). For 4.1 we should look into moving to HSQLDB 2.x, but there is already an issue (and an old CWS) for that. @Kay: since you never managed to successfully finish the build before or after the fix, I suggest that you open a separate issue and put me in CC.
https://bz.apache.org/ooo/show_bug.cgi?id=121754
In this tutorial I will document the process of adding a database to your flask app. We'll use the following tools: - Flask-SQLAlchemy: ORM mapping - Flask-Migrate: DB Migrations for SQLAlchemy - Flask-Testing: Testing for SQLAlchemy Let's add all of these to requirements.txt to get started: ... Flask-SQLAlchemy Flask-Script Flask-Migrate Flask-Testing ... Step 1: Add Flask-SQLAlchemy to our app This part is easy: app.py from flask import Flask from flask.ext.sqlalchemy import SQLAlchemy app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///db.db' db = SQLAlchemy(app) Step 2: Let's set up testing from the get-go: We're using Flask-Testing to test our database. Here is an example TestCase which we'll use to test our code: tests.py from flask.ext.testing import TestCase from app import db, app TEST_SQLALCHEMY_DATABASE_URI = "sqlite:///test.sqlite" class MyTest(TestCase): def create_app(self): app.config['SQLALCHEMY_DATABASE_URI'] = TEST_SQLALCHEMY_DATABASE_URI return app def setUp(self): db.create_all() def tearDown(self): db.session.remove() db.drop_all() What's going on here: - We import our app from our flask project and we configure a test database uri. We don't want to be running our tests against a production database by mistake! - In setUp()we create a fresh database (this is run before each test in the TestCase) - In tearDown()we destroy our database again. This ensures that each test is running against a clean database in a predictable state. You should be able to run this successfully with your test runner of choice. I like nose, so I would run this with: $ nosetests .. 
---------------------------------------------------------------------- Ran 0 tests in 0.000s OK Step 3: Create a model: models.py from app import db class User(db.Model): id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(80), unique=True) email = db.Column(db.String(120), unique=True) def __init__(self, username, email): self.username = username self.email = email def __repr__(self): return '<User %r>' % self.username Step 4: Test our model In tests.py we can add a couple of tests. Firstly, in setUp(), let's create some initial users: def setUp(self): db.create_all() ## create users: user = User('joe', 'joe@soap.com') user2 = User('jane', 'jane@soap.com') db.session.add(user) db.session.add(user2) db.session.commit() Let's test querying for all users: def test_get_all_users(self): users = User.query.all() assert len(users) == 2, 'Expect all users to be returned' and fetching a specific user: def test_get_user(self): user = User.query.filter_by(username='joe').first() assert user.email == 'joe@soap.com', 'Expect the correct user to be returned' Again, we can run this with our test runner of choice: $ nosetests tests/db_tests.py .. ---------------------------------------------------------------------- Ran 2 tests in 0.037s OK Excellent! We've setup Flask to use a simple database. We've created our first ORM model and we've got some unit tests making sure it's all working as expected. Last step: we need to get migrations in place. Step 5: Setup migrations The last part of the puzzle is to setup our DB migrations What are DB migrations? Part of the purpose of using an ORM is to abstract away the database. This means that your ORM mappings need to be tightly coupled to the db schema. Migrations are a way to manage this coupling with code in a mostly painless way. Migrations make sure that your database is in a state that matches your code. We're going to use: Flask-Migrate for this. Adding Flask-Migrate to our app is easy. 
Create a file called manage.py manage.py from flask.ext.script import Manager from flask.ext.migrate import Migrate, MigrateCommand from app import app, db manager = Manager(app) migrate = Migrate(app, db) manager.add_command('db', MigrateCommand) if __name__ == "__main__": manager.run() Notes: - If you already have a mange.py file, you can simply add the relevant parts. - See Flask-Script for more info on running commands with Flask We're now ready to add migrations to our project. To start managing the project with migrations run: $ python manage.py db init This will create a migrations folder which will contain all the migrations we need for our project. We then need to generate our initial migration. Use: $ python manage.py db migrate Then, when we make changes to our models, we need to generate a migration to apply these changes to our database. We do this with: $ python manage.py db upgrade You will need to run upgrade every time you make changes to your models. Now, when checking out the project from fresh, a new user can simply run: python app.py db upgrade and all migrations will be run so that their application's database is in the correct state.
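To see what the ORM is abstracting away, the User model and queries above correspond roughly to SQL like the following stdlib-only sqlite3 sketch. The schema here is illustrative; it is not the exact DDL Flask-SQLAlchemy emits:

```python
import sqlite3

# Roughly what the User model above maps to at the SQL level (illustrative).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE user ("
    " id INTEGER PRIMARY KEY,"
    " username VARCHAR(80) UNIQUE,"
    " email VARCHAR(120) UNIQUE)"
)

# Equivalent of db.session.add(...) / db.session.commit() in setUp().
conn.execute("INSERT INTO user (username, email) VALUES (?, ?)", ("joe", "joe@soap.com"))
conn.execute("INSERT INTO user (username, email) VALUES (?, ?)", ("jane", "jane@soap.com"))
conn.commit()

# Equivalent of User.query.all() and User.query.filter_by(username='joe').
rows = conn.execute("SELECT username, email FROM user ORDER BY id").fetchall()
joe = conn.execute("SELECT email FROM user WHERE username = ?", ("joe",)).fetchone()
print(rows)
print(joe)
```

Migrations, in turn, are versioned scripts that apply ALTER TABLE / CREATE TABLE statements like these so the live schema keeps up with the models.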
https://blog.toast38coza.me/adding-a-database-to-a-flask-app/
On Thu, 26 Dec 2002, Eduard Bloch wrote:
> > (3) that can be installed alongside another tar
> > I think all this discussion is worthless until we see real problems. And
>
> Don't even think about it. Go through the archives and realize the dimensions
> of disputes, caused by latest tar option change (-j vs. -I). You are going to
> create x times more trouble.

If tar sucks and star wants to become the default, then star needs to be able to emulate tar. A possible solution could look like this:

$ cat star.c
...
bool behave_like_tar = FALSE;
#ifdef EMULATION_MODE
if ( called_as("tar") )
    behave_like_tar = TRUE; // braindammage on ;-)
#endif
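The called_as() helper in the sketch above is left undefined; a minimal version (name kept from the post, implementation assumed here) would compare the basename of argv[0], the classic busybox-style trick for one binary behaving differently per invoked name:

```c
#include <stdio.h>
#include <string.h>

/* Return nonzero if the program was invoked under `name`,
 * judging by the basename of argv[0]. */
int called_as(const char *argv0, const char *name) {
    const char *base = strrchr(argv0, '/');
    base = base ? base + 1 : argv0;   /* strip any leading path */
    return strcmp(base, name) == 0;
}
```

A star binary shipped with a `tar` symlink pointing at it could then flip behave_like_tar based on `called_as(argv[0], "tar")`.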
https://lists.debian.org/debian-devel/2002/12/msg01539.html
#include <Transfer_ActorOfProcessForTransient.hxx> Returns the Last status (see SetLast). Returns the Actor defined as Next, or a Null Handle. Returns a Binder for No Result, i.e. a Null Handle. in STEPControl_ActorRead, and IGESToBRep_Actor. If <mode> is True, commands an Actor to be set at the end of the list of Actors (see SetNext) If it is False (creation default), each add Actor is set at the beginning of the list This allows to define default Actors (which are Last) Defines a Next Actor : it can then be asked to work if <me> produces no result for a given type of Object. If Next is already set and is not "Last", calls SetNext on it. If Next defined and "Last", the new actor is added before it in the list. Specific action of Transfer. The Result is stored in the returned Binder, or a Null Handle for "No result" (Default defined as doing nothing; should be deferred) "mutable" allows the Actor to record intermediate information, in addition to those of TransferProcess. Reimplemented in Transfer_ActorOfTransientProcess. Prepares and Returns a Binder for a Transient Result Returns a Null Handle if <res> is itself Null.
https://dev.opencascade.org/doc/occt-7.6.0/refman/html/class_transfer___actor_of_process_for_transient.html
Racc is an LALR(1) parser generator for Ruby. It is written in Ruby and generates Ruby code. Almost all functions of yacc(1) is implemented. WWW: No installation instructions: this port has been deleted. The package name of this deleted port was: ruby19-racc ruby19-racc PKGNAME: ruby19-racc NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered. This port is required by: No options to configure Number of commits found: 59 broken with ruby 1.9 -remove MD5 - Assign all unmaintained ruby ports to ruby@, so people will know where to send questions to. - Update to 1.4.5 PR: ports/96801. Fix plist. Add SIZE data. Submitted by: trevor Update to 1.4.4. De-pkg-comment. Update to 1.4.3 (revision 2) and update URLs. Update WWW and the author email. Remove from MASTER_SITES, now that bsd.ruby.mk adds a backup site to MASTER_SITE_BACKUP. Use RUBY_MOD*. Define USE_RUBY_FEATURES instead of hardcoding conditional *_DEPENDS. devel/ruby-racc-runtime has been replaced with lang/ruby16-shim-ruby18. The backup site directory has been moved. The master site is down right now, so provide an alternative site. Update to 1.4.2. Update MASTER_SITES. Update to 1.4.1. Update to 1.3.12. Update to 1.3.11. Update to 1.3.10. Clean up. Update to 1.3.9. Update MASTER_SITES, WWW and the author's email. Update to 1.3.8. Update to 1.3.7. Add %%PORTDOCS%%. Add ruby-strscan to RUN_DEPENDS. Seems y2racc requires it. Update to 1.3.6. Update to 1.3.5. Update to 1.3.3. Fix the breakage of ruby-racc-runtime. Update to 1.3.2. Update to 1.3.0. Update to 1.2.6. Update to 1.2.5. Convert category devel to new layout. Now bsd.ruby.mk is automatically included by bsd.port.mk when USE_RUBY or USE_LIBRUBY is defined, individual ruby ports no longer need to include it explicitly. Update to 1.2.4. Update md5. Update fundamental ruby ports first with bsd.ruby.mk. 
To separate runtime libraries from this port, add a "RUNTIME" knob for the forthcoming slave port. Update to 1.2.3. Make all these Ruby related ports belong also in the newly-added "ruby" virtual category. Do The Right Thing. (R) Update to 1.2.2. Set DIST_SUBDIR=ruby for all these Ruby ports to stop distfile namespace pollution. Depend on ruby-amstd and get rid of the amstd installation of this port. Follow our hier(7) policy: share/doc/ruby/*/examples -> share/examples/ruby/* Add more Ruby ports. Servers and bandwidth provided byNew York Internet, SuperNews, and RootBSD 10 vulnerabilities affecting 29 ports have been reported in the past 14 days * - modified, not new All vulnerabilities
http://www.freshports.org/devel/ruby-racc/
8.1. Unlink a File¶

There are several standard terms for deleting a file: unlink, delete, remove and erase. The system call to delete a file is unlink() because that term accurately describes what happens. Deleting a file removes the pointer to the file, but the file itself remains unchanged. Similarly, removing an address from a map doesn't affect the actual building. Other APIs or GUIs use terms that describe what the user sees, such as remove or erase, instead of what the code does.

- Both Windows and POSIX provide the functions unlink and remove to delete a file.
- The POSIX function remove() is at a higher level because it calls unlink() to remove a file and rmdir() to remove a directory. You can see the additional functionality in the POSIX remove source code.
- The Windows unlink and remove functions have the same prototype and presumably do the same thing (not verified).
- unlink: Deletes a file

// returns 0 if successful; Otherwise returns -1;
int unlink(
    const char *filename  // Name of file to delete.
);

- remove: Deletes a file

// returns 0 if successful; Otherwise returns -1;
int remove(
    const char *filename  // Name of file to delete.
);

Note: Any open handles to the file must be closed, or unlink will return an error.

Template Code¶

Required Include Files

#include <stdio.h>

Basic Usage

char *filename = "delete-me.txt";

// unlink() or remove()
int unlink_status = unlink(filename);

// Check for errors
//  0 if the file deleted successfully
// -1 if an error occurred

Task: Delete a File¶

Create a file called lab8.1.c for this task. First, use unlink and then test your code using remove.

Add error handling to unlink() by evaluating the return value. Execute the code using an invalid file. Verify that your program prints an error message with the PID.

Expected Output
Unable to delete file 'delete-me.txt'

Next, create the file. Execute the code again.
Verify that the file deletes and unlink_status has a value of 0.

Expected Output
Successfully deleted file 'delete-me.txt'

Replace unlink with remove and verify that both functions operate identically.

Hint: This method is a hack, so don't use it for anything besides testing or as a developer tool. You can create a file using echo in the command line:

echo "some text" > filename.txt

You can wrap the command in the function system() to run a command from your code:

system("echo \"some text\" > filename.txt");
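Putting the pieces together, a minimal sketch of the success path looks like this. The helper name and throwaway filename are illustrative, not part of the lab starter code:

```c
#include <stdio.h>
#include <unistd.h>

/* Create a throwaway file, then delete it with unlink().
 * Returns 0 on success, -1 on failure (same convention as unlink). */
int create_and_unlink(const char *filename) {
    FILE *f = fopen(filename, "w");
    if (f == NULL) {
        return -1;
    }
    fputs("some text\n", f);
    fclose(f);                    /* handle must be closed before unlink */
    if (unlink(filename) != 0) {
        perror("Unable to delete file");
        return -1;
    }
    return 0;
}
```

Swapping `unlink` for `remove` in the helper should behave identically for regular files, which is exactly what the task asks you to verify.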
https://labs.bilimedtech.com/operating-systems/8/1.html
One Liner - itertools.groupby solution in Creative category for Call to Home by vlad.bezden """Call to Home Nicola believes that Sophia calls to Home too much and her phone bill is much too expensive. He took the bills for Sophia's calls from the last few days and wants to calculate how much it costs. The bill is represented as an array with information about the calls. Help Nicola to calculate the cost for each of Sophia calls. Each call is represented as a string with date, time and duration of the call in seconds in the follow format: "YYYY-MM-DD hh:mm:ss duration" The date and time in this information are the start of the call. Space-Time Communications Co. has several rules on how to calculate the cost of calls: on the day when they began. For example if a call was started 2014-01-01 23:59:59, then it counted to 2014-01-01; For example: 2014-01-01 01:12:13 181 2014-01-02 20:11:10 600 2014-01-03 01:12:13 6009 2014-01-03 12:13:55 200 First day -- 181s≈4m -- 4 coins; Second day -- 600s=10m -- 10 coins; Third day -- 6009s≈101m + 200s≈4m -- 100 + 5 * 2 = 110 coins; Total -- 124 coins. Input: Information about calls as a tuple of strings. Output: The total cost as an integer. Precondition: 0 < len(calls) ≤ 30 0 < call_duration ≤ 7200 The bill is sorted by datetime. """ from itertools import groupby from typing import Tuple def total_cost(calls: Tuple[str]) -> int: return sum( max(mins, mins * 2 - 100) for mins in ( sum((int(m[20:]) + 59) // 60 for m in t) for _, t in groupby(calls, lambda i: i[:10]) ) ) if __name__ == "__main__": result = total_cost( ( "2014-01-01 01:12:13 181", "2014-01-02 20:11:10 600", "2014-01-03 01:12:13 6009", "2014-01-03 12:13:55 200", ) ) assert result == 124, "Base example" result = total_cost( ( "2014-02-05 01:00:00 1", "2014-02-05 02:00:00 1", "2014-02-05 03:00:00 1", "2014-02-05 04:00:00 1", ) ) assert result == 4, "Short calls but money..." 
    result = total_cost(
        (
            "2014-02-05 01:00:00 60",
            "2014-02-05 02:00:00 60",
            "2014-02-05 03:00:00 60",
            "2014-02-05 04:00:00 6000",
        )
    )
    assert result == 106, "Precise calls"
    print("PASSED!!!")

May 24, 2020
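The one-liner leans on itertools.groupby keying each record by its date prefix (the first 10 characters), which works because the precondition guarantees the bill is sorted by datetime. A small standalone illustration of just that grouping step:

```python
from itertools import groupby

# Group call records by their date prefix (first 10 chars), the same
# key the solution above uses; input must already be sorted by date.
calls = [
    "2014-01-01 01:12:13 181",
    "2014-01-03 01:12:13 6009",
    "2014-01-03 12:13:55 200",
]

# Map each day to its list of duration strings (chars 20 onward).
per_day = {day: [c[20:] for c in group]
           for day, group in groupby(calls, key=lambda c: c[:10])}
print(per_day)
```

Note that groupby only merges adjacent equal keys; on an unsorted bill the same day could appear as several groups, which is why the sorted-input precondition matters.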
https://py.checkio.org/mission/calls-home/publications/vlad.bezden/python-3/one-liner-itertoolsgroupby/share/524e36c12fe7c2477a95e98e1211caf9/
I want to create a function that reads data of type integer or double from a text file and stores it in a 2-dimensional array. But the number of data items in the text file is unknown during program execution, so the size of the array is also unknown. I have problems reading the correct data values of type int and double and storing them in the array. I also can't determine the correct total of columns and rows for the array. My data in the text file looks something like this:

11001101010101010
10101011111111111
00111011111100001

Currently, the data in my file consists of 32 rows and 34 columns, and I tried using a vector as shown below. But the program didn't return the correct row and column counts. It read the rows as 28 and the columns as 1 only. Also, the program didn't seem able to read the data values into the vector as array data. No data is displayed on screen.

Code:
#include <fstream>   // for ifstream
#include <iostream>  // for cout
#include <vector>    // for vector
#include <iomanip>   // for setw
#include <string>    // for string
#include <sstream>   // for stringstream

template <typename T>
void ReadMatrix(const char *fname, std::vector< std::vector<T> > &matrix)
{
    using namespace std;
    T value;
    string line;
    ifstream in(fname);
    vector<T> v;
    while (getline(in, line))
    {
        v.clear();
        stringstream ss(line);
        while (ss >> value)
            v.push_back(value);
        matrix.push_back(v);
    }
}

template <typename T>
void PrintMatrix(std::ostream &out, const T &matrix)
{
    for (int i = 0; i < matrix.size(); ++i)
    {
        for (int j = 0; j < matrix[i].size(); ++j)
        {
            out << std::setw(10) << matrix[i][j];
        }
        out << "\n";
    }
}

typedef std::vector< std::vector<int> > IMATRIX;

int main()
{
    IMATRIX imatrix;
    // optionally reserve to make reading in a little more efficient
    ReadMatrix("test.txt", imatrix);
    PrintMatrix(std::cout, imatrix);
    return 0;
}

Later, I need to scan each row of data for 0s and store them in a text file, then I access the text file to normalize all 0s by rows. Therefore I need to save the information about the location of the 0s (which rows and columns). After the normalization, I need to replace the 0s in the later text file with the new values at the correct locations. How do I save the location values (rows and columns) and correspond them with their new normalized values? Thanks in advance.
https://cboard.cprogramming.com/cplusplus-programming/69513-how-read-unknown-total-int-data-text-file-2-dim-array-c-cplusplus.html
When I was browsing various web sites one day to see what electronic components to buy, I noticed some interesting little kits comprising switches, LEDs and a couple of 4-digit seven-segment displays, so I decided to purchase one. The first thing I noticed was that at the heart of the module was a chip called a TM1638, which I had never heard of. A quick search dug up links to the datasheet (link supplied underneath in the links section) and an Arduino library (in the code section). That makes life easier.

The module I bought had 5 connections.

VCC – 5v from Arduino
Gnd – GND from Arduino
STB – strobe pin, an output from your Arduino
CLK – clock pin, an output from your Arduino
DIO – data pin, another output from your Arduino

Layout

Code

You can get a library to make development easier from

In the example below we use a couple of functions built into the library; there are various others available. We will count to 100 and then display a message.

setDisplayToDecNumber – decimal numbers will be displayed
setDisplayToString – displays some text on the segments

#include <TM1638.h>

// define a module on data pin 8, clock pin 9 and strobe pin 10
TM1638 module(8, 9, 10);

unsigned long a = 1;

void setup()
{
}

void loop()
{
  for (a = 1; a <= 100; a++)
  {
    module.setDisplayToDecNumber(a, 0, false);
    delay(100);
  }
  module.setDisplayToString("Complete");
  delay(1000);
}

Links

TM1638 F71A 8* Digital Tube + 8* Key + 8* Double Color LED Module
TM1638 LED keyboard scanning and display module
http://arduinolearning.com/learning/basics/arduino-tm1638-module.php
So your thoughts on seeing this title might be "Holy cow, an article about imports…could there be anything more boring?" But bear with me. This is actually something that is pretty easy to do correctly. But if you're even a bit lazy (as I often am), you'll do it wrong. And that can have really bad effects on you and your teammates' productivity.

Imagine someone is trying to make their first contribution to your codebase. They have no idea which functions are defined where. They aren't necessarily familiar with the libraries you use. So what happens when they come across a function they don't know? They'll search for the definition in the file itself. But if it's not there, they'll have to look to the imports section.

Once you've built a Haskell program of even modest size, you'll appreciate the importance of the imports section of any source file. Almost any nontrivial program requires code beyond the base libraries. This means you'll have to import library code that you got through Stack or Cabal. You'll also want different parts of your code working together. So naturally your different modules will import each other as well.

When you write your imports list properly, your contributors will love you. They'll know exactly where they need to look to find the right documentation. Write it poorly, and they'll be stuck and frustrated. They'll lose all kinds of time googling and hoogling function definitions.

Tech stacks like Java and iOS have mature IDEs like IntelliJ or XCode. These make it easy for someone to click on a method and find documentation for it. But you can't count on people having these features for their Haskell environment yet. So now imagine the function or expression they're looking for is not defined in the file they're looking at. They'll need to figure out themselves which module imported it. Here are some good practices to make this an easy process.
Only Import the Functions You Need

The first way to make your imports clearer is to specify which functions you import. The biggest temptation is to only write the module name in the import. This will allow you to use any function from that library. But you can also limit the functions you use. You do this with a parenthesized list after the module name.

import Data.List.Split (splitOn)
import Data.Maybe (isJust, isNothing)

Now suppose someone sees the splitOn function in your code and doesn't know what it does, or what its type is. By looking at your imports list, they know they can find out by googling the Data.List.Split library.

Qualifying Imports

The second way to clarify your imports is to use the qualified keyword. This means that you must prefix every function you use from this module with a name assigned to the module. You can either use the full module name, or you can use the as keyword. This indicates you will refer to the module by a different, generally shorter name.

import qualified Data.Map as M
import qualified Data.List.Split

...

myMap :: M.Map String [String]
myMap = M.fromList [("first", Data.List.Split.splitOn "abababababa" "a")]

In this example, a contributor can see exactly where our functions and types came from. The fromList function and the Map data structure belong to the Data.Map module, thanks to the "M" prefix. The splitOn function also clearly comes from Data.List.Split.

You can even import the same module in different ways. This allows you to namespace certain functions in different ways. This helps to avoid prefixing type names, so your type signatures remain clean. In this example, we explicitly import the ByteString type from Data.ByteString. Then we also make it a qualified import, allowing us to use other functions like empty.
import qualified Data.ByteString as B
import Data.ByteString (ByteString)

...

myByteString :: ByteString
myByteString = B.empty

Organizing Your Imports

Next, you should separate the internal imports from the external ones. That is, you should have two lists. The first list consists of built-in packages. The second list has modules that are in the codebase itself. In this example, the "OWA" modules are from within the codebase. The other modules are either Haskell base libraries or downloaded from Hackage:

import qualified Data.List as L
import System.IO (openFile)
import qualified Text.PrettyPrint.Leijen as PPrint

import OWAPrintUtil
import OWASwiftAbSyn

This is important because it tells someone where to look for the modules. If it's from the first list, they will immediately know they need to look online. For imports on the second list, they can find the file within the codebase. You can provide more help by name-spacing your module names. For instance, you can attach your project name (or an abbreviation) to the front of all your modules' names, as above. An even better approach is to separate your modules by folder and use period spacing. This makes it clearer where to look within the file structure of the codebase.

Organizing your code cleanly in the first place can also help this process. For instance, it might be a good idea to have one module that contains all the types for a particular section of the codebase. Then it will be obvious to users where the different type names are coming from. This can save you from needing to qualify all your type names, or having a huge list of types imported from a module.

Making the List Easy to Read

On a final note, you want to make it easy to read your import list. If you have a single qualified import in a list, line up all the other imports with it. This means spacing them out as if they also used the word qualified. This makes it so the actual names of the modules all line up.
Next, write your list in alphabetical order. This helps people find the right module in the list. Finally, also try to line up the as statements and specific function imports as best you can. This way, it's easy for people to see what different module prefixes you're using. This is another feature you can get from Haskell text editor plugins.

import qualified Data.ByteString (ByteString)
import qualified Data.Map        as M
import           Data.Maybe      as May

Summary

Organizing your imports is key to making your code accessible to other developers! Make it clear where functions come from. You can do this in two ways: you can either qualify your imports or specify which functions you use from a module. Separate library imports from your own code imports. This lets people know whether they need to look online or in the codebase for the module. Make the imports list itself easy to read by alphabetizing it and lining all the names up.

Stay tuned for next week! We'll take a tour through all the different string types in Haskell! This is another topic that ought to be simple but has many pitfalls, especially for beginners! We'll focus on the different ways to convert between them.

If you've never written a line of Haskell before, don't despair! You can check out our Getting Started Checklist. It will walk you through downloading Haskell and point you in the direction of some good learning resources.

If you've done a little bit of Haskell before but want some practice on the fundamentals, you should take a look at our Recursion Workbook. It has two chapters of content followed by 10 practice problems for you to work through!
https://hackernoon.com/4-steps-to-a-better-imports-list-in-haskell-43a3d868273c
The original code uses a timestamp as the start of the progress bar, and different points of time as the end of numerous progress bars (8 hours, 24 hours, 48 hours, one month, etc.). My idea was to have a separate progress bar for each point in time. This is the code I whipped up as a small example.

Code:
import time
import sys

start = time.time()
end = start + 60

def progressbar_disp_full():
    display_char = '#'
    for num in range(101):
        spacer = int(33 - int(num / 3)) * ' '
        filler = int(num / 3) * display_char
        #time.sleep(.1)
        sys.stdout.write("\r[{0}{1}] {2}%".format(filler, spacer, num))
    print()

progressbar_disp_full()

So because I haven't yet used a progress bar, and I feel like my brain is not working 100% lately: the range(101) gets the bar width from 0-100%, but how would you implement the start and end into the pbar? I have yet to see an example that hasn't just used time.sleep(1) to increment the pbar, so that might also be part of my problem.
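Not part of the original post, but one way to drive the bar from the start/end timestamps instead of a loop counter is to compute the percentage from the elapsed fraction of the interval; the sleep then only controls how often the bar is redrawn, not how far it advances. A sketch (the function names are my own):

```python
import sys
import time


def progress_percent(start, end, now):
    """Map a timestamp onto 0-100% of the [start, end] interval, clamped."""
    if end <= start:
        return 100
    fraction = (now - start) / (end - start)
    return int(max(0.0, min(1.0, fraction)) * 100)


def draw_bar(percent, width=33):
    """Redraw the bar in place for the given percentage."""
    filled = percent * width // 100
    bar = '#' * filled + ' ' * (width - filled)
    sys.stdout.write("\r[{0}] {1}%".format(bar, percent))
    sys.stdout.flush()


if __name__ == "__main__":
    start = time.time()
    end = start + 1          # a 1-second bar instead of 8h/24h/one month
    while time.time() < end:
        draw_bar(progress_percent(start, end, time.time()))
        time.sleep(0.1)      # redraw rate only; progress comes from the clock
    draw_bar(100)
    print()
```

With this shape, the multiple bars (8 hours, 24 hours, etc.) just become different `end` values fed to the same `progress_percent` function.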
http://www.python-forum.org/viewtopic.php?p=2048
Guava's EventBus provides a publish-subscribe event mechanism which allows objects to communicate with each other via the Observer Pattern. The EventBus shies away from the traditional "Event Listener" pattern seen in Java where an object implements a particular interface and then explicitly subscribes itself with another object.

In a recent project we chose to use the EventBus in conjunction with Guice (a dependency injection library) and have had a lot of success with it. Specifically, objects in our system only have to express what events they care about without being required to explicitly register with the EventBus or any other object.

Before we go over how we bootstrapped EventBus with Guice, I think it would be a useful exercise to review the traditional and "non-Guice" approaches to subscribing to events in order to illustrate the advantages of the EventBus/Guice partnership.

Traditional

As stated above, the traditional method requires an interface declaration, an explicit subscription, and knowledge of the object that is posting the particular event. Additionally, it forces the object that is posting the event to invent its own method of publishing the event.

Non-Guice EventBus

Using the EventBus eliminates the need for an interface or a reference to the object that is posting the event. However, the ApplicationEventListener is still required to reference a global EventBus and register itself with it. The coupling between the global EventBus and the ApplicationEventListener eliminates the possibility of providing a flexible way of swapping out one EventBus with another without having to refactor the ApplicationEventListener.

EventBus on Guice

We are almost there! The example below utilizes Guice to inject an instance of the EventBus into the objects that require it, giving us the flexibility to swap one EventBus with another without having to refactor code.
Unfortunately, it requires that we inject an EventBus instance into objects that are only going to register with it and never reference it again.

EventBus on a lot of Guice

Finally, we have arrived at our destination. Using Guice, we bind a TypeListener to every object that is created and ensure that it is registered with our default EventBus. Objects that subscribe to particular events are no longer required to explicitly subscribe with an EventBus and only need to express what kind of events they are interested in.

12 Comments

Nice writeup – I didn't see your example code for the generic EventBus usage w/o Guice — both of your snippets look like they initialize the EventBus in a Guice bind.

Good catch Chris. I must have, at some point, messed up my gist and overwrote it with the example following it. I've updated the example.

Good find — EventBus was specifically designed for your last technique. I wrote EventBus in 2006 down the hall from where Bob Lee was writing what became Guice, and each influenced the other's design. This is why EventBus.register doesn't throw if an object lacks @Subscribe annotations, for example. Glad you're enjoying it!

We are certainly happy with it and the pattern we've been able to pull out of it. I would have been pretty disappointed if register threw if an object lacked the @Subscribe annotation. I'm of the school that exceptions should indicate a real problem — as opposed to using them for flow control. So I thank you for not having register throw. :-) Thank you again for writing EventBus. The combination of EventBus with Guice (GuiceBus?) has been a real gem.

I want to use EventBus with Vaadin (Servlet). I have to be extra careful with scopes. How can I use the TypeListener example with session scopes? I cannot have a singleton EventBus inside my modules. Thanks for any hint … Great post, btw!

Great question. It should be possible to do this and I've provided an example that may do what you are asking for.
Because TypeListeners are not privy to the injection context it requires that we use a static proxy — which isn't particularly nice since global state should be frowned upon. Here is what the example does:

* Binds any requests for an EventBus to a provider that is scoped to a session
* Creates a public class that implements TypeListener which uses a holder object that is statically injected with the injector
* For every type that is encountered during injection it is registered with the event bus in that given session scope

This may be subject to a number of pitfalls and it should be thoroughly tested. :-) The example can be found here.

This is really slick. You're still on your own for unregistering as objects go out of scope or whatever, right? Although IIRC EventBus uses weak references so it's not a catastrophe if someone forgets.

Ray, EventBus does not use weak references — it's a feature we're considering though. (It was really designed with Crash-Only systems in mind, where unregistering isn't such an issue.)

Thanks for answering Cliff. As far as I knew it did not use weak references. For objects that have a limited scope it would require an explicit action (or event) to ensure the object unregisters.

Extremely nice article Justin, thanks a lot for sharing! Is the code shown in this post available for public domain? I mean, is there any issue if that code would be integrated in an OSS project, mentioning the original source in the NOTICE file? Many thanks in advance, all the best! -Simo

This code can be used freely for whatever purpose you see fit. Treat it as MIT-licensed source code.

thanks a lot Justin!
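The article's code gists did not survive extraction. The following is my own reconstruction of the "EventBus on a lot of Guice" technique described above — the class names beyond the Guava and Guice APIs are assumptions, and this is an untested sketch of how the pieces fit together, not the author's exact code:

```java
import com.google.common.eventbus.EventBus;
import com.google.common.eventbus.Subscribe;
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.TypeLiteral;
import com.google.inject.matcher.Matchers;
import com.google.inject.spi.InjectionListener;
import com.google.inject.spi.TypeEncounter;
import com.google.inject.spi.TypeListener;

// Module that registers every Guice-created object with a shared EventBus.
class EventBusModule extends AbstractModule {
    private final EventBus eventBus = new EventBus("default");

    @Override
    protected void configure() {
        bind(EventBus.class).toInstance(eventBus);
        bindListener(Matchers.any(), new TypeListener() {
            @Override
            public <I> void hear(TypeLiteral<I> type, TypeEncounter<I> encounter) {
                // Called once per injectee; EventBus silently ignores
                // objects that have no @Subscribe methods.
                encounter.register((InjectionListener<I>) eventBus::register);
            }
        });
    }
}

// A subscriber only declares what it cares about -- no explicit registration.
class ApplicationEventListener {
    @Subscribe
    public void on(String event) {
        System.out.println("received: " + event);
    }
}

class Demo {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new EventBusModule());
        injector.getInstance(ApplicationEventListener.class);
        injector.getInstance(EventBus.class).post("hello");
    }
}
```

Note the design trade-off the article mentions: because the listener registers every injected object, subscribers never unregister themselves, which is fine for application-lifetime objects but needs care for short-lived scopes (as discussed in the comments above).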
https://spin.atomicobject.com/2012/01/13/the-guava-eventbus-on-guice/
#include <linux/perf_event.h>
#include <linux/hw_breakpoint.h>

int perf_event_open(struct perf_event_attr *attr, pid_t pid, int cpu,
                    int group_fd, unsigned long flags);

Note: There is no glibc wrapper for this system call; see NOTES.

The pid and cpu arguments allow specifying which process and CPU to monitor:

pid == 0 and cpu == -1: measures the calling process/thread on any CPU.
pid == 0 and cpu >= 0: measures the calling process/thread only when running on the specified CPU.
pid > 0 and cpu == -1: measures the specified process/thread on any CPU.
pid > 0 and cpu >= 0: measures the specified process/thread only when running on the specified CPU.
pid == -1 and cpu >= 0: measures all processes/threads on the specified CPU (requires CAP_SYS_ADMIN or a perf_event_paranoid value of less than 1).
pid == -1 and cpu == -1: this setting is invalid and will return an error.

The perf_event_attr structure ends with the following fields (the long run of single-bit flag fields is abridged here):

struct perf_event_attr {
    __u32 type;               /* Type of event */
    __u32 size;               /* Size of attribute structure */
    __u64 config;             /* Type-specific configuration */
    union {
        __u64 sample_period;  /* Period of sampling */
        __u64 sample_freq;    /* Frequency of sampling */
    };
    __u64 sample_type;        /* Specifies values included in sample */
    __u64 read_format;        /* Specifies values returned in read */
    __u64 disabled : 1,       /* off by default */
          /* ... further single-bit flags ... */
          __reserved_1 : 39;
    /* ... */
    __u64 branch_sample_type; /* enum perf_branch_sample_type */
    __u64 sample_regs_user;   /* user regs to dump on samples */
    __u32 sample_stack_user;  /* size of stack to dump on samples */
    __u32 __reserved_2;       /* Align to u64 */
};

The fields of the perf_event_attr structure are described in more detail below. PERF_ATTR_SIZE_VER3 is 96, corresponding to the addition of sample_regs_user and sample_stack_user in Linux 3.7.

For PERF_TYPE_HW_CACHE events, the appropriate config value is calculated as:

config = (perf_hw_cache_id) |
         (perf_hw_cache_op_id << 8) |
         (perf_hw_cache_op_result_id << 16)

where perf_hw_cache_id is one of:

PERF_COUNT_HW_CACHE_L1D
PERF_COUNT_HW_CACHE_L1I
PERF_COUNT_HW_CACHE_LL
PERF_COUNT_HW_CACHE_DTLB
PERF_COUNT_HW_CACHE_ITLB
PERF_COUNT_HW_CACHE_BPU

and perf_hw_cache_op_id is one of:

PERF_COUNT_HW_CACHE_OP_READ
PERF_COUNT_HW_CACHE_OP_WRITE
PERF_COUNT_HW_CACHE_OP_PREFETCH

and perf_hw_cache_op_result_id is one of:

PERF_COUNT_HW_CACHE_RESULT_ACCESS
PERF_COUNT_HW_CACHE_RESULT_MISS

sample_freq can be used if you wish to use frequency rather than period. In this case, you set the freq flag. The kernel will adjust the sampling period to try and achieve the desired rate. The rate of adjustment is a timer tick.

See the branch_sample_type field for how to filter which branches are reported.

This new PERF_SAMPLE_IDENTIFIER setting makes the event stream always parsable by putting SAMPLE_ID in a fixed location, even though it means having duplicate SAMPLE_ID values in records.

When creating an event group, typically the group leader is initialized with disabled set to 1 and any child events are initialized with disabled set to 0. Despite disabled being 0, the child events will not start until the group leader is enabled.

Inherit does not work for some combinations of read_format values, such as PERF_FORMAT_GROUP.

Note that many unexpected situations may prevent events with the exclusive bit set from ever running.
This includes any users running a system-wide measurement as well as any kernel use of the performance counters (including the commonly enabled NMI Watchdog Timer interface).

wakeup_events only counts PERF_RECORD_SAMPLE record types. To receive a signal for every incoming PERF_RECORD type, set wakeup_watermark to 1.

The bp_type values can be combined via a bitwise or, but the combination of HW_BREAKPOINT_R or HW_BREAKPOINT_W with HW_BREAKPOINT_X is not allowed.

config2 is a further extension of the config1 field.

The first part of the branch_sample_type value is the privilege level, which is a combination of one of the following values. If the user does not set the privilege level explicitly, the kernel will use the event's privilege level. Event and branch privilege levels do not have to match.

PERF_SAMPLE_BRANCH_USER
PERF_SAMPLE_BRANCH_KERNEL
PERF_SAMPLE_BRANCH_HV

In addition to the privilege value, at least one or more of the following bits must be set:

PERF_SAMPLE_BRANCH_ANY
PERF_SAMPLE_BRANCH_ANY_CALL
PERF_SAMPLE_BRANCH_ANY_RETURN
PERF_SAMPLE_BRANCH_IND_CALL

If you attempt to read into a buffer that is not big enough to hold the data, ENOSPC is returned.

Here is the layout of the data returned by a read. If PERF_FORMAT_GROUP was specified, all events in the group are read at once:

struct read_format {
    u64 nr;            /* The number of events */
    u64 time_enabled;  /* if PERF_FORMAT_TOTAL_TIME_ENABLED */
    u64 time_running;  /* if PERF_FORMAT_TOTAL_TIME_RUNNING */
    struct {
        u64 value;     /* The value of the event */
        u64 id;        /* if PERF_FORMAT_ID */
    } values[nr];
};

If PERF_FORMAT_GROUP was not specified:

struct read_format {
    u64 value;         /* The value of the event */
    u64 time_enabled;  /* if PERF_FORMAT_TOTAL_TIME_ENABLED */
    u64 time_running;  /* if PERF_FORMAT_TOTAL_TIME_RUNNING */
    u64 id;            /* if PERF_FORMAT_ID */
};

The values read are as follows: value is the current count of the event, time_enabled and time_running allow scaling of multiplexed events, and id is a unique identifier for the event.

The metadata page at the start of the mmap'ed buffer ends with the ring-buffer pointers:

struct perf_event_mmap_page {
    /* ... */
    __u64 __reserved[120]; /* Pad to 1 k */
    __u64 data_head;       /* head in the data section */
    __u64 data_tail;       /* user-space written tail */
};

The following list describes the fields in the perf_event_mmap_page structure in more detail:

Starting with Linux 3.12 these are renamed to cap_bit0 and you should use the new cap_user_time and cap_user_rdpmc fields instead. If not set, it indicates an older kernel where cap_usr_time and cap_usr_rdpmc map to the same bit and thus both features should be used with caution.

If cap_usr_time is set, the time_shift, time_mult and time_zero fields can be used to convert a hardware cycle count (cyc) to a timestamp:

quot = cyc >> time_shift;
rem  = cyc & ((1 << time_shift) - 1);
timestamp = time_zero + quot * time_mult + ((rem * time_mult) >> time_shift);

On SMP-capable platforms, after reading the data_head value, user space should issue an rmb().
For ease of reading, the fields with shorter descriptions are presented first.

The CPU mode can be determined from this value by masking with PERF_RECORD_MISC_CPUMODE_MASK and looking for one of the following (note these are not bit masks, only one can be set at a time):

PERF_RECORD_MISC_CPUMODE_UNKNOWN
PERF_RECORD_MISC_KERNEL
PERF_RECORD_MISC_USER
PERF_RECORD_MISC_HYPERVISOR
PERF_RECORD_MISC_GUEST_KERNEL
PERF_RECORD_MISC_GUEST_USER

PERF_RECORD_MMAP:

struct {
    struct perf_event_header header;
    u32 pid, tid;
    u64 addr;
    u64 len;
    u64 pgoff;
    char filename[];
};

PERF_RECORD_LOST:

struct {
    struct perf_event_header header;
    u64 id;
    u64 lost;
    struct sample_id sample_id;
};

PERF_RECORD_COMM:

struct {
    struct perf_event_header header;
    u32 pid;
    u32 tid;
    char comm[];
    struct sample_id sample_id;
};

PERF_RECORD_EXIT:

struct {
    struct perf_event_header header;
    u32 pid, ppid;
    u32 tid, ptid;
    u64 time;
    struct sample_id sample_id;
};

PERF_RECORD_THROTTLE / PERF_RECORD_UNTHROTTLE:

struct {
    struct perf_event_header header;
    u64 time;
    u64 id;
    u64 stream_id;
    struct sample_id sample_id;
};

PERF_RECORD_FORK:

struct {
    struct perf_event_header header;
    u32 pid, ppid;
    u32 tid, ptid;
    u64 time;
    struct sample_id sample_id;
};

PERF_RECORD_READ:

struct {
    struct perf_event_header header;
    u32 pid, tid;
    struct read_format values;
    struct sample_id sample_id;
};

PERF_RECORD_SAMPLE (abridged):

struct {
    struct perf_event_header header;
    /* ... */
    u64 weight;      /* if PERF_SAMPLE_WEIGHT */
    u64 data_src;    /* if PERF_SAMPLE_DATA_SRC */
    u64 transaction; /* if PERF_SAMPLE_TRANSACTION */
};

This RAW record data is opaque with respect to the ABI. The ABI doesn't make any promises with respect to the stability of its content; it may vary depending on event, hardware, and kernel version.

The branch entries are ordered from most to least recent, so the first entry has the most recent branch. Support for mispred and predicted is optional; if not supported, both values will be 0. The type of branches recorded is specified by the branch_sample_type field.

Since at least as early as Linux 3.2, a signal is provided for every overflow, even if wakeup_events is not set.

Support for this can be detected with the cap_usr_rdpmc field in the mmap page; documentation on how to calculate event values can be found in that section.
Various ioctls act on perf_event_open() file descriptors:

PERF_EVENT_IOC_ENABLE: enables the event. If the PERF_IOC_FLAG_GROUP bit is set in the ioctl argument, then all events in a group are enabled, even if the event specified is not the group leader (but see BUGS).

PERF_EVENT_IOC_RESET: resets the event count to zero. If the PERF_IOC_FLAG_GROUP bit is set in the ioctl argument, then all events in a group are reset, even if the event specified is not the group leader (but see BUGS).

PERF_EVENT_IOC_SET_OUTPUT: the argument specifies the desired file descriptor, or -1 if output should be ignored.

PERF_EVENT_IOC_SET_FILTER: the argument is a pointer to the desired ftrace filter.

PERF_EVENT_IOC_ID: the argument is a pointer to a 64-bit unsigned integer to hold the result.

/proc/sys/kernel/perf_event_paranoid: this file can be set to restrict access to the performance counters.

/proc/sys/kernel/perf_event_max_sample_rate: this sets the maximum sample rate. Setting this too high can allow users to sample at a rate that impacts overall machine performance and potentially lock up the machine. The default value is 100000 (samples per second).

/proc/sys/kernel/perf_event_mlock_kb: maximum number of pages an unprivileged user can mlock(2). The default is 516 (kB).

EXAMPLE

The following is a short example that measures the total number of instructions a call to printf(3) takes:

#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/perf_event.h>
#include <asm/unistd.h>

static long
perf_event_open(struct perf_event_attr *hw_event, pid_t pid,
                int cpu, int group_fd, unsigned long flags)
{
    int ret;

    ret = syscall(__NR_perf_event_open, hw_event, pid, cpu,
                  group_fd, flags);
    return ret;
}

int
main(int argc, char **argv)
{
    struct perf_event_attr pe;
    long long count;
    int fd;

    memset(&pe, 0, sizeof(struct perf_event_attr));
    pe.type = PERF_TYPE_HARDWARE;
    pe.size = sizeof(struct perf_event_attr);
    pe.config = PERF_COUNT_HW_INSTRUCTIONS;
    pe.disabled = 1;
    pe.exclude_kernel = 1;
    pe.exclude_hv = 1;

    fd = perf_event_open(&pe, 0, -1, -1, 0);
    if (fd == -1) {
        fprintf(stderr, "Error opening leader %llx\n", pe.config);
        exit(EXIT_FAILURE);
    }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    printf("Measuring instruction count for this printf\n");

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    read(fd, &count, sizeof(long long));

    printf("Used %lld instructions\n", count);

    close(fd);
}
https://www.commandlinux.com/man-page/man2/perf_event_open.2.html
Creating a Reader

Readers are the components that send you a stream without the XML processing that normally happens in a pipeline. Cocoon already comes with some readers out of the box, such as your FileReader which serializes files from your webapp context. What if you need something that doesn't come from the file system? What if you need to create content on the fly but the XML processing gets in the way? That's where the Reader comes into play. Even though there is a DatabaseReader in Cocoon's SQL block, we are going to go through the process of creating a cacheable database reader here. In the sitemap we use the reader we are going to develop like this:

<map:match pattern="attachment/*">
  <map:read type="attachment" src="{1}"/>
</map:match>

The sitemap snippet above matches anything in the attachment path followed by the ID for the attachment. It then passes the ID into the src attribute for our reader. Why not include the nice neat little extension for the file after the ID? We actually have a very good reason: Microsoft. If you recall from the SitemapOutputComponent Contracts page, Internet Explorer likes to pretend it's smarter than you are. If you have a file extension on the URL that IE knows, it will ignore the mime-type settings that you provide. However, if you don't provide any clues then IE has to fall back to respecting the standard.

A Reader fills two of the core contracts with the Sitemap. It is both a SitemapModelComponent and a SitemapOutputComponent. You can make it a CacheableProcessingComponent as well, which will help reduce the load on your database by avoiding the need to retrieve your attachments all the time. In fact, unless you have a good reason not to, you should always make your components cacheable just for the flexibility in deployment later. I recommend you read the articles on the core contracts to understand where to find the resources you need.

The sitemap will have the Reader fulfill all its core contracts first. It will then query the reader using the getLastModified() method.
The results of that method will be added to the response header for browser caching purposes--although it is only done for the CachingPipeline. Lastly, the sitemap will call the generate() method to create and send the results back to the client. It's a one-stop shop, and because the Reader is both a SitemapModelComponent and a SitemapOutputComponent it is the beginning and the end of your pipeline. Considering the order in which the processing happens, the sooner you can send a response to the sitemap because of a failure, the better.

The ServiceableReader provides a good basis for building our database-bound AttachmentReader. The ServiceableReader implements the Recyclable, LogEnabled and Serviceable interfaces and captures some of the information you will need for you. We will need these three interfaces to get a reference to the DataSourceComponent and our Logger, and to clean up our request-based artifacts. You might want to implement the Parameterizable or Configurable interfaces if you want to decide which particular database we will be hitting in your own code. For now, we are going to hard-code the information.
The Skeleton

Our skeleton code will look like this:

import org.apache.avalon.excalibur.datasource.DataSourceComponent;
import org.apache.avalon.framework.activity.Disposable;
import org.apache.avalon.framework.parameters.Parameters;
import org.apache.avalon.framework.service.ServiceException;
import org.apache.avalon.framework.service.ServiceManager;
import org.apache.avalon.framework.service.ServiceSelector;
import org.apache.cocoon.reading.ServiceableReader;
import org.apache.excalibur.source.SourceValidity;
import org.apache.excalibur.source.impl.validity.TimeStampValidity;
import org.xml.sax.SAXException;

import java.io.IOException;
import java.io.InputStream;
import java.io.Serializable;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Map;

public class AttachmentReader extends ServiceableReader
        implements CacheableProcessingComponent, Disposable {

    private static final int BUFFER = 1024;

    public static String DB_RESOURCE_NAME = "ourdb"; // warning: static database table name

    // ... skip many methods covered later

    public void setup(SourceResolver sourceResolver, Map model, String src, Parameters params)
            throws IOException, ProcessingException, SAXException {
        // ... skip setup code for now
    }

    public void generate() throws IOException, SAXException, ProcessingException {
        // ... skip generate code for now
    }
}

If you'll notice, we added the Disposable interface to the contract as well. This is so that we can be good citizens and release our components when we are done with them. Anything pooled needs to be released.

Getting a Reference to Our DataSourceComponent

While it's probably safe to treat your DataSourceComponent and your ServiceManager as singletons in the system, we still want to be responsible. First things first, let's get our DataSourceComponent and hold on to it as long as this Reader is around.
To do this we will need to add two more class fields:

private DataSourceComponent datasource;
private ServiceSelector dbselector;

Now we are going to override the service() method and implement the dispose() method to get components and clean up after ourselves. First let's start with getting the DataSourceComponent. Because Cocoon is configured to deal with multiple databases, you will need to use a ServiceSelector to choose the DataSourceComponent corresponding to your desired database.

@Override
public void service(ServiceManager services) throws ServiceException {
    super.service(services);
    dbselector = (ServiceSelector) manager.lookup(DataSourceComponent.ROLE + "Selector");
    datasource = (DataSourceComponent) dbselector.select(DB_RESOURCE_NAME);
}

We ensured that we called the superclass's service() method so that we didn't upset the expectations of anyone wanting to extend our class. Keeping the user's expectations in mind always helps to produce a good product--and in this case the user is a developer. Next, we retrieved the selector for the DataSourceComponent and stored it in the class field we created earlier. Then we did the same for the actual DataSourceComponent itself. Now we have access to the component when we need it.

We didn't get an actual connection yet because the connections are pooled. If we held onto a connection for the life of the component then we would run out, and the application would come to a screeching halt waiting for a connection to become available.

Since we are still dealing with managing the component itself, let's do the cleanup code next. The Avalon framework uses the Disposable.dispose() callback method to let the component know when it is safe to release all the components it is using and perform other cleanup.
public void dispose() { dbselector.release(datasource); manager.release(dbselector); datasource = null; manager = null; } While setting the fields to null might not be necessary with modern day garbage collectors, it still doesn't hurt. By releasing those components we ensure that Cocoon can shut down nicely and safely when it is time. Make sure PDFs Work Since we expect to have PDF documents in our database alongside pictures and other types of documents, we need to make sure they display properly. Since the bug in the IE Acrobat Reader plugin wasn't fixed until version 7 we need to make sure the content length is returned. There is some overhead with this as Cocoon has to cache the results to get the content length, but because we are going to cache it anyway there is little difference on when it gets sent to the cache. This is how we do it: @Override public boolean shouldSetContentLength() { return true; } Setting up for the Read (Cache directives, finding the resource, etc.) In the setup() method we need to ask the database for the meta-information about our attachment. You may be curious why we need to do it in the setup as opposed to the generate phase of the Reader. The answer is simply this: the sitemap has already asked the Reader for all caching related information and it is too late to do it then. We'll assume the attachments table is really simple and it has an ID, a mimeType, a timeStamp, and the attachment content. We need to get our component and query it. You can never rely on your connection pooling code to clean up your open statements and resultsets, so we will have to do that ourselves. 
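The same cleanup discipline -- release the resource in a finally block so it is closed even when the work fails -- can be sketched in a language-neutral way. The Resource class below is hypothetical, standing in for a JDBC Statement or ResultSet that the pool will not reliably close for you:

```javascript
// Hypothetical resource with an explicit close(); a stand-in for a JDBC
// Statement or ResultSet.
class Resource {
  constructor(name) {
    this.name = name;
    this.closed = false;
  }
  close() { this.closed = true; }
}

// Run `work` with a resource, guaranteeing close() even when work throws --
// the same shape as the finally block in the Reader's setup().
function withResource(resource, work) {
  try {
    return work(resource);
  } finally {
    resource.close();
  }
}

const r = new Resource('resultset');
try {
  withResource(r, () => { throw new Error('query failed'); });
} catch (e) { /* the error still propagates to the caller */ }
console.log(r.closed); // true -- cleaned up despite the error
```

The point is that the cleanup does not depend on the happy path: whether the work returns normally or throws, the resource is closed exactly once.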
Let's add some more class fields to support the cache directives and cache the blob reference:

private TimeStampValidity m_validity;
private InputStream m_content;
private String m_mimeType;

Since our AttachmentReader is pooled and recyclable, let's make sure we clean these values up when the AttachmentReader is returned to the pool:

@Override
public void recycle() {
    super.recycle();
    if (null != m_content) {
        try { m_content.close(); } catch (Exception e) { /* ignore */ }
    }
    m_content = null;
    m_validity = null;
    m_mimeType = null;
}

The next code snippet is the content of the setup() method from the code skeleton above. Let's break it down to understand what's going on. First we call the superclass's version of the method so that all expectations of the class hold true:

super.setup(sourceResolver, objectModel, src, params);

Next we set up the holders for the connection, statement and resultset so that we can clean them up later.

Connection con = null;
ResultSet rs = null;
Statement stm = null;

Now we have the meat of the method. We get a connection from the DataSourceComponent, and for good measure we set AutoCommit to false. You can adjust this to your taste, but for a read we really don't need transactions. There is some standard query code next, and the part I want to point out is how we deal with the resultset. If you notice, we have two courses of action depending on whether the record was found or not. If we did find the record we set the mimeType, validity, and content fields for the class. Otherwise, we throw a ResourceNotFoundException. That exception is how Cocoon knows to differentiate between a 404 (HTTP Resource Not Found) and a 500 (HTTP Server Error) error.
try {
    final String sql = "SELECT mimeType, sourceDate, attachmentData FROM attachments"
        + " WHERE attachmentId = '" + source + "'";
    con = datasource.getConnection();
    con.setAutoCommit(false);
    stm = con.createStatement();
    rs = stm.executeQuery(sql);
    if (rs.next()) {
        m_mimeType = rs.getString(1);
        m_validity = new TimeStampValidity(rs.getTimestamp(2).getTime());
        m_content = rs.getBlob(3).getBinaryStream();
    } else {
        throw new ResourceNotFoundException("Could not find the record");
    }
}

If for some reason we catch a SQLException from the database, it is certainly not expected, so we rethrow it wrapped in a general ProcessingException.

catch (SQLException se) {
    throw new ProcessingException(se);
}

Lastly we clean up our database objects in the finally block. Without that we run into database server memory leaks as the database keeps resources open for queries on the server side. Even the big name databases are sensitive to this. The JDBCDataSourceComponent connection pooling code does cache the resultsets and statements to make sure they are closed when you close the connection, but you might want to use a generic J2EEDataSourceComponent which may or may not do that for you. Never make assumptions and always clean up after yourself.

finally {
    if (rs != null) try { rs.close(); } catch (SQLException se) { /* ignore */ }
    if (stm != null) try { stm.close(); } catch (SQLException se) { /* ignore */ }
    if (con != null) try { con.close(); } catch (SQLException se) { /* ignore */ }
}

The setup is done. Now we just need to let the sitemap know what we found. The first thing is to let the sitemap know what kind of attachment we are sending. As you recall, we stored that in the class field "m_mimeType", and the getMimeType() method from SitemapOutputComponent informs the sitemap.

@Override
public String getMimeType() {
    return m_mimeType;
}

Now we want to let the sitemap know the last modified timestamp for the attachment.
Since we stored this information in the "m_validity" field we will send the information from that field. There is a problem though: what if the resource was not found? We might get a NullPointerException if the m_validity field was never set. Even though the Sitemap shouldn't call this method in the event that we couldn't find a resource we still don't want to take any chances. A properly guarded getLastModified() method would be: @Override public long getLastModified() { return (null == m_validity) ? -1L : m_validity.getTimeStamp(); } The Caching Clues Lastly we want to provide the caching information to the CachingPipeline when needed. Since our source is an ID (from <map:read) it is probably the best cache key for our component. Let's just use it: public Serializable getKey() { return source; } We stored the TimeStampValidity object when we set up the attachment information, so let's just give that back. Alternatively you could use an ExpiresValidity to completely avoid hits to the database altogether--but for now this is good enough. public SourceValidity getValidity() { return m_validity; } Sending the Payload All this work was done just so we could send the results back to the client, and now we get to see the code that does it. Don't try to read the entire attachment into memory and then send it on to the user. It isn't necessary and it kills your scalability. Instead grab little chunks at a time and send it on to the output stream as you get it. You'll find that it feels faster on the client end as well. The next code snippet is the contents of the generate() method from the class skeleton above. All we are doing is pulling a little data at a time from the database and sending it directly to the user. Wait a minute! I hear you shout. What about the connection we just closed in the setup method? Remember that the connection isn't closed until the pool retires it. 
You will never practically need to worry about the system severing your connection to the database mid-stream. Try it. Throw a load test at the system just to make sure I'm not smoking some controlled substances. Nevertheless, without much further ado, the code:

public void generate() throws IOException, SAXException, ProcessingException {
    try {
        byte[] buffer = new byte[BUFFER];
        int len = 0;
        while ((len = m_content.read(buffer)) >= 0) {
            out.write(buffer, 0, len);
        }
        out.flush();
    } finally {
        out.close();
        m_content.close();
        m_content = null;
    }
}

We close the stream in the finally clause. If there are any exceptions thrown, they are propagated up without rewrapping them. You may wonder why we close the m_content stream both here and in the recycle() method above. The answer is assurance. The generate() method is only called when the resource exists, so when it is skipped the recycle() method is what ensures the content stream gets closed. Additionally, most database drivers tend to wait on all open streams to be closed manually before the connection with the server is severed. Of course there are timeout limits as well, but we don't want to rely on them if we can avoid it. By including the call to close the attachment data stream in the generate() method, we shorten the amount of time that resources might be tied up with the stream.

We're done. It seems like we did a lot here, and that's because we did. If we simply did direct generation of the data the class would have been simpler. By incorporating a database into the mix we've covered most of the things you might be curious about: how to access other components from your component, how to make sure our component is cacheable, and some real gotchas that you do want to avoid. The example we have here will be very performant, and is not too different from Cocoon's DatabaseReader. Of course, by doing it ourselves we get to learn a bit more about how things work inside of Cocoon.
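The chunked copy loop in generate() can be exercised on its own. This sketch uses in-memory buffers instead of Cocoon's streams, but the shape of the loop is the same:

```javascript
const BUFFER = 1024;

// Copy `source` (a Buffer) to `sink` in BUFFER-sized chunks instead of
// loading the whole attachment into memory at once -- the same pattern as
// the read/write loop in generate().
function streamCopy(source, sink) {
  let offset = 0;
  while (offset < source.length) {
    const len = Math.min(BUFFER, source.length - offset);
    sink.push(source.slice(offset, offset + len)); // out.write(buffer, 0, len)
    offset += len;
  }
}

const data = Buffer.alloc(3000, 7); // pretend 3000-byte attachment
const chunks = [];
streamCopy(data, chunks);
console.log(chunks.length);                      // 3 chunks: 1024 + 1024 + 952
console.log(Buffer.concat(chunks).equals(data)); // true -- nothing lost
```

Peak memory stays at one buffer's worth regardless of attachment size, which is why this scales where a read-it-all-then-write approach does not.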
http://cocoon.apache.org/2.2/core-modules/core/2.2/681_1_1.html
CC-MAIN-2015-35
refinedweb
2,569
56.35
Opened 4 years ago
Closed 3 years ago

#18907 closed Bug (fixed)

Documentation regarding population of backrefs is incorrect

Description

It is stated in the third paragraph that "the first time any model is loaded" Django iterates INSTALLED_APPS and populates backrefs. Either this is plain wrong, or the text needs clarification.

- I create a minimal two-app project where app2.Model2 has a reference to app1.Model1 and both app1 and app2 are in INSTALLED_APPS
- I create a Model1: Model1.objects.create()
- I write and run a script:

from app1.models import Model1
m = Model1.objects.get().model2s.all()

- and get

Traceback (most recent call last):
  File "f.py", line 8, in <module>
    Model1.objects.get().model2s.all()
AttributeError: 'Model1' object has no attribute 'model2s'

If I add import app2.models, it'll work.

Change History (9)

comment:1 Changed 4 years ago by

comment:2 Changed 4 years ago by

When running Django in a larger project, with something like mod_wsgi or gunicorn instead of runserver, and with Celery or other jobs doing background processing, models are also not auto-loaded, unless, as you say, they're loaded as a side-effect of other operations. Changing the text to say "under normal operation", therefore, is deeply misleading. I hesitated whether to submit this as a documentation issue or as a core bug. It might be argued that population of backrefs is a guarantee that Django actually should make. From my perspective, that would probably be the preferred solution.

comment:3 Changed 4 years ago by

comment:4 Changed 4 years ago by

comment:5 Changed 3 years ago by

@augustin, has this behavior been rectified with the app loading changes in 1.7? If not, I wonder if you could briefly describe the current behavior so I could write up a patch (assuming you don't want to do so yourself).

comment:6 Changed 3 years ago by

comment:7 Changed 3 years ago by

Tim, here's a proposal to replace the third paragraph of "How are the backward relationships possible?".
If you can improve the wording and commit it, that's perfect. Thank you!

The answer lies in the app registry. When Django starts, it imports each application listed in :setting:`INSTALLED_APPS`, and then the ``models`` module inside each application. Whenever a new model class is created, Django adds backward-relationships to related models. If the related models haven't been imported yet, Django keeps track of the relationships and adds them when the related models eventually are imported.

For this reason, it's particularly important that all the models you're using be defined in applications listed in :setting:`INSTALLED_APPS`. Otherwise, backward relations may not work properly.

I know, from time spent with pdb searching for import order problems, that the documentation is wrong. While there can be things in a models.py that trigger all installed apps to load, such as the use of non-lazy internationalization, many models.py files don't trigger general loading. In production, the usual place that the installed apps all get loaded is admin.autodiscover(), called from the ROOT_URL_CONF, which gets imported at the first request. Under runserver, it happens sooner, during the model "validation" that runserver does explicitly. shell and some of the other management commands are not going to do it. This might almost be a white lie, if the tutorial didn't encourage the novice to play with models in the shell. Perhaps change "The first time any model is loaded," to "Under normal operation", and append a comment, maybe in parentheses, that "There may be times, such as when using the manage.py shell, when you must import models from more than the app of interest in order to have the reverse relationships connected."
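The mechanism the proposed paragraph describes -- record a relation now, attach the reverse accessor when the related model is finally imported -- can be sketched abstractly. This is a toy registry for illustration, not Django's actual app-loading code:

```javascript
const registry = new Map(); // model name -> model "class" (a plain object here)
const pending = new Map();  // target name -> reverse accessors waiting to attach

// Declare "fromName has a ForeignKey to toName": the reverse accessor
// (e.g. model1.model2s) is added immediately if the target is already
// registered, otherwise remembered until the target loads.
function declareForeignKey(fromName, toName) {
  const attach = (target) => { target[fromName.toLowerCase() + 's'] = () => []; };
  if (registry.has(toName)) attach(registry.get(toName));
  else pending.set(toName, [...(pending.get(toName) || []), attach]);
}

// "Importing" a model registers it and flushes its pending reverse accessors.
function loadModel(name) {
  const model = {};
  registry.set(name, model);
  (pending.get(name) || []).forEach((attach) => attach(model));
  pending.delete(name);
  return model;
}

const model1 = loadModel('Model1');
declareForeignKey('Model2', 'Model1'); // app2.models imported after Model1
console.log(typeof model1.model2s);    // 'function' -- backref attached at once

declareForeignKey('Comment', 'Post');  // Post not loaded yet -> kept pending
const post = loadModel('Post');        // loading Post flushes the pending backref
console.log(typeof post.comments);     // 'function'
```

The failure mode in the ticket falls straight out of this model: if the declaring module is never imported, declareForeignKey never runs, and the accessor simply never appears on the target.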
https://code.djangoproject.com/ticket/18907
Translating web applications into multiple languages is a common requirement. In the past, creating multilingual applications was not an easy task, but recently (thanks to the people behind the Next.js framework and Lingui.js library) this task has gotten a lot easier. In this post, I’m going to show you how to build internationalized applications with the previously mentioned tools. We will create a sample application that will support static rendering and on-demand language switching. You can check out the demo and fork the repository here. Setup First, we need to create Next.js application with TypeScript. Enter the following into the terminal: npx create-next-app --ts Next, we need to install all required modules: npm install --save-dev @lingui/cli @lingui/loader @lingui/macro babel-plugin-macros @babel/core npm install --save @lingui/react make-plural Internationalized routing in Next.js One of the fundamental aspects of internationalizing a Next.js application is internationalized routing functionality, so users with different language preferences can land on different pages, and be able to link to them. Additionally, with proper link tags in the head of the site, you can tell Google where to find all other language versions of the page for proper indexing. Next.js supports two types of internationalized routing scenarios. The first is subpath routing, where the first subpath ({language}/blog) marks the language that is going to be used. For example, or. In the first example, users will use the English version of the application ( en) and in the second, users will use the Spanish version ( es). The second is domain routing. With domain routing, you can have multiple domains for the same app, and each domain will serve a different language. For example, en.myapp.com/tasks or es.myapp.com/tasks. 
How Next.js detects the user’s language When a user visits the application’s root or index page, Next.js will try to automatically detect which location the user prefers based on the Accept-Language header. If the location for the language is set (via a Next.js configuration file), the user will be redirected to that route. If the location is not supported, the user will be served the default language route. The framework can also use a cookie to determine the user’s language. If the NEXT_LOCALE cookie is present in the user’s browser, the framework will use that value to determine which language route to serve to the user, and the Accept-Language header will be ignored. Configuring our sample Next.js app We are going to have three languages for our demo: default English ( en), Spanish ( es), and my native language Serbian ( sr). Because the default language will be English, any other unsupported language will default to that. We are also going to use subpath routing to deliver the pages, like so: //next.config.js module.exports = { i18n: { locales: ['en', 'sr', 'es', 'pseudo'], defaultLocale: 'en' } } In this code block, locales is all the languages we want to support and defaultLocale is the default language. You will note that, in the configuration, there is also a fourth language: pseudo. We will discuss more of that later. As you can see, this Next.js configuration is simple, because the framework is used only for routing and nothing else. How you are going to translate your application is up to you. Configuring Lingui.js For actual translations, we are going to use Lingui.js. 
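Before moving on to Lingui.js, the Accept-Language negotiation described above can be approximated with a small stand-alone helper. This is a simplified sketch -- it ignores quality-value weighting and region matching, and the SUPPORTED list is just this article's locales -- not the framework's actual implementation:

```javascript
const SUPPORTED = ['en', 'sr', 'es'];
const DEFAULT_LOCALE = 'en';

// Pick the first language from the Accept-Language header that we support,
// falling back to the default locale -- roughly what Next.js does when a
// user hits the root page.
function detectLocale(acceptLanguage) {
  const candidates = (acceptLanguage || '')
    .split(',')
    .map((part) => part.split(';')[0].trim().split('-')[0].toLowerCase())
    .filter(Boolean);
  for (const lang of candidates) {
    if (SUPPORTED.includes(lang)) return lang;
  }
  return DEFAULT_LOCALE;
}

console.log(detectLocale('es-ES,es;q=0.9,en;q=0.8')); // 'es'
console.log(detectLocale('de-DE,de;q=0.9'));          // 'en' (unsupported -> default)
console.log(detectLocale(''));                        // 'en'
```

The NEXT_LOCALE cookie check would simply short-circuit this: if the cookie names a supported locale, use it and skip the header entirely.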
Let’s set up the configuration file: // lingui.config.js module.exports = { locales: ['en', 'sr', 'es', 'pseudo'], pseudoLocale: 'pseudo', sourceLocale: 'en', fallbackLocales: { default: 'en' }, catalogs: [ { path: 'src/translations/locales/{locale}/messages', include: ['src/pages', 'src/components'] } ], format: 'po' } The Lingui.js configuration is more complicated than Next.js, so let’s go over each segment one by one. locales and pseudoLocale are all of the locations we are going to generate, and which locations will be used as pseudo locations, respectively. sourceLocale is followed by en because default strings will be in English when translation files are generated. That means that if you don’t translate a certain string, it will be left with the default, or source, language. The fallbackLocales property has nothing to do with the Next.js default locale, it just means that if you try to load a language file that doesn’t exist, Lingui.js will fallback to the default language (English, in our case). catalogs:path is the path where the generated files will be saved. catalogs:include instructs Lingui.js where to look for files that need translating. In our case, this is the src/pages directory, and all of our React components that are located in src/components. format is the format for the generated files. We are using the po format, which is recommended, but there are other formats like json. How Lingui.js works with React There are two ways we can use Lingui.js with React. We can use regular React components provided by the library, or we can use Babel macros, also provided by the library. Linqui.js has special React components and Babel macros. Macros transform your code before it is processed by Babel to generate final JavaScript code. 
If you are wondering about the difference between the two, take a look at these examples:

// Macro
import { Trans } from '@lingui/macro'

function Hello({ name }: { name: string }) {
  return <Trans>Hello {name}</Trans>
}

// Regular React component
import { Trans } from '@lingui/react'

function Hello({ name }: { name: string }) {
  return <Trans id="Hello {name}" values={{ name }} />
}

As you can see, the code between the macro and the generated React component is very similar. Macros enable us to omit the id property and write cleaner components.

Now let's set up translation for one of the components:

// src/components/AboutText.jsx
import { Trans } from '@lingui/macro'

function AboutText() {
  return (
    <p>
      <Trans id="next-explanation">My text to be translated</Trans>
    </p>
  )
}

After we are done with the components, the next step is to extract the text that needs to be translated from our source code into external files called message catalogs. Message catalogs are the files that you give to your translators; one file is generated per language. To extract all the messages, we are going to use Lingui.js via the command line and run:

npm run lingui extract

The output should look like the following:

Catalog statistics:
┌──────────┬─────────────┬─────────┐
│ Language │ Total count │ Missing │
├──────────┼─────────────┼─────────┤
│ en       │ 1           │ 0       │
│ es       │ 1           │ 1       │
│ sr       │ 1           │ 1       │
└──────────┴─────────────┴─────────┘

Total count is the total number of messages that need to be translated, and in our code we only have one message from AboutText.jsx (ID: next-explanation). Missing is the number of messages that still need to be translated. Because English is the default language, there are no missing messages for the en version. However, we are missing translations for Serbian and Spanish.
The contents of the en generated file will be something like this:

#: src/components/AboutText.jsx:5
msgid "next-explanation"
msgstr "My text to be translated"

And the contents of the es file will be the following:

#: src/components/AboutText.jsx:5
msgid "next-explanation"
msgstr ""

You will notice that the msgstr is empty. This is where we need to add our translation. In case we leave the field empty, at runtime all components that refer to this msgid will be populated with the string from the default language file. Let's translate the Spanish file:

#: src/components/AboutText.jsx:5
msgid "next-explanation"
msgstr "Mi texto para ser traducido"

Now, if we run the extract command again, this will be the output:

Catalog statistics:
┌──────────┬─────────────┬─────────┐
│ Language │ Total count │ Missing │
├──────────┼─────────────┼─────────┤
│ es       │ 1           │ 0       │
└──────────┴─────────────┴─────────┘

Notice how the Missing field for the Spanish language is 0, which means that we have translated all the missing strings in the Spanish file. This is the gist of translating; now let's start integrating Lingui.js with Next.js.

Compiling messages

For the application to consume the files with translations (.po files), they need to be compiled to JavaScript. For that, we need to use the lingui compile CLI command. After the command finishes running, you will notice that inside the src/translations/locales directory there are new files for each locale (es.js, en.js, and sr.js):

├── en
│   ├── messages.js
│   └── messages.po
├── es
│   ├── messages.js
│   └── messages.po
└── sr
    ├── messages.js
    └── messages.po

These are the files that are going to be loaded into the application. Treat these files as build artifacts and do not manage them with source control; only .po files should be added to source control.

Working with plurals

One other thing that will certainly come up is working with singular or plural words (in the demo, you can test that with the Developers dropdown element).
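Before looking at Lingui.js's API for this, the core idea -- mapping a number to a plural category for a given language -- can be seen with the platform's built-in Intl.PluralRules. The developersMessage helper and the # placeholder convention here are just an illustration, not Lingui.js itself:

```javascript
// Map a developer count to the right message using the language's plural rules.
function developersMessage(count, forms, locale = 'en') {
  const category = new Intl.PluralRules(locale).select(count); // 'one', 'other', ...
  const template = forms[category] || forms.other;
  return template.replace('#', String(count));
}

const en = { one: 'We have # Developer', other: 'We have # Developers' };
console.log(developersMessage(1, en)); // 'We have 1 Developer'
console.log(developersMessage(5, en)); // 'We have 5 Developers'
```

English only distinguishes 'one' and 'other', but languages like Serbian also use 'few', which is exactly why a per-language plural-rules table is needed.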
Lingui.js makes this very easy:

import { Plural } from '@lingui/macro'

function Developers({ developerCount }) {
  return (
    <p>
      <Plural
        value={developerCount}
        one="We have # Developer"
        other="We have # Developers"
      />
    </p>
  )
}

When the developerCount value is 1, the Plural component will render "We have 1 Developer." You can read more about plurals in the Lingui.js documentation. Now, different languages have different rules for pluralization. To accommodate those rules we are later going to use an additional package called make-plural.

Next.js and Lingui.js integration

Now comes the hardest part: integrating Lingui.js with the Next.js framework. First, we are going to initialize Lingui.js:

// utils.ts
import type { I18n } from '@lingui/core'
import { en, es, sr } from 'make-plural/plurals'

// announce which locales we are going to use and connect them to the appropriate plural rules
export function initTranslation(i18n: I18n): void {
  i18n.loadLocaleData({
    en: { plurals: en },
    sr: { plurals: sr },
    es: { plurals: es },
    pseudo: { plurals: en }
  })
}

Because initialization should only be done once for the whole app, we are going to call the function from the Next.js _app component, which by design wraps all other components:

// _app.tsx
import { i18n } from '@lingui/core'
import { initTranslation } from '../utils'

// initialization
initTranslation(i18n)

function MyApp({ Component, pageProps }) {
  // code omitted
}

After the Lingui.js code is initialized, we need to load and activate the appropriate language.
Again, we are going to use _app for that, like so:

// _app.tsx
function MyApp({ Component, pageProps }) {
  const router = useRouter()
  const locale = router.locale || router.defaultLocale
  const firstRender = useRef(true)

  if (pageProps.translation && firstRender.current) {
    // load the translations for the locale
    i18n.load(locale, pageProps.translation)
    i18n.activate(locale)
    // render only once
    firstRender.current = false
  }

  return (
    <I18nProvider i18n={i18n}>
      <Component {...pageProps} />
    </I18nProvider>
  )
}

All components that consume the translations need to be under the Lingui.js <I18nProvider> component. In order to determine which language to load, we look at the Next.js router's locale property. Translations are passed to the component via pageProps.translation. If you are wondering how the pageProps.translation property is created, we are going to tackle that next.

Every page in src/pages needs to load the appropriate file with the translations before it gets rendered; those files reside in src/translations/locales/{locale}. Because our pages are statically generated, we are going to do it via the Next.js getStaticProps function:

// src/pages/index.tsx
export const getStaticProps: GetStaticProps = async (ctx) => {
  const translation = await loadTranslation(
    ctx.locale!,
    process.env.NODE_ENV === 'production'
  )
  return { props: { translation } }
}

As you can see, we are loading the translation file with the loadTranslation function. This is how it looks:

// src/utils.ts
async function loadTranslation(locale: string, isProduction = true) {
  let data
  if (isProduction) {
    data = await import(`./translations/locales/${locale}/messages`)
  } else {
    data = await import(
      `@lingui/loader!./translations/locales/${locale}/messages.po`
    )
  }
  return data.messages
}

The interesting thing about this function is that it conditionally loads the file depending on whether we are running the Next.js project in production or not.
This is one of the great things about Lingui.js; when we are in production we are going to load compiled ( .js) files, but when we are in development mode, we are going to load the source ( .po) files. As soon as we change the code in the .po files it is going to immediately reflect in our app. Remember, .po files are the source files where we write the translations, which are then compiled to plain .js files and loaded in production with the regular JavaScript import statement. If it weren’t for the special @lingui/loader! webpack plugin, we would have to constantly manually compile the translation files to see the changes while developing. Changing the language dynamically Up to this point, we handled the static generation, but we also want to be able to change the language dynamically at runtime via the dropdown. First, we need to modify the _app component to watch for location changes and start loading the appropriate translations when the router.locale value changes. This is pretty straightforward; all we need to do is to use the useEffect hook. Here is the final _app component: // _app.tsx // import statements omitted initTranslation(i18n) function MyApp({ Component, pageProps }) { const router = useRouter() const locale = router.locale || router.defaultLocale const firstRender = useRef(true) // run only once on the first render (for server side) if (pageProps.translation && firstRender.current) { i18n.load(locale, pageProps.translation) i18n.activate(locale) firstRender.current = false } // listen for the locale changes useEffect(() => { if (pageProps.translation) { i18n.load(locale, pageProps.translation) i18n.activate(locale) } }, [locale, pageProps.translation]) return ( <I18nProvider i18n={i18n}> <Component {...pageProps} /> </I18nProvider> ) } Next, we need the build the dropdown component. Every time the user selects a different language from the dropdown, we are going to load the appropriate page. 
For that, we are going to use the Next.js router.push method to instruct Next.js to change the locale of the page (which will, in turn, be picked up by the useEffect we created in the _app component): // src/components/Switcher.tsx import { useRouter } from 'next/router' import { useState, useEffect } from 'react' import { t } from '@lingui/macro' type LOCALES = 'en' | 'sr' | 'es' | 'pseudo' export function Switcher() { const router = useRouter() const [locale, setLocale] = useState<LOCALES>( router.locale!.split('-')[0] as LOCALES ) const languages: { [key: string]: string } = { en: t`English`, sr: t`Serbian`, es: t`Spanish` } // enable 'pseudo' locale only for development environment if (process.env.NEXT_PUBLIC_NODE_ENV !== 'production') { languages['pseudo'] = t`Pseudo` } useEffect(() => { router.push(router.pathname, router.pathname, { locale }) }, [locale, router]) return ( <select value={locale} onChange={(evt) => setLocale(evt.target.value as LOCALES)} > {Object.keys(languages).map((locale) => { return ( <option value={locale} key={locale}> {languages[locale as unknown as LOCALES]} </option> ) })} </select> ) } Pseudolocalization Now I’m going to address all the pseudo code that you have seen in the examples. Pseudo localization is a software testing method that replaces text strings with altered versions while still maintaining string visibility. This makes it easy to spot which strings we have missed wrapping in the Lingui.js components or macros. So when the user switches to the pseudo locale, all the text in the application should be modified like this: Account Settings --> [!!! Àççôûñţ Šéţţîñĝš !!!] If any of the text is not modified, that means that we probably forgot to do it. When it comes to Next.js, the framework has no notion of the special pseudo localization, it is just another language to be routed to. However, Lingui.js requires special configuration. Other than that, pseudo is just another language we can switch to. 
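A pseudo-locale transform like the one described above is easy to sketch: wrap the string in loud markers and swap letters for accented look-alikes. This is a toy version with a hand-picked character map; Lingui.js's actual pseudo output differs in detail:

```javascript
// Hand-picked map of plain letters to accented look-alikes.
const MAP = {
  A: 'À', a: 'à', c: 'ç', e: 'é', g: 'ĝ', i: 'î',
  n: 'ñ', o: 'ô', S: 'Š', s: 'š', t: 'ţ', u: 'û'
};

// Replace recognizable letters and add loud markers so untranslated
// (unwrapped) strings stand out immediately in the UI.
function pseudoLocalize(text) {
  const mangled = [...text].map((ch) => MAP[ch] || ch).join('');
  return `[!!! ${mangled} !!!]`;
}

console.log(pseudoLocalize('Account Settings')); // '[!!! Àççôûñţ Šéţţîñĝš !!!]'
```

Because the text stays readable, you can still navigate the app in the pseudo locale while any string that appears unmangled is a translation you missed.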
The pseudo locale should only be enabled in development mode.

Conclusion

In this article, I have shown you how to translate and internationalize a Next.js application. We have done static rendering for multiple languages and on-demand language switching. We have also created a nice development workflow where we don't have to manually compile translation strings on every change. Next, we implemented a pseudo locale in order to visually check that there are no missing translations. If you have any questions post them in the comments, or if you find any issues with the code in the demo, make sure to open an issue on the github repository.

3 Replies to "The complete guide to internationalization in Next.js"

Hi friend. It is a great article. With your tutorial I have set up i18n in my next.js application with Lingui.js, thanks. But I ran into some issues that I resolved; maybe it will help others. NB: I don't use Typescript.

1 – I get Module not found: Can't resolve 'fs' when I add the Lingui configuration and tools. To fix: configure .babelrc like this:

{
  "presets": ["next/babel"],
  "plugins": ["macros"]
}

2 – Most Next.js application folders don't have a "src" folder, so the paths where Lingui looks for translations can be an issue if they start with "src"; lingui extract will not return any data.

3 – "npm run lingui extract" is an issue because we haven't set up the script. To fix: in the scripts section of package.json we can add:

{
  "extract": "lingui extract",
  "compile": "lingui compile"
}

4 – In _app.js I removed "firstRender.current" because it blocks the rendering when I change the language in my menu.

But again thank you, I set up translation in my app and maybe I'll add NEXT_LOCALE.

Great article. Thanks. If you have any problems with the code you can file an issue on the github repo, and we can take it from there.

Ok cool. I'll do it.
https://blog.logrocket.com/complete-guide-internationalization-nextjs/
Developer on the boo programming language and Bamboo.Prevalence. Also known as Bamboo.

After a long wait boo is finally available as a programming language category on SourceForge. Now go and improve the categorization of your boo projects.

Motivations:

Environment Based Programming* is a design pattern founded on a very simple principle. This principle can be completely captured in C# with the following API:

namespace EnvironmentBasedProgramming {

    public delegate void Code();

    public interface IEnvironment {
        Need Provide<Need>();
    }

    public static class Environments {
        /// <summary>
        /// Executes code in a given environment.
        /// </summary>
        public static void With(IEnvironment environment, Code code);
    }

    /// <summary>
    /// Used by code to fulfill its needs.
    /// </summary>
    public static class My<Need> {
        public static Need Instance { get; }
    }
}

To make it all concrete I'll use Martin Fowler's naive example, especially for the contrast with the dependency management approaches he documents in his article. The MovieLister component provides a list of movies directed by a particular director. In order to fulfill its contract it needs the list of all known movies, something that a MovieFinder service would provide:

class MovieLister {
    public Movie[] MoviesDirectedBy(string director) {
        // the component states its need through the My idiom
        var finder = My<IMovieFinder>.Instance;
        // ... filter the movies returned by finder ...
    }
}

Notice how the code expresses its needs using the My idiom. MovieLister can now be executed in a suitable environment using the With primitive:

Environments.With(environment, delegate {
    foreach (var movie in new MovieLister().MoviesDirectedBy("Terry Jones"))
        Console.WriteLine(movie.Title);
});

A suitable environment in this case would have to deliver a valid IMovieFinder instance upon request.
The following implementation should suffice:

public class DummyMovieFinder : IMovieFinder
{
    public IEnumerable<Movie> FindAll()
    {
        yield return new Movie { Title = "Monty Python's Life of Brian", Director = "Terry Jones" };
    }
}

The missing piece in the puzzle is the final EBP building block - ClosedEnvironment:

public class ClosedEnvironment : IEnvironment
{
    private readonly object[] _bindings;

    public ClosedEnvironment(params object[] bindings)
    {
        _bindings = bindings;
    }

    public T Provide<T>()
    {
        foreach (var binding in _bindings)
            if (binding is T) return (T) binding;
        return default(T);
    }
}

Which allows the environment for the example to be defined as:

var environment = new ClosedEnvironment(new DummyMovieFinder());

Component activation and lifetime are not aspects dealt directly with by EBP and are better treated as different environment strategies (one can easily imagine an environment that automatically instantiates components based on naming conventions or metadata).

The complete listings follow. The example:

namespace EnvironmentBasedProgramming.NaiveExample
{
    using System;
    using System.Collections.Generic;

    class Movie
    {
        public string Title { get; set; }
        public string Director { get; set; }
    }

    interface IMovieFinder
    {
        IEnumerable<Movie> FindAll();
    }

    class DummyMovieFinder : IMovieFinder
    {
        public IEnumerable<Movie> FindAll()
        {
            yield return new Movie { Title = "Monty Python's Life of Brian", Director = "Terry Jones" };
        }
    }

    class MovieLister
    {
        public IEnumerable<Movie> MoviesDirectedBy(string director)
        {
            foreach (var movie in My<IMovieFinder>.Instance.FindAll())
                if (movie.Director == director)
                    yield return movie;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            var environment = new ClosedEnvironment(new DummyMovieFinder());
            Environments.With(environment, delegate {
                PrintMoviesDirectedBy("Terry Jones");
            });
        }

        private static void PrintMoviesDirectedBy(string directorName)
        {
            foreach (var movie in new MovieLister().MoviesDirectedBy(directorName))
                Console.WriteLine(movie.Title);
        }
    }
}

The minimalist EBP framework written for this article:

namespace EnvironmentBasedProgramming
{
    using System;

    public delegate void Code();

    public interface IEnvironment
    {
        Need Provide<Need>();
    }

    public static class Environments
    {
        /// <summary>
        /// Executes code in a given environment.
        /// </summary>
        public static void With(IEnvironment environment, Code code)
        {
            IEnvironment previous = _environment;
            _environment = environment;
            try
            {
                code();
            }
            finally
            {
                _environment = previous;
            }
        }

        private static IEnvironment _environment;

        internal static IEnvironment Current
        {
            get { return _environment; }
        }
    }

    /// <summary>
    /// Used by code to fulfill its needs.
    /// </summary>
    public static class My<Need>
    {
        public static Need Instance
        {
            get
            {
                var current = Environments.Current;
                if (current == null)
                    throw new InvalidOperationException("No environment to provide '" + typeof(Need) + "'.");
                return current.Provide<Need>();
            }
        }
    }

    public class ClosedEnvironment : IEnvironment
    {
        private readonly object[] _bindings;

        public ClosedEnvironment(params object[] bindings)
        {
            _bindings = bindings;
        }

        public T Provide<T>()
        {
            foreach (var binding in _bindings)
                if (binding is T) return (T) binding;
            return default(T);
        }
    }
}

In a future article I'll explore environment chaining and convention based service discovery. Thoughts?

* or to use a name more to the style of Martin Fowler: Dynamically Scoped Service Locator (not to be confused with Dynamic Service Locator)

... What do I need to do to get it to work on Windows? I've got mono + eclipse + rcp + monolipse installed. Eclipse's DMONO_HOME variable points the right place. I've got Boo 0.9 in a folder, but I'm not sure if it's the right place or if I need to do anything else. ... What's missing? --Søren, February 12, 2009 06:21 AM

Thanks for the question. The eclipse plugin expects the boo assemblies to be in the mono GAC and it also expects to find the boo compiler under $MONO_HOME/lib/boo, something that can be easily arranged by running nant in the boo source folder (yes, windows users need a source distro or a svn checkout for now):

$ nant install -D:mono.prefix=c:/dotnet/mono-2.2

Enjoy!

Very cool man. BTW what's the name of the song in the background? :) --Andrés G. Aragoneses, February 6, 2009 09:52 PM
Thanks, Andrés. The song is Take Five. I love it.

Thanks, Cedric. I prefer /usr/local for my mono version and /usr for the system's mono version. Configuration via user interface is not yet implemented and I couldn't find the patch here, I hope that now that it's out there more people will contribute and this project might even realize its full potential. For now you need to start eclipse passing the MONO_HOME system property like this:

eclipse -data /path/to/workspace -vmargs -DMONO_HOME=/usr

The same will work on windows. Summing up:

Let's consider the simple issue of defining a thread local variable. In .net this can be achieved rather efficiently through a static field annotated with the System.ThreadStatic attribute:

class WithPerThreadState:
    [System.ThreadStatic]
    static _state as PerThreadState

The problem with that is that there's redundancy (we have to say static twice) and there's noise (square brackets). If we're doing this more than once in a code base it would be good to avoid repeating ourselves. We should be able to hide the unnecessary implementation details of a thread static variable behind a macro and more succinctly say:

class WithPerThreadState:
    ThreadStatic _state as PerThreadState

And now we are:

macro ThreadStatic:
    case [| ThreadStatic $name as $type |]:
        yield [| [System.ThreadStatic] static $name as $type |]

This example makes use of some of the most important improvements to macro expression in boo 0.9. Let's dissect it block by block:

macro ThreadStatic:
    ...

Friendly macro definition using the macro macro. Yes, that's correct. For the boo compiler a macro is simply a type that implements a specific interface and optionally adheres to the "Macro" suffix naming convention. The macro macro generates the required boilerplate.

...
    case [| ThreadStatic $name as $type |]:
        ...
The optional case clauses of a macro definition pattern match against the macro application and execute the body of the first matching case. The particular pattern we see here with the [| |] brackets is a code pattern. Pattern matching is a whole new feature in itself, for now it should suffice to say that $variableName inside a code pattern captures the code appearing at that position in the variable variableName. The case will only succeed if the macro was applied with a single as expression as its argument in which case the left operand of the as expression is made available through the name variable and its right operand through the type variable. ... yield [| [System.ThreadStatic] static $name as $type |] yield means macros are generators. They can produce many nodes of different types. In this particular case the macro is producing a single field declaration node expressed with a code literal*. The $ inside a code literal is the splice operator: its operand is evaluated and the resulting node inserted into the code tree. The usual modifiers can be used to control the visibility of members generated by the macro: public ThreadStatic State as PerThreadState Modifiers and attributes are automatically propagated to every node yielded by the macro. ** There's much more to be said on these new capabilities but I'll leave that for another day. For now these test cases might help. Happy meta hacking! * I've referred to these [| |] entities as code patterns before. That's correct. Code patterns are code literals used in a pattern matching context such as the case in the ThreadStatic example. ** macros can also control how modifiers and attributes are handled by modifying the code tree directly. 
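For readers coming from outside .NET, the per-thread static state that [ThreadStatic] (and the macro above) provides has a close analogue in Python's threading.local. The following sketch is my own comparison and is not part of the original post:

```python
import threading

class WithPerThreadState:
    # A class-level threading.local plays the role of a [ThreadStatic] field:
    # each thread sees its own independent `state` attribute.
    _local = threading.local()

    @property
    def state(self):
        return getattr(self._local, "state", None)

    @state.setter
    def state(self, value):
        self._local.state = value

obj = WithPerThreadState()
obj.state = "main"
seen = []

def worker():
    # A fresh thread starts with no per-thread state of its own.
    seen.append(obj.state)
    obj.state = "worker"
    seen.append(obj.state)

t = threading.Thread(target=worker)
t.start()
t.join()
# The main thread's value is untouched by the worker's assignment.
print(obj.state, seen)
```

The point is the same as the boo macro's: the per-thread storage mechanics stay hidden behind one small declaration.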
Also known as the best boo release ever its chief weapons are:

macro printLines:
    for arg in printLines.Arguments:
        yield [| System.Console.WriteLine($arg) |]

printLines "silly", "silly", "silly"

macro given:
    macro when:
        yield [| print "given.when" |]
    yield

macro alert:
    macro when:
        yield [| print "alert.when" |]
    yield

given:
    when // given.when

alert:
    when // alert.when

macro ThreadStatic:
    case [| ThreadStatic $name as $type |]:
        yield [| [System.ThreadStatic] static $name as $type |]

class Environments:
    private ThreadStatic _current as Environment

import Boo.Lang.PatternMatching

def Eval(e as Expression) as int:
    match e:
        case Const(Value):
            return Value
        case InfixExpression(Operator: "+", Left, Right):
            return Eval(Left) + Eval(Right)
        case InfixExpression(Operator: "-", Left, Right):
            return Eval(Left) - Eval(Right)

import System.Linq.Enumerable from System.Core

[Extension] def MakeString[of T](source as T*):
    return join(source, ", ")

evenDoubles = range(10).Where({ i as int | i % 2 == 0 }).Select({ i as int | i * 2 })
print evenDoubles.MakeString()

def Using[of T(System.IDisposable)](value as T, block as System.Action[of T]):
    try:
        block(value)
    ensure:
        value.Dispose()

Using(System.IO.File.OpenText("TFM.TXT"), { reader | print reader.ReadLine() })

class Song:
    Name as string:
        public get:
            return _name
        internal set:
            _name = value
    ...

def ToHex(n as int):
    return "0x${n:x4}"

print ToHex(42)

This release is brought to you by Avishay Lavie, Cedric Vivier, Daniel Grunwald, Marcus Griep and yours truly. Full change log is here. Download it from here and have fun!
Meanwhile in a repository not far away:

namespace metaboo

import Boo.Lang.Compiler
import Boo.Lang.Compiler.Ast
import Boo.OMeta
import Boo.OMeta.Parser

syntax Units:
    atom = mass | super
    mass = (integer >> value as Expression, "kg") ^ [| Mass($value, "kg") |]

syntax Ranges:
    atom = integer_range | super
    integer_range = (integer >> begin as Expression, DOT, DOT, integer >> end as Expression) ^ [| range($begin, $end) |]

def parse(code as string):
    compiler = BooCompiler()
    compiler.Parameters.References.Add(System.Reflection.Assembly.GetExecutingAssembly())
    compiler.Parameters.Input.Add(IO.StringInput("code", code))
    compiler.Parameters.Pipeline = CompilerPipeline()
    compiler.Parameters.Pipeline.Add(BooParserStep())
    return compiler.Run()

code = """
import metaboo.Units
import metaboo.Ranges

a = 3kg
print a

for i in 1..3:
    print i
"""

result = parse(code)
assert 0 == len(result.Errors), result.Errors.ToString()
assert 1 == len(result.CompileUnit.Modules)
print result.CompileUnit.Modules[0].ToCodeString()

And the output is, of course:

import metaboo.Units
import metaboo.Ranges

a = Mass(3, 'kg')
print a
for i in range(1, 3):
    print i

That's really funny :) It's a copy'n'paste culture indeed.

Yeah! When I joined Db4objects a few years ago my first assignment was to research and implement a decent solution for getting db4o to work on the .net platform. There was already some investigation going on into using the Eclipse JDT API as the basis for a source to source translator which proved to be a wise choice in the long run. Eventually sharpen was born and after a lot of love we finally reached a point where the translated c# code would look really good. People would get really interested every time sharpen was mentioned but for several reasons it wasn't publicly available. Until now. Hooray! I'm really looking forward to what people will build on top of that.

Another great Ted talk. I particularly love the sequence from 7:08 to 7:40.

I'm the man. The final man.
You know, we've been mutating for 4 billion years but now because it's me we've stopped.

Except this time is true. :)

Almost four years ago the first feature request was entered into the boo issue tracker:

There should be a way to extend the parser so it could recognize custom measurement unit literals such as 1kg and 2cm.

Since then boo has improved a lot but with no solution for BOO-1 on the horizon. I think I'm getting close to solving it in an interesting way using PEGs implemented as a graph of expression objects. PEGs are very likable. Conceptually simple and composable. Take the PEG that recognizes integer expressions involving the + and * operators:

grammar <- spaces addition eof
addition <- term ("+" spaces term)*
term <- factor ("*" spaces factor)*
factor <- [0-9]+ spaces
spaces <- (' ' / '\t')*
eof <- !.
The grammar can be translated to boo very simply using the peg macro from Boo.Pegs:

import Boo.Pegs

peg:
    grammar = spaces, addition, eof
    addition = term, --("+", spaces, term)
    term = factor, --("*", spaces, factor)
    factor = ++[0-9], spaces
    spaces = --(' ' / '\t')
    eof = not any()

assert grammar.Match(PegContext(" 6*6 + 6 "))

I had to be a little creative in mapping the PEG operators to valid boo expressions because as it must be clear by now boo doesn't allow the introduction of completely new syntax and that's what the fuss is all about here. I actually like the way it looks. Implementing something more useful such as expression evaluation on top of that requires a few semantic actions operating a stack. Semantic actions are just closures that get executed as matching succeeds. $text returns the text matched so far by the current rule. Beautiful.

The underlying implementation based on a graph of expression objects really shines when one considers what it takes to extend the grammar above with support for hexadecimal literals:

peg:
    // rebind
    factor.Expression = hex_number / factor.Expression
    hex_number = "0x", ++hex_digit, { push(int.Parse($text[2:], NumberStyles.HexNumber)) }, spaces
    hex_digit = [0-9, a-f, A-F]

assert grammar.Match(PegContext(" 0xa*2 + 11*0x02"))
assert 42 == pop()

It's not yet clear how this extensibility mechanism will be exposed at the boo language level but the simplicity at the peg level is encouraging.
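To see why PEGs feel so natural, the arithmetic grammar above can be sketched as a handful of mutually recursive functions. This is an illustrative Python toy of my own, not the Boo.Pegs implementation:

```python
import re

def evaluate(source):
    # PEG-style recursive descent for: addition of products of integers.
    pos = 0

    def spaces():  # spaces <- (' ' / '\t')*
        nonlocal pos
        while pos < len(source) and source[pos] in " \t":
            pos += 1

    def factor():  # factor <- [0-9]+ spaces
        nonlocal pos
        m = re.match(r"[0-9]+", source[pos:])
        if not m:
            raise SyntaxError("integer expected at %d" % pos)
        pos += m.end()
        spaces()
        return int(m.group())

    def term():  # term <- factor ("*" spaces factor)*
        nonlocal pos
        value = factor()
        while pos < len(source) and source[pos] == "*":
            pos += 1
            spaces()
            value *= factor()
        return value

    def addition():  # addition <- term ("+" spaces term)*
        nonlocal pos
        value = term()
        while pos < len(source) and source[pos] == "+":
            pos += 1
            spaces()
            value += term()
        return value

    spaces()
    result = addition()
    if pos != len(source):  # eof <- !.
        raise SyntaxError("trailing input at %d" % pos)
    return result

print(evaluate(" 6*6 + 6 "))  # 42
```

Each grammar rule becomes one function, and the semantic actions (here, the arithmetic itself) ride along with the match, exactly the structure the peg macro captures declaratively.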
One last feature worth pointing out is the ability to match based on a previously matched rule. For instance, the closing tag of an xml element must match the name in the starting tag:

import Boo.Pegs

peg:
    element = '<', tag, '>', content, '</', @tag, '>'
    tag = ++(a-z)
    content = --(element / text)
    text = not "<", any()

assert element.Match(PegContext("<foo><bar>Hello</bar></foo>"))

I've found the idea for the last match operator @ first described in this article. Great idea. I've been also greatly inspired by conversations I've had with Massi who's exploring similar territory and Jb during the last Mono Meeting in Madrid and with Cedric over a beer in Paris. I think Massi will be pleased to know that I haven't given any thoughts to performance leaving all the fun to him.

Extensible parsing. Soon in a boo compiler close to you.

It was more than twenty years ago that Dijkstra wrote against the perils of anthropomorphism on science. And here we stand building whole industries on top of it. But is programming dominantly math? Or is it mainly human communication? Maybe we swung too far to the latter. I speculate that the current rise of functional programming can also be attributed to its liberating effect - one is no longer expected to attribute proper intent to entities before a program can be seen to make sense. As a constant reminder of that I'm tempted to use a different font for my programming.

Lots of improvements in this release including a simpler way for writing macros, support for nested functions, a better interactive interpreter, error messages that include suggestions for misspelled names, exception filters, exception fault handlers and for loop IDisposable.Dispose integration. With many many kudos to Avishay Lavie, Cédric Vivier, Daniel Grunwald and Marcus Griep!

What?
- irc channel - irc://irc.codehaus.org/boo

Paul Graham's arc language is finally out.

"One of the things you'll discover as you learn more about macros is how much day-to-day coding in other languages consists of manually generating macro expansions."

Gaiaware has just announced the Gaia Programming Contest. A contest "... about creating an Ajax Application that will serve as a meeting place for people dedicated to solving environmental issues...". Very good but there's more: "... no Close Source dependencies can be used which means that the end product must be compilable on Mono ...". Great! Of course people using boo and db4o have a huge headstart. So what are you waiting for?

Dennis Kucinich for emperor!

It's reuseable as a whole and any individual piece you look at is just as reuseable. Yeah, I'm enjoying the Cryptonomicon so far.

As I went through the conference memories I recovered this one conversation I had with Jim Purbrick after the boo presentation. His idea would be to have boo as the language for building languages in Second Life. Niiiiice.

What a great experience. A chance to interact live with a dear friend. Free Software, Hacking, Women, Futurama, McDonalds, love spreading, Militant Atheism, Monty Python, Douglas Adams and the French Way. Had lots of interesting exchanges of ideas with Massi, ranging from "extensible parsing through composeable PEGs with optimal performance" to Carlos Castañeda, Jesus Christ, meta-physics, religion and Pink Floyd. The Cryptonomicon really got me. Got to put a face on Joachim and see how really cool Unity is. Jeroen IKVM Frijters is a funny guy! On Thursday I got to talk about db4o which led me to meet a few db4o users hanging around the conference. I've also got to spread the gospel about boo for which I got a hugely positive response.
:) The presentation material is here. Looking forward to the next one.

Following Carl's steps I'm also claiming my blog by publishing a link to my Technorati Profile.

So it's official now, I'll be speaking at the Mono Summit 2007. It will be great to see you there! Many thanks to the great folks at db4o for sponsoring my trip.

Evelyn Glennie shares with us just how. Amazing presentation by an amazing human being/musician.

Richard Dawkins: An atheist's call to arms.

Lessig gives an inspiring talk on how creativity is being strangled by the law. Quoting JB: "We need more of him" We certainly do.

I had the most fascinating dream last night, very complex and full of details and by its very nature impossible for me to describe it. It was a mix of the Tao, The Hitchhiker's Guide to The Galaxy, Big Bang Theory and Computer Science and if I was to sum it all up it would be something like "the multiverse as a breadth-first search algorithm". So what would be the optimal universe configuration? Looking from where I stand it's hard to believe it's going to be ours. I don't believe I'm the first one to think it that way so now I'm googling for references. Drop me a line if you know of any.

Update 1: this seems to be close enough so I'm definitely getting it.
Update 2: I got depth-first and breadth-first mixed up :)

Oren talks about a simple but interesting macro to aid with mocking. I decided to see if and how the latest meta programming facilities I've been working on are actually useful. Here's the complete application, what do you think?
namespace Adapter

import Boo.Lang.Compiler
import Boo.Lang.Compiler.Ast
import Boo.Lang.Compiler.Ast.Visitors
import Boo.Lang.Compiler.TypeSystem
import Boo.Lang.Compiler.MetaProgramming

class AdapterMacro(AbstractAstMacro):

    def Expand(macro as MacroStatement):
        if macro.Arguments.Count != 1 or not macro.Arguments[0] isa ReferenceExpression:
            raise "adapter must be called with a single argument"
        entity = NameResolutionService.Resolve(macro.Arguments[0].ToString())
        raise "adapter only accept types" unless entity.EntityType == EntityType.Type
        BuildType(macro, entity)

    def GetModule(node as Node) as Boo.Lang.Compiler.Ast.Module:
        return node.GetAncestor(NodeType.Module)

    def BuildType(macro as MacroStatement, type as IType):
        adapterInterface = [|
            interface $("I" + type.Name):
                pass
        |]
        adapter = [|
            class $(type.Name + "Adapter")($adapterInterface):
                theTarget as $(type.FullName)
                def constructor(target as $(type.FullName)):
                    theTarget = target
        |]
        GetModule(macro).Members.Add(adapter)
        GetModule(macro).Members.Add(adapterInterface)
        for member in type.GetMembers():
            AddMethod(adapter, adapterInterface, member) if member isa IMethod
        BooPrinterVisitor(System.Console.Out).Visit(adapterInterface)
        BooPrinterVisitor(System.Console.Out).Visit(adapter)

    def AddMethod(adapter as ClassDefinition, adapterInterface as InterfaceDefinition, method as IMethod):
        if not method.IsPublic: return
        if method.IsStatic: return
        if method.ReturnType.IsByRef: return
        if method.ReturnType.IsArray: return
        interfaceMethod = [|
            def $(method.Name)() as $(method.ReturnType.FullName):
                pass
        |]
        forwarder = interfaceMethod.CloneNode()
        forwardInvocation = [| theTarget.$(method.Name)() |]
        for param in method.GetParameters():
            if param.IsByRef or param.Type.IsArray: return
            forwarder.Parameters.Add(
                ParameterDeclaration(
                    Name: param.Name,
                    Type: SimpleTypeReference(param.Type.FullName)))
            interfaceMethod.Parameters.Add(
                ParameterDeclaration(
                    Name: param.Name,
                    Type: SimpleTypeReference(param.Type.FullName)))
            forwardInvocation.Arguments.Add(ReferenceExpression(param.Name))
        adapterInterface.Members.Add(interfaceMethod)
        adapter.Members.Add(forwarder)
        if method.ReturnType == TypeSystemServices.VoidType:
            forwarder.Body.Add(forwardInvocation)
        else:
            forwarder.Body.Add([| return $forwardInvocation |])

code = [|
    import Adapter
    adapter int
    print Int32Adapter(42) isa IInt32
|]

try:
    module = compile(code, typeof(AdapterMacro).Assembly)
    module.EntryPoint.Invoke(null, (null,))
except x as CompilationErrorsException:
    print x.Errors.ToString(true)

If you think it makes sense to have a JVM backend for the boo programming language, join us.

This release includes bug fixes, performance improvements and better meta-programming capabilities [1]. Special thanks to Marcus Griep, Nick Fortune and Matt McElheny!

What? - Download - Official irc channel - irc://irc.codehaus.org/boo

Complete change log here. Have fun!

[1] see the 'match' and 'data' macros in the boo-extensions project for examples

Yeah, I won't be playing Halo 3 unless I get one for free (hint, hint).

I finally took some time off this weekend to implement a simple object pattern matching facility as part of the newly created boo-extensions project.
Here's some code using the new 'match' and 'data' macros to implement an expression evaluator:

import Boo.PatternMatching
import Boo.Adt

def eval(e as Expression) as int:
    match e:
        case Const(value):
            return value
        case Add(left, right):
            return eval(left) + eval(right)

def simplify(e as Expression) as Expression:
    match e:
        case Add(left: Const(value: 0), right):
            return simplify(right)
        case Add(left, right: Const(value: 0)):
            return simplify(left)
        case Add(left, right):
            return Add(simplify(left), simplify(right))
        case _:
            return _

data Expression = Const(value as int) | Add(left as Expression, right as Expression)

e = Add(Add(Const(19), Const(0)), Add(Const(0), Const(23)))
print simplify(e)
print eval(e)
print eval(simplify(e))

'match' coupled with our recently acquired quasiquoting capabilities makes writing macros for boo a pleasant endeavour actually. Don't believe me? Take a look at the data macro again.

This is a Lisp tutorial worth reading.

This is so nice.

This release includes bug fixes, improves on generic support and introduces a few metaprogramming facilities (still on early stage). Many thanks to the growing boo community!

What? - Download - Official irc channel - irc://irc.codehaus.org/boo

Full change log here.

It all really started when Bob Pasker asked me if db4o worked with scala. That kind of subtext is usually all I need to start exploring another programming language. Specially one of such a tasty functional flavor. My experiment was to port a simple time tracking application I wrote for myself some time ago from .net/boo to jvm/scala. The application works as a tray icon that lets you right-click your way through projects and tasks. There's no reporting interface other than a REPL window that allows you to execute arbitrary code against the app's object model :) It actually took me only a week to get it up and running on the three major platforms I work with (windows, linux and macosx) thanks to scala, db4o and swt.
My impression so far is pretty darn good. Scala is a beautiful language. The tight integration with java means great tools such as db4o work out of the box. And the eclipse support goes as far as supporting eclipse plugins written in scala (niiiice). If you do java you should really be giving scala a ride.

You can find the application at. There's a readme with instructions on how to get it running either through eclipse or ant. Even if you don't care about the time tracking functionality at all I think the SWT REPL window should give you some fun :)

I had the privilege to attend Lessig's talk at the FISL some time ago. It really moved me. I wish him the best of luck on his new crusade.

By the way named arguments can also be used with meta methods:

a = dict(A: "foo", B: "bar")

[meta] def dict(keywords as (ExpressionPair)):
    h = [| {} |]
    for pair in keywords:
        h.Items.Add([| $((pair.First as ReferenceExpression).Name): $(pair.Second) |])
    return h

Boo meta methods are methods that take code trees as input and return code trees as output. In addition they must be marked with the MetaAttribute for the compiler to recognize them as so. The compiler invokes meta methods during the type resolution phase and replaces the code tree at the point of invocation with the code tree returned by the meta method. Multiple overloads can be specified in which case the types of the code tree arguments will be used for the purpose of overload resolution.
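A rough analogue of a meta method can be built with Python's ast module: a function takes a code tree in and returns a rewritten code tree out. This sketch is my own illustration of the concept, not boo; it expands a condition into code that raises with the condition's own source text as the message:

```python
import ast

def expand_assert(condition_src):
    # "Meta method": code tree in, code tree out.
    # Builds: if not <condition>: raise AssertionError('<condition_src>')
    condition = ast.parse(condition_src, mode="eval").body
    tree = ast.Module(
        body=[ast.If(
            test=ast.UnaryOp(op=ast.Not(), operand=condition),
            body=[ast.Raise(
                exc=ast.Call(
                    func=ast.Name(id="AssertionError", ctx=ast.Load()),
                    args=[ast.Constant(value=condition_src)],
                    keywords=[]),
                cause=None)],
            orelse=[])],
        type_ignores=[])
    return ast.fix_missing_locations(tree)

x = None
exec(compile(expand_assert("x is None"), "<meta>", "exec"))  # passes silently

message = None
try:
    exec(compile(expand_assert("x is not None"), "<meta>", "exec"))
except AssertionError as error:
    message = str(error)
print(message)  # x is not None
```

The difference from boo is where the expansion happens: here it is done by hand at run time, while boo's compiler invokes meta methods automatically during type resolution.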
Assert can be invoked with one or two arguments:

assert x is null // use the code as the assertion message
assert x is not null, "x shouldn't be null" // custom exception or string

One way of implementing assert would be:

[meta] def assert(condition as Expression):
    return [|
        if not $condition:
            raise AssertionFailedException($(condition.ToCodeString()))
    |]

[meta] def assert(condition as Expression, exception as Expression):
    return [|
        if not $condition:
            raise $exception
    |]

$ is generally called the "splice" operator and it means "evaluate me at compilation time". Alternatively, assert could declare a variable parameter list:

[meta] def assert(*arguments as (Expression)):
    condition = arguments[0]
    if len(arguments) > 1: // assuming 2 arguments
        exception = arguments[1]
    else:
        exception = [| AssertionFailedException($(condition.ToCodeString())) |]
    return [|
        if not $condition:
            raise $exception
    |]

An interesting aspect of boo's splicing semantics is exemplified by the subexpression:

[| AssertionFailedException($(condition.ToCodeString())) |]

'using' provides for deterministic disposal of resources. The argument should implement the IDisposable interface to have its Dispose method called at the end of a provided code block. For instance, the following code:

using socket=OpenConnection():
    socket.Send("ping")

expands to:

socket=OpenConnection()
try:
    socket.Send("ping")
ensure:
    if socket isa IDisposable:
        (socket as IDisposable).Dispose()

[meta] def using(e as Expression, block as BlockExpression):
    temp = uniqueName()
    return [|
        $temp = $e
        try:
            $(block.Body)
        ensure:
            if $temp isa IDisposable:
                ($temp as IDisposable).Dispose()
    |]

Due to the reuse of syntactic elements in different contexts the parser needs to follow some conventions in order to infer the meaning of a quasi-quote expression. Take the quasi-quote [| a as string |]. What does it mean? If we were to interpret it as an expression it would mean a try cast expression.
If we were to interpret it as a type member it would mean a field definition. For the inline form the convention is to try to interpret it as either an expression or an expression pair or an import declaration or a namespace declaration. The block form is first probed for a type member definition, then for a single statement, then for a block of statements, and then for a module. Example:

e = [| a as string |]
assert e isa TryCastExpression

f = [|
    a as string
|]
assert f isa Field

m = [|
    namespace Spam

    print "Spam! "*3
|]
assert m isa Module

It should be possible to specify the exact context of quasi-quotation by using some special syntax but the specific details are not clear at this point. Soon to reach a svn repository near you. Oh, yeah, and many thanks to the people behind Template Haskell!

Boo 0.7.8 is here! With many thanks to the people who contributed for this release: Andrew Davey, Avishay Lavie, Cedric Vivier, Chris Prinos, Doug Holton, Jim Lewis and Max Bolingbroke. Highlights for this release include dramatic improvements to dynamic dispatching performance, a friendlier DSL syntax and of course bug fixes. This is also the last release to support .net 1.1.

My good friend JB has just let me know it was his idea. Sorry, JB :)

Ge. I am impressed.

Oren asks: "Assuming that you have no access to tooling, and you don't have the resources to built NHibernate-sque framework, how would you approach building a Domain Driven application on the naked CLR?"

The most interesting part of the question for me is "and you don't have the resources to built NHibernate" because it immediately goes to the seemingly basic assumption most people have these days that "Persistence => SQL". While it might be certainly true that a relational backend is a given for most enterprisey scenarios it is certainly not true that all persistent applications have to go through the pain.
Once upon a time a team with 4 people (2 developers, 2 web designers) built the web content management system for the 3rd largest TV station in Brazil on top of System.Runtime.Serialization using the Object Prevalence architecture. Yeah, the naked CLR. If your specific application can't afford keeping all its objects in memory all the time and you don't mind putting a little clothes on, there's db4o.

I've applied the same optimization technique to static method and binary operator dispatching and now we have:

int*int: 1.1115984
list*int: 5.0773008
dynamicDispatch: 4.2661344
staticDispatch 1.4921456

Niiiice. Changes checked in by the way.

Around 4 years ago (!) there was this discussion about how to support some dynamic language features on top of mono. One of the topics was optimizing dynamic dispatching and apparently my suggestion was redirected to nul. 4 years later here I am finally implementing the idea in order to take boo's dynamic dispatching performance to the next level.

Before the optimization:

$ build/booi performance/duckoperators.boo
int*int: 1.101584
list*int: 29.0217312
dynamicDispatch: 51.9484224
staticDispatch 1.4921456

Each line reports how long it takes to execute the described operation with dynamic dispatching 5_000_000 times (except the last line which executes the same operation as the line before it but with static dispatching). The first line tells us that it takes 1.10 seconds for boo to multiply two integer objects using dynamic dispatching. The second line says boo takes 29.02 seconds to multiply a List instance by an integer using dynamic dispatching (dynamic dispatch over static methods). The third line which is the most interesting one for our purposes here says that boo takes roughly 52 seconds to dynamically dispatch 5_000_000 instance method calls. We can see a huge overhead over static dispatching.
After the optimization:

$ build/booi performance/duckoperators.boo
int*int: 1.101584
list*int: 27.755072
dynamicDispatch: 4.055832
staticDispatch 1.4821312

Niiiiiice. So this first stab got it from 52 seconds down to 4 seconds. Not bad at all. A few changes and we'll have the same benefits for dynamic dispatching over static methods. I hope this will have a huge impact on environments that rely heavily on dynamic dispatching such as Brail. Unfortunately though this optimization is only available when building for the .NET 2.0 profile. Soon in a source code repository near you.

I've just arrived in Seattle for the Microsoft DLR Compiler Lab. JB will be here soon and I heard Miguel is joining us as well. Fun! Looking forward to getting my first Boo Silverlight application running. One of the questions I'm here for is should we migrate Boo's duck typing support to be based on the DLR? On the pros side we get better integration with all the DLR languages and perhaps better performance when executing dynamic code. On the cons side it's an additional dependency (and not a very mature one for that matter). Thoughts?

I started reviewing the overload resolution code in the boo compiler this week. Very old and, let's say, very interesting code. It was based on a fuzzy scoring system I have no idea how I came up with. Well, these things happen. The new code is based on the concepts discussed here. Thanks Avish and Daniel for that! A few test cases (mostly varargs related) had to be reviewed to comply with the improved behavior so expect a few compilation errors when updating to the new code and please let us know of any strange behavior.

Life! Feel the love!

I'm finally reading Free Play again. Always a pleasure. I've also been pair programming a lot. Another pleasure:

"Some jobs are too big to handle alone, or simply more fun when done with friends. Either case leads us into the fruitful and challenging field of collaboration.
Artists working together play out yet another aspect of the power of limits. There is another personality and style to pull with and push against. Each collaborator brings to the work a different set of strengths and resistances. We provide both irritation and inspiration for each other - the grist for each other's pearl making. We need to remind ourselves here of what is obviously true but not often enough said: that different personality styles have different creative styles. There is no one idea of creativity that can describe it all. Therefore, in collaborating with others we round up, as in any relationship, an enlarged self, a more versatile creativity." I could quote the whole chapter but hey, do yourself a favor and get the book. An interesting article by Kartik Vaddadi comparing Ruby and Boo. I found Boo's subtext "it is about the whole experience" quite present as I went through the article. Many runtime facilities were replaced by compiler extensibility facilities so tool support was not sacrificed. It's always a good feeling when I start #develop 2.0 or MonoDevelop. Just got back from Corteo. WOW! What an experience! I'm feeling really good. If you have the chance don't miss it. And this is only two days after having the pleasure of listening to Charlie Haden's Liberation Orchestra live. Yeah, I'm feeling really good. I've just finished updating the cecil repository with our latest contributions: Martin Fowler blogged about in memory databases and I could not help reading the excerpt about Prevayler at the end as FUD: Prevayler got a lot of attention for taking this kind of approach. People I know who tried it found it's tight coupling to the in-memory objects and lack of migration tools caused serious problems. We are having a discussion about NativeQueries on castleproject-devel and I decided to wrap up the discussion here by describing some of the details and shared goodies of our implementation. 
> Can I assume that you're using the new GetMethodIL (or something like that)?

We are using the Mono.Cecil API not the reflection API. Simply because Cecil's version of GetMethodBody is so much better and it already worked on most platforms we support. One of our first contributions was to get Cecil running on the Compact Framework. After some conditional compilation fun (among other things) Cecil now works on mono, ms.net 1.1, ms.net 2.0, cf 1.0 and cf 2.0. Pretty cool.

> This is just freaking cool. And the idea is awesome.

Yeah, we think so too! (:

> If you cache the execution plans, I suppose you can get a very good performance out of it.

Currently we are not even caching the execution plans and the performance is pretty close to the raw query execution performance one would get from the underlying query api. But, yes, caching the execution plans (or transformed expression trees in our case) is the next logical step.

> This is a whole different way of thinking about how to mess with the code...!
> I've only heard about Cecil until now, wasn't aware that it was operational on this level. Are there any docs yet, or is it read the code to find out?

The Cecil object model is pretty self documenting. The code we are contributing to cecil is a little more involved but still easy to follow if you stick to the public API. It basically allows you to get high level control flow graphs out of method definitions. These high level cfgs can also contain decompiled expression trees (an expression tree is a structure pretty similar to a bound boo ast).
Here's a simple but complete example that uses Cecil and our contributed code (Cecil.FlowAnalysis) to dump the body of a method:

import System
import Mono.Cecil
import Cecil.FlowAnalysis
import Cecil.FlowAnalysis.ActionFlow
import Cecil.FlowAnalysis.CodeStructure

class Address:
    [property(State)]
    _state as string

class Customer:
    [property(Address)]
    _address as Address

class Predicate:
    def Match(c as Customer):
        return c.Address.State == "SP"

def GetMatchMethod():
    asm = AssemblyFactory.GetAssembly(typeof(Predicate).Module.FullyQualifiedName)
    return asm.MainModule.Types["Predicate"].Methods.GetMethod("Match")[0]

cfg = FlowGraphFactory.CreateControlFlowGraph(GetMatchMethod())
// a cfg is at the IL instruction level it is good for
// low level IL optimization and simple analysis

afg = FlowGraphFactory.CreateActionFlowGraph(cfg)
// an afg is at the statement/expression level
// good for code analysis

// print the interesting blocks
for block in afg.Blocks:
    expression as IExpression = null
    if block isa IReturnActionBlock:
        expression = (block as IReturnActionBlock).Expression
        Console.Write("return ")
    elif block isa IAssignActionBlock:
        expression = (block as IAssignActionBlock).AssignExpression
    continue if expression is null
    // expression is an ast like expression tree
    // which supports visitors
    // it is more like a bound ast (with all references
    // pointing to the right Mono.Cecil entities)
    //
    // ExpressionPrinter is a handy visitor
    print ExpressionPrinter.ToString(expression)

The code above will print:

local0 = string.op_Equality(c.get_Address().get_State(), "SP")
return local0

Ubuntu breezy installed without a hiccup on my AMD64. Then jdk 5, eclipse and skype. Fingers crossed to install vmware player... What a nice surprise! I simply ran the installer script and it worked! So now I'm back to having linux as my main environment with Windows virtual machines. What a nice day! Let's see how mono goes...
I still have to blog about my great PDC experience but here goes a quickie. Anders having fun with boo and mono (or Miguel having fun with boo and c# or Bamboo having fun with mono and c#):

Via Martin Fowler:

"""
Static types give me the same feeling of safety as the announcement that my seat cushion can be used as a floatation device. --Don Roberts
"""

Most people seem to think of static typing mainly as a safety net. I had even used that same term before. Well, as it turns out static typing is mainly about providing two things: And that's why boo is statically typed, because it is the whole experience that counts.

Thank you Lessig! Count me in.

Georges and I have been working on a set of eclipse plugins for boo development. There are several reasons I decided to give eclipse a try: What we already have: It's already ok for my everyday, cross platform use but YMMV as usual. The glorious screenshots (click to enlarge):

The contributed boo perspective on my ubuntu system
The background builder reports errors and warnings as you save the files
Some MacOSX love from Georges
Code completion on my ubuntu system
NUnit integration (running on windows)

The plugins will move to the boo repository soon. Stay tuned.

Working for db4o is really a joy. Not only do I get to work on a great product which is changing and will keep changing the way many people develop software, but I also get to work and talk with very smart guys very frequently. And now for the extra sugar they are sponsoring my trip to the PDC. So if you are going to the PDC and want to exchange some ideas on object oriented databases, agile programming languages, agile software development, drumming and/or lucid dreaming, let me know!

"Then I say, is it really necessary to have such strong typing in the name of tersity? I don't think it is, necessarily. I actually think that an even better world is a world where (a language) is terse, but it is still strongly typed."
Agreed, Anders :)

Byecycle is an auto-arranging dependency analysis plug-in for Eclipse. Its goal is to make you feel sick when you see bad code and to make you feel happy when you see good code. Visit the Byecycle home page for more details. Yeah. This got me thinking. Thanks.

I had to write an eclipse plugin some time ago. I just can't live without TDD. TDD with eclipse plugins is a pain in the neck because of eclipse startup and execution times. On my old development machine (a PIV 2.2GHz notebook), the entire test suite takes 125 seconds to run. After weeks of intensive development I figured I had two options: either buy a faster workstation or go nuts. I bought myself an AMD64 3200+ workstation. Here are the time figures for the different development environments I tried:

Notebook, Windows XP Home, Eclipse 3.1, jdk 1.5.0_01 (client vm): 125.7s
Workstation, Windows XP Pro, Eclipse 3.1, jdk 1.5.0_01 (client vm): 57.8s
Workstation, Windows XP Pro, Eclipse 3.1, jdk 1.5.0_01 (server vm): 61.3s
Workstation, SuSE 9.2 64bit, Eclipse 3.1, jdk 1.5.0_01 64bit (server vm): 61s
Workstation, SuSE 9.2 32bit inside VMWare running on SuSE 9.2 64bit, jdk 1.5.0_02 32bit (client vm): 59.2s

I find it absolutely amazing that running eclipse inside a 32bit SuSE 9.2 VMWare under SuSE 9.2 64bit runs at the same speed as eclipse under Windows XP Pro. These VMWare guys really know what they are doing. Quick update: I've just tried running the same SuSE 9.2 32bit inside a VMWare running on Windows XP Pro 64bit and the full test suite took only 57s!!!! So it seems that I've finally found my development environment setup. Spread!

What happens when you mix a developer looking for some fun, a great open source object database engine and a wrist friendly programming language? I tell you what happens: db4o boo object browser

"You have to understand that most of these people are not ready to be unplugged.
And many of them are so inured, so hopelessly dependent on the system, that they will fight to protect it. ... Yet, their strength and their speed are still based in a world that is built on rules. Because of that, they will never be as strong or as fast as you can be."

Yes, Klaus is finally blogging.

I just can't buy this "we don't have resources to fix it before Whidbey" shit. Come on!!!! You're f$#$@#$ Microsoft! You have billions of dollars. Hire someone else to fix it. What are you looking at??!?!?

Happy new year everybody! Some things boo related I'd like to see in 2005.

metaboo? I want to have the Boo.Lang.Compiler assembly "ported" to boo. Boo.Lang.Interpreter would be merged with it in the process and we might even gain eval, compile, exec, locals builtins in the process :) In the process we would also incorporate the necessary improvements to finally be able to use the case insensitive dialect (or the white space agnostic one). The current system fails to infer types on recursive definitions. I'd really like to hear some thoughts on this one. But my intention is to start boo in boo from day one having .net 2.0 (mono 2.0 profile) as a requirement.

After spending some time with java and eclipse it was clear to me that it is all about the experience. The eclipse IDE can make even a verbose and limited language like java seem very useable. What I really would like to see in a boo focused IDE: The nice thing is that this is so easy to implement :D Ok, maybe not so easy but who doesn't like a challenge for a change? I'm not sure if SharpDevelop is the way to go though... This looks interesting. Interesting... What about a boo language service for VS.NET? Class viewer, code folding, code completion and a boo interactive console.

Life is good indeed! Kudos to Daniel Grunwald.

A compiler pipeline is a named sequence of compiler steps. Pipelines are like compilation recipes: CompileToFile, CompileToMemory, Parse, ResolveExpressions, etc.
The good thing about the pipeline abstraction is that it is pretty easy to hook into the compilation process to do all sorts of things (see how easy it was to implement the interactive interpreter). The bad thing about the way pipelines are implemented today is that if you want to customize, let's say, the parser (maybe you don't like syntactically significant white space), you have to change every single pipeline your application is going to use.

For a clear example of why the current pipeline architecture is bad, think of the Boo Explorer application. It uses 3 pipeline definitions: Now imagine that you want to use your custom parser step with boox. How would you do it? See the problem? You would have to change code in a lot of places (inside and outside boox!) just to plug in your custom parser. Another use case would be making boo case insensitive (Hi Doug!).

I was talking about this with Carl Rosenberger (db4o) and he asked me what the problem was :). He replied with something along these lines: "just use some sort of dictionary to hold all the configurable services and make the pipelines aware of it". Simple.

Let's baptize this glorified dictionary then: Compiler Profile. A compiler profile represents a set of services that should be used by any given compiler pipeline. Each service in a profile is identified by a well known name (it's not yet clear to me if the name will be something like a string or a System.Type reference). I see the following services right now: The good thing about this model is that a specific service or profile could even be specified in the command line:

booc -profile:caseinsensitive foo.boo
boox -service parser:MyCustomParser,MyCustomAssembly foo.boo

The way I see it, it should be very easy to write a, let's say, vb.net compiler based on boo just by providing the right services. I plan to include this profile thing in the next major refactoring when I'll translate Boo.Lang.Compiler from c# to boo. Thoughts?
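The profile-as-dictionary idea sketched above can be made concrete in a few lines. This is a hypothetical Python illustration (names like CompilerProfile and provide are made up, not the actual boo API): pipelines ask the profile for services by well known name, so swapping the parser touches exactly one place.

```python
# Hypothetical sketch of the "compiler profile" model: a registry of
# named services that every pipeline consults at run time.
class CompilerProfile:
    def __init__(self, **services):
        self._services = dict(services)

    def provide(self, name, service):
        # swapping a service here affects every pipeline using this profile
        self._services[name] = service

    def service(self, name):
        return self._services[name]

class Pipeline:
    def __init__(self, profile, step_names):
        self.profile = profile
        self.steps = step_names

    def run(self, source):
        # each step is looked up by name, never referenced directly
        for name in self.steps:
            source = self.profile.service(name)(source)
        return source

# toy services: a "parser" that tokenizes and an "emitter" that counts tokens
default = CompilerProfile(parser=str.split, emitter=lambda toks: len(toks))
compile_to_memory = Pipeline(default, ["parser", "emitter"])
print(compile_to_memory.run("print hello world"))  # 3

# a case-insensitive variant only swaps one service:
default.provide("parser", lambda s: s.lower().split())
```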
From the looney who came up with prevayler: Sovereign Computing. Are you ready to join the crowd and make it happen? Too scared? Sovereign Computing

Nice teaser but I just can't wait to see Sam Rockwell playing Zaphod.

$ ./booish
>>> load("d:/dotnet/ikvm/bin/IKVM.GNU.Classpath.dll")
>>> import java.lang
>>> System.getProperty("java.vendor")
'Jeroen Frijters'
>>>

rodrigob@bambook /cygdrive/d/dev/boo/build
$ ./ikvm -classpath "booish.jar;." jbooish
>>> v = Class.forName("java.util.Vector").newInstance()
[]
>>> v.add("Hello")
true
>>> v.add("Java!")
true
>>> v
[Hello, Java!]
>>>

import cli.booish.InteractiveInterpreter;

public class jbooish {
    public static void main(String[] args) {
        InteractiveInterpreter interpreter = new InteractiveInterpreter();
        interpreter.set_RememberLastValue(true);
        interpreter.SetValue("Class", jbooish.class);
        interpreter.ConsoleLoopEval();
    }
}

def readChars(fname as string):
    using reader = File.OpenText(fname):
        return reader.ReadToEnd().ToCharArray()

Q. How do you get a list of files matching a specific wildcard?
A. java.io.File.listFiles(FilenameFilter)

FilenameFilter??? What's wrong with Directory.GetFiles("*.boo")? And by the way we don't even have a java.io.Directory, it's everything on java.io.File. F#?%#!a a... Ok, let's just breathe... I'll need my yoga classes more than ever now that I'll be doing lots of java.

I'm working a lot with the Eclipse JDT core model lately (yes, yes, I confess, I'm doing lots of java lately :-)). I find it amazing how the Eclipse AST model is similar to one of the first AST models I designed for the boo compiler. Even some classes and interfaces have the same name (!!). The déjà vu climax was org.eclipse.jdt.core.dom.IBinding. It's been a long time (+8 months) since I renamed IBinding to IEntity but still... Do we have an explanation to that?
Strikingly similar artifacts created independently of each other. Patterns of software? Or patterns of nature itself?

After tweaking the EmitAssembly.EmitDebugInfo code a little I was able to get the debugger running. Not for me but I know some people that just can't live without decent debugging capabilities... Now they can try boo and debug away. Here's a shot of examples/fibonacci.boo being debugged: The source code is here

$ booi extras/booish/booish.boo
>>> e = i*2 for i in range(3)
>>> print(join(e))
0 2 4
>>> a = 2**4
>>> print(a)
16
>>> ^Z

Interactive language shells are cool. If you don't agree go read something else. An interactive boo shell is probably the number one request I receive and finally the good news is that yesterday as I was leaving the elevator I had this very clear vision on how I could implement it. The great news is that it won't be too hard. In fact, if you compile boo from the subversion repository you can already take a look at the initial design of the InteractiveInterpreter class in the src/booish directory. Sample code? ok. Here it goes:

interpreter = InteractiveInterpreter()
interpreter.SetValue("name", "boo")
interpreter.SetValue("age", 3)
interpreter.Eval("""
print(name)
print(age)
age += 1
""")
assert 4 == interpreter.GetValue("age")

interpreter.Eval("age = 42")
assert 42 == interpreter.GetValue("age")

interpreter.Eval("""
value = 3
print(value*2)
""")
assert 3 == interpreter.GetValue("value")

interpreter.Eval("x2 = { return value }")
x2 as callable = interpreter.GetValue("x2")
assert 3 == x2()

Webdesigning me? Nah... Just the old javascript/CSS/XHTML reality check every now and then.
So here it goes to Leonardo:

<html>
<head>
<style>
.body
{
    font-family: verdana;
    font-size: 10pt;
}

.menuItem, .menuItemHover
{
    background-color: blue;
    font-weight: bold;
    padding: 5px;
    color: white;
    display: inline;
    cursor: pointer;
}

.menuItemHover
{
    background-color: lightblue;
}
</style>

<script language="javascript">
document.onmouseover = function(ev)
{
    var element = getEventTarget(ev);
    if ("menuItem" == element.className)
    {
        element.className = "menuItemHover";
    }
}

document.onmouseout = function(ev)
{
    var element = getEventTarget(ev);
    if ("menuItemHover" == element.className)
    {
        element.className = "menuItem";
    }
}

function getEventTarget(ev)
{
    return (ev && ev.target) || window.event.srcElement;
}
</script>
</head>
<body>

<div id="menuBar">
<div class="menuItem">
<img src="images/arrow.gif" />
About US
</div>
<div class="menuItem">
<img src="images/arrow.gif" />
Products
</div>
</div>

</body>
</html>

Tough weekend but I've finally got generator methods to work. Integrating generator methods with generator expressions with closures was a good challenge... Next! :) And before I go I shall leave you with this amazing piece of cra... hmm... code:

b = 5
g = def ():
    a = 0
    yield { return a }
    yield { return ++a }
    yield { a = b; return { return (a+b)*i for i in range(3) } }
    assert 5 == a

for f in g():
    item = f()
    if item isa callable:
        b = 3
        print(join(cast(callable, item)(), ", "))
    else:
        print(item)

0
1
0, 8, 16

The Infinite Cat Project. I'm still deciding on which of my cats will go first...

I've finally set some time aside to write about boo's type inference mechanism.

Hey you! Yeah, you, the ones deciding the fate of the empire during the next few days.
Have in mind that whatever you decide will have a huge impact on the life of everyone else on this planet (and maybe the universe, how greedy can those in power become?) so please vote against Bush! And if you haven't made up your mind yet or were thinking on not voting just because everyone sucks, please, be the voice of those without one and vote against Bush! Please? Grab yours here!

Tons of improvements in this version, true closures are my favorite one since they allow ruby-like things such as:

def each(items, closure as callable):
    for item in items:
        closure(item)

items = [1, 2, 3, 4, 5]
value = 0
each(items) do (item as int):
    value += item
print(value)

Whatever happened to C++?

I had to create a little (because boo code tends to be terse) console utility for a customer. The utility had to deal with lots of different command line options, that kind of thing. So I saw it as my opportunity to finally meet Mono.GetOptions. What a nice tool! Fire this script with booi GetOptions.boo

Simply amazing! Just watch and check the coincidences for yourself. It's quite a trip.

Edd Dumbill talks about one of the IronPython niceties here. boo also allows it and it works not only with properties but with events and fields too:

import System

class Button:
    [property(Text)]
    _text = ""

    public Width = 0

    event Click as EventHandler

    def RaiseClick():
        self.Click(self, EventArgs.Empty)

b = Button(Text: "Click me", Width: 10, Click: )
print(b.Text)
print(b.Width)
b.RaiseClick()

Cool or what?
callable Function()

def foo():
    pass

# the following line silently generates a
# new (anonymous) callable type structurally
# equivalent to the declared callable Function
# above
a = foo
fn as Function
fn = a
fn()

callable Function()

def click():
    print("clicked!")

fn as Function = click
handler as EventHandler = fn

I've finally had the chance to watch Fahrenheit 9/11 last night. One of the funniest pieces I've seen in a long time. The Americans in the line behind us didn't seem to understand why I laughed all the way through. Homer Simpson puts it best: "It's funny because it's true!".

I have never heard anything like that. The Sun Ship of John Coltrane, Elvin Jones, McCoy Tyner and Jimmy Garrison took me far far away to the deepest and darkest corners of the universe. Ohm. An even flow of love that irradiates from the very essence of the existence.

The 0.6 version of IronPython was just released. I still don't know how to feel about Jim being hired by MS though. Anyway, I've just played a little with it and it looks great!
The distro comes with a System.Windows.Forms example that I decided to translate to boo just for the fun of it. The future seems brighter in .net land.

If you haven't read it yet here's another great essay by Paul Graham. Graham says: The programmers you'll be able to hire to work on a Java project won't be as smart as the ones you could get to work on a project written in Python. s/Python/boo/ and it is still true! :)

Soooo... I've decided to spend most of my free time this weekend polishing the way test cases for boo are written. Hopefully I made it so easy that new feature requests and bug reports will already come with lots of attached test cases. Thoughts/comments/suggestions/criticisms? Let me know.

The guys had a surprise for me today: From left to right: Leo, Andre, me and Carlos. Oh, guys! I hate you so much. :)

boo users on windows can already count on a simple but very effective way to edit and run their boo scripts: boox. boox even features code completion! (You're the man, GB!). The somewhat bad news (at least for me) is that I'm no longer a happy boox user since my notebook went to the repair shop two weeks ago. The good news is that now I got a real chance and the motivation to create a similar tool for my fellow linux users. I hope to finally learn some gtk programming techniques along the way. So here's my first shot at creating Boo Explorer for linux: a dumb GtkSourceView based editor with boo syntax highlighting enabled.
import System
import Gtk from "gtk-sharp"
import GtkSourceView from "gtksourceview-sharp"

def window_Delete(sender, args as DeleteEventArgs):
    Application.Quit()
    args.RetVal = true

Application.Init()

booSourceLanguage = SourceLanguagesManager().GetLanguageFromMimeType("text/x-boo")
buffer = SourceBuffer(booSourceLanguage, Highlight: true)
sourceView = SourceView(buffer, ShowLineNumbers: true, AutoIndent: true)

window = Window("Simple Boo Editor",
                DefaultWidth: 600,
                DefaultHeight: 400,
                DeleteEvent: window_Delete)
window.Add(sourceView)
window.ShowAll()

Application.Run()

Now I just need to learn how to add a menubar, a toolbar, a statusbar, a tree view (document outline), how to handle accelerator keys and then I'm pretty much set :-)
http://docs.codehaus.org/display/~bamboo
This is about how fork and exec work on Unix. You might already know about this, but some people don't, and I was surprised when I learned it a few years back!

So. You want to start a process. We've talked a lot about system calls on this blog – every time you start a process, or open a file, that's a system call. So you might think that there's a system call like this

start_process(["ls", "-l", "my_cool_directory"])

This is a reasonable thing to think and apparently it's how it works in DOS/Windows. I was going to say that this isn't how it works on Linux. But! I went and looked at the docs and apparently there is a posix_spawn system call that does basically this. Shows what I know. Anyway, we're not going to talk about that.

fork and exec

posix_spawn on Linux is behind the scenes implemented in terms of 2 system calls called fork and exec (actually execve), which are what people usually actually use anyway. On OS X apparently people use posix_spawn and fork/exec are discouraged! But we'll talk about Linux.

Every process in Linux lives in a "process tree". You can see that tree by running pstree. The root of the tree is init, with PID 1. Every process (except init) has a parent, and a process can have many children.

So, let's say I want to start a process called ls to list a directory. Do I just have a baby ls? No! Instead of having children, what I do is I have a child that is a clone of myself, and then that child gets its brain eaten and turns into ls. Really.

We start out like this:

my parent
|- me

Then I run fork(). I have a child which is a clone of myself.

my parent
|- me
|-- clone of me

Then I organize it so that my child runs exec("ls"). That leaves us with

my parent
|- me
|-- ls

and once ls exits, I'll be all by myself again. Almost

my parent
|- me
|-- ls (zombie)

At this point ls is actually a zombie process! That means it's dead, but it's waiting around for me in case I want to check on its return value (using the wait system call.)
Once I get its return value, I will really be all alone again.

my parent
|- me

what fork and exec looks like in code

This is one of the exercises you have to do if you're going to write a shell (which is a very fun and instructive project! Kamal has a great workshop on Github about how to do it:) It turns out that with a bit of work & some C or Python skills you can write a very simple shell (like bash!) in C or Python in just a few hours (at least if you have someone sitting next to you who knows what they're doing, longer if not :)). I've done this and it was awesome.

Anyway, here's what fork and exec look like in a program. I've written fake C pseudocode. Remember that fork can fail!

int pid = fork();
// now i am split in two! augh!
// who am I? I could be either the child or the parent
if (pid == 0) {
    // ok I am the child process
    // ls will eat my brain and I'll be a totally different process
    exec(["ls"])
} else if (pid == -1) {
    // omg fork failed this is a disaster
} else {
    // ok i am the parent
    // continue my business being a cool program
    // I could wait for the child to finish if I want
}

ok what does it mean for your brain to be eaten

Processes have a lot of attributes! You have

- open files (including open network connections)
- environment variables
- signal handlers (what happens when you run Ctrl+C on the program?)
- a bunch of memory (your "address space")
- registers
- an "executable" that you ran (/proc/$pid/exe)
- cgroups and namespaces ("linux container stuff")
- a current working directory
- the user your program is running as
- some other stuff that I'm forgetting

When you run execve and have another program eat your brain, actually almost everything stays the same! You have the same environment variables and signal handlers and open files and more. The only thing that changes is, well, all of your memory and registers and the program that you're running. Which is a pretty big deal.
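The fake pseudocode above maps directly onto real system calls. Here is a runnable Python version of the same fork/exec/wait dance (POSIX only — os.fork, os.execvp, and os.waitpid are thin wrappers over fork, execve, and waitpid):

```python
import os

pid = os.fork()  # now there are two copies of this program running
if pid == 0:
    # child: exec replaces this process image with echo;
    # on success, execvp never returns
    os.execvp("echo", ["echo", "hello from the child"])
else:
    # parent: wait for the child and collect its exit status,
    # which also reaps the zombie
    _, status = os.waitpid(pid, 0)
    print("child exited with", os.WEXITSTATUS(status))
```

Running it prints the child's message followed by the parent's report of exit status 0.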
why is fork not super expensive (or: copy on write)

You might ask "julia, what if I have a process that's using 2GB of memory! Does that mean every time I start a subprocess all that 2GB of memory gets copied?! That sounds expensive!"

It turns out that Linux implements "copy on write" for fork() calls, so that for all the 2GB of memory in the new process it's just like "look at the old process! it's the same!". And then if either process writes any memory, then at that point it'll start copying. But if the memory is the same in both processes, there's no need to copy!

why you might care about all this

Okay, julia, this is cool trivia, but why does it matter? Do the details about which signal handlers or environment variables get inherited or whatever actually make a difference in my day-to-day programming?

Well, maybe! For example, there's this delightful bug on Kamal's blog. It talks about how Python sets the signal handler for SIGPIPE to ignore. So if you run a program from inside Python, by default it will ignore SIGPIPE! This means that the program will behave differently depending on whether you started it from a Python script or from your shell! And in this case it was causing a weird bug!

So, your program's environment (environment, signal handlers, etc.) can matter! It inherits its environment from its parent process, whatever that was! This can sometimes be a useful thing to know when debugging.
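One of those inherited attributes is easy to demonstrate: environment variables survive the fork/exec sequence, so a child process sees whatever the parent had set. A small Python sketch (the variable name here is made up for the demo):

```python
import os
import subprocess
import sys

# set a variable in the parent's environment; the child inherits it
# across fork+exec unless the environment is explicitly replaced
os.environ["DEMO_INHERITED_VAR"] = "hello"

child = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ['DEMO_INHERITED_VAR'])"],
    capture_output=True, text=True,
)
print(child.stdout.strip())  # hello
```

Passing `env={...}` to subprocess.run is how you would opt out of this inheritance and give the child a fresh environment instead.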
https://jvns.ca/blog/2016/10/04/exec-will-eat-your-brain/
Module 3 — Fundamental Analysis
Chapter 1

1.1 – Overview

Fundamental Analysis (FA) is a holistic approach to study a business. When an investor wishes to invest in a business for the long term (say 3 – 5 years) it becomes absolutely essential to understand the business from various perspectives. It is critical for an investor to separate the daily short term noise in the stock prices and concentrate on the underlying business performance. Over the long term, the stock prices of a fundamentally strong company tend to appreciate, thereby creating wealth for its investors.

We have many such examples in the Indian market. To name a few, one can think of companies such as Infosys Limited, TCS Limited, Page Industries, Eicher Motors, Bosch India, Nestle India, TTK Prestige etc. Each of these companies has delivered on average over 20% compounded annual growth rate (CAGR) year on year for over 10 years. To give you a perspective, at a 20% CAGR the investor would double his money in roughly 3.5 years. The higher the CAGR, the faster the wealth creation process. Some companies such as Bosch India Limited have delivered close to 30% CAGR. Therefore, you can imagine the magnitude, and the speed at which wealth is created if one were to invest in fundamentally strong companies.

Here are long term charts of Bosch India, Eicher Motors, and TCS Limited that can set you thinking about long term wealth creation. Do remember these are just 3 examples amongst the many that you may find in Indian markets. At this point you may be of the opinion that I am biased as I am selectively posting charts that look impressive. You may wonder how the long term charts of companies such as Suzlon Energy, Reliance Power, and Sterling Biotech may look? Well, here are the long term charts of these companies: These are just 3 examples of the wealth destructors amongst the many you may find in the Indian Markets.
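The CAGR arithmetic quoted above is easy to sanity-check with a few lines of Python (illustrative arithmetic only, not part of the original chapter):

```python
import math

# growth of 1 unit of money at a 20% CAGR
cagr = 0.20
value_after_10y = (1 + cagr) ** 10
years_to_double = math.log(2) / math.log(1 + cagr)

print(round(value_after_10y, 2))   # 6.19 — money multiplies ~6x in 10 years
print(round(years_to_double, 1))   # 3.8 — exact doubling time; the
                                   # "rule of 72" shortcut gives 72/20 = 3.6
```

The exact doubling time comes out a little under 4 years, consistent with the rough "3.5 years" figure the chapter cites.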
The trick has always been to separate the investment grade companies, which create wealth, from the companies that destroy wealth. All investment grade companies have a few common attributes that set them apart. Likewise, all wealth destructors have a few common traits which are clearly visible to an astute investor. Fundamental Analysis is the technique that gives you the conviction to invest for the long term by helping you identify these attributes of wealth creating companies. 1.3 – I’m happy with Technical Analysis, so why bother about Fundamental Analysis? Technical Analysis (TA) helps you garner quick short term returns. It helps you time the market for a better entry and exit. However, TA is not an effective approach to create wealth. Wealth is created only by making intelligent long term investments. That said, both TA & FA must coexist in your market strategy. To give you a perspective, let me reproduce the chart of Eicher Motors: Let us say a market participant identifies Eicher Motors as a fundamentally strong stock to invest in, and therefore invests his money in the stock in the year 2006. As you can see, the stock made a relatively negligible move between 2006 and 2010. The real move in Eicher Motors started only from 2010. This also means an FA based investment in Eicher Motors did not give the investor any meaningful return between 2006 and 2010. The market participant would have been better off taking short term trades during this time. Technical Analysis helps the investor in taking such short term trading bets. Hence both TA & FA should coexist as a part of your market strategy. In fact, this leads us to an important capital allocation strategy called “The Core Satellite Strategy”. Let us say a market participant has a corpus of Rs.500,000/-. This corpus can be split into two unequal portions; for example, the split can be 60 – 40. The 60% of capital, which is Rs.300,000/-, can be invested for a long term period in fundamentally strong companies.
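The Core Satellite split just described can be sketched in a few lines. This is an illustrative Python sketch only; the variable names are mine, and the one-year projection simply applies the return band quoted in the chapter to the core portion:

```python
# Core Satellite split: 60% long-term core (FA-driven), 40% satellite (TA-driven)
corpus = 500_000
core = corpus * 60 // 100    # Rs.300,000/- into fundamentally strong companies
satellite = corpus - core    # Rs.200,000/- for active short term trading

# Projected core value after one year at the stated 12%-15% CAGR band
core_after_1y_low = core * 1.12
core_after_1y_high = core * 1.15
```

The exact 60-40 ratio is the chapter's example; in practice the split is a personal choice between long-term conviction and trading capital.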
This 60% of the investment makes up the core of the portfolio. One can expect the core portfolio to grow at a rate of at least 12% to 15% CAGR on a year on year basis. The balance 40% of the amount, which is Rs.200,000/-, can be utilized for active short term trading using Technical Analysis on equity, futures, and options. The satellite portfolio can be expected to yield at least 10% to 12% absolute return on a yearly basis. 1.4 – Tools of FA The tools required for fundamental analysis are extremely basic, most of which are available for free. Specifically you would need the following: 1. Annual report of the company – All the information that you need for FA is available in the annual report. You can download the annual report from the company’s website for free 2. Industry related data – You will need industry data to see how the company under consideration is performing with respect to the industry. Basic data is available for free, and is usually published on the industry association’s website 3. Access to news – Daily news helps you stay updated on the latest developments happening both in the industry and the company you are interested in. A good business newspaper or services such as Google Alerts can help you stay abreast of the latest news 4. MS Excel – Although not free, MS Excel can be extremely helpful in fundamental calculations. With just these four tools, one can develop fundamental analysis that can rival institutional research. You can believe me when I say that you don’t need any other tool to do good fundamental research. In fact even at the institutional level the objective is to keep the research simple and logical. Mindset of an Investor – Tarun: He has a slightly different opinion about the situation. His thought process is as below: he feels expecting RBI to cut the rates is wishful thinking.
In fact he is of the opinion that nobody can clearly predict what RBI is likely to do. He also identifies that the volatility in the markets is high, and hence he believes that option contracts are trading at very high premiums. The qualitative aspect mainly involves understanding the non-numeric aspects of the business, and this includes many factors. Over the next few chapters we will understand how to read the basic financial statements, as published in the annual report. As you may know, the financial statement is the source for all the number crunching required in the analysis of the quantitative aspects. Since the annual report is published by the company itself, the investor is the intended audience for the annual report. Annual reports should provide the most pertinent information to an investor and should also communicate the company’s primary message. For an investor, the annual report must be the default option to seek information about a company. Of course there are many media websites claiming to give financial information about the company; however, investors should avoid seeking information from such sources. Remember, the information is more reliable if we get it directly from the company. The objective of this chapter is to give you a brief orientation on how to read an annual report. Running through each and every page of an AR is not practical; however, I would like to share some insights into how I personally read through an AR, and also help you understand what kind of information is required and what information we can ignore. For a better understanding, I would urge you to download the Annual Report of ARBL and go through it simultaneously as we progress through this chapter.
The annual report typically contains the following sections – Financial Highlights, The Management Statement, Management Discussion & Analysis, 10 year Financial Highlights, Corporate Information, Director’s Report, Report on Corporate Governance, the Financial Section, and the Notice. Note, no two annual reports are the same; they are all made to suit the company’s requirements, keeping in perspective the industry they operate in. However, some of the sections are common across annual reports. The details that you see in the Financial Highlights section are basically an extract from the company’s financial statements. Along with the extracts, the company can also include a few financial ratios, which are calculated by the company itself. I briefly look through this section to get an overall idea, but I do not like to spend too much time on it. The reason for looking at this section only briefly is that I would anyway calculate these and many other ratios myself, and while I do so, I would gain greater clarity on the company and its numbers. Needless to say, over the next few chapters we will understand how to read and understand the financial statements of the company and also how to calculate the financial ratios. The next two sections, i.e. the ‘Management Statement’ and ‘Management Discussion & Analysis’, are quite important. I spend time going through these sections. Both these sections give you a sense of what the management of the company has to say about their business and the industry in general. As an investor or as a potential investor in the company, every word mentioned in these sections is important. In fact some of the details related to the ‘Qualitative aspects’ (as discussed in chapter 2) can be found in these two sections of the AR. One example that I explicitly remember was reading through the chairman’s message of a well established tea manufacturing company.
In his message, the chairman was talking about a revenue growth of nearly 10%; however, the historical revenue numbers suggested that the company’s revenue was growing at a rate of 4-5%. Clearly in this context, the growth rate of 10% seemed like a celestial move. This also indicated to me that the man at the top may not really be in sync with the ground reality, and hence I decided not to invest in the company. Retrospectively, when I look back at my decision not to invest, it was probably the right decision. Here is the snapshot of Amara Raja Batteries Limited; I have highlighted a small part that I think is interesting. I would encourage you to read through the entire message in the Annual Report. Moving ahead, the next section is the ‘Management Discussion & Analysis’ or ‘MD&A’. This, according to me, is where the management discusses the trends in the industry and what they expect for the year ahead. This is an important section as we can understand what the company perceives as threats and opportunities in the industry. Most importantly, I read through this and also compare it with its peers to understand if the company has any advantage over its peers. For example, if Amara Raja Batteries Limited is a company of interest to me, I would read through this part of the AR and would also read through what Exide Batteries Limited has to say in their AR. Remember, until this point the discussion in the Management Discussion & Analysis is broad based and generic (global economy, domestic economy, and industry trends). However, going forward, the company discusses various aspects related to its own business – how the business performed across various divisions, how it fared in comparison to the previous year etc. The company in fact gives out specific numbers in this section. After discussing these in the ‘Management Discussion & Analysis’, the annual report includes a series of other reports such as – Human Resources report, R&D report, Technology report etc.
Each of these reports is important in the context of the industry the company operates in. For example, if I am reading through a manufacturing company’s annual report, I would be particularly interested in the human resources report to understand if the company has any labour issues. If there are serious signs of labour issues, it could potentially lead to the factory being shut down, which is not good for the company’s shareholders. Consider CRISIL’s holding structure: 1. Standard & Poor’s (S&P), a US based rating agency, holds a 51% stake in CRISIL. Hence S&P is the ‘Holding company’ or the ‘Promoter’ of CRISIL 2. The balance 49% of the shares of CRISIL is held by the public and other financial institutions 3. However, S&P itself is a 100% subsidiary of another company called ‘The McGraw-Hill Companies’. This means McGraw Hill fully owns S&P, and S&P owns 51% of CRISIL. So how does a consolidated P&L work? Well, this is quite simple – if CRISIL on its own made a loss of Rs.1000 Crs, but its subsidiary Irevna made a profit of Rs.700 Crs, then the overall P&L of CRISIL is (Rs.1000 Crs) + Rs.700 Crs = (Rs.300 Crs). An investor obviously would be interested to know how ARBL arrived at Rs.17.081 Crs as their share capital. To figure this out, one needs to look into the associated schedule (note number 2). Please look at the snapshot below. Of course, considering you may be new to financial statements, jargon like share capital may not make much sense. However, the financial statements are extremely simple to understand, and over the next few chapters you will learn how to read the financial statements and make sense of them. But for now, do remember that the main financial statement gives you the summary and the associated schedules give the details pertaining to each line item. A typical P&L statement contains: 1. The revenue of the company for the given period (yearly or quarterly) 2. The expenses incurred to generate the revenues 3. Tax and depreciation 4. The earnings per share. A few things to note about the way the statement is presented: 2. All currency is denominated in Rupee Million. Note – 1 Million Rupees is equal to Ten Lakh Rupees.
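Going back to the CRISIL consolidation example above, the arithmetic is simply a signed sum of the standalone and subsidiary results. A minimal Python sketch (losses written as negative numbers, a common convention; the variable names are mine):

```python
# CRISIL consolidation example from the text (figures in Rs. Crs)
standalone_pnl = -1000   # CRISIL on its own made a loss of Rs.1000 Crs
subsidiary_pnl = 700     # its subsidiary Irevna made a profit of Rs.700 Crs

# Consolidated P&L: -1000 + 700 = -300, i.e. a loss of (Rs.300 Crs)
consolidated_pnl = standalone_pnl + subsidiary_pnl
```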
It is up to the company’s discretion to decide which unit they would prefer to express their numbers in 3. The particulars show all the main headings of the statement. Any associated note to the particulars is present in the note section (also called the schedule). An associated number is assigned to the note (Note Number) 4. By default, when companies report the numbers in the financial statement, they present the current year’s number in the left most column and the previous year’s number to the right. In this case the numbers are for FY14 (latest) and FY13. 1. Sale of storage batteries in the form of finished goods for the year FY14 is Rs.3523 Crs versus Rs.3036 Crs in FY13 2. Sale of storage batteries (stock in trade) is Rs.208 Crs in FY14 versus Rs.149 Crs in FY13. Stock in trade refers to finished goods of the previous financial year being sold in this financial year 3. Sale of home UPS (stock in trade) is at Rs.71 Crs in FY14 versus Rs.109 Crs in FY13 4. Net sales from the sale of products, adjusted for excise duty, amounts to Rs.3403 Crs, which matches the number reported in the P&L statement 5. Likewise you can notice the split up for revenue from services. The revenue number of Rs.30.9 Crs tallies with the number reported in the P&L statement 6. In the note, the company says the “Sale of Process Scrap” generated revenue of Rs.2.1 Crs. Note that the sale of process scrap is incidental to the operations of the company, hence it is reported as ‘Other operating revenue’ 7. Adding up all the revenue streams of the company, i.e. Rs.3403 Crs + Rs.30.9 Crs + Rs.2.1 Crs, gets us the Net revenue from operations = Rs.3436 Crs. 1. The financial statement provides information and conveys the financial position of the company 2. A complete set of financial statements includes the Profit & Loss Account, Balance Sheet and Cash Flow Statement 3. A fundamental analyst is a user of financial statements, and he just needs to know what the maker of the financial statements states 4.
The profit and loss statement gives the profitability of the company for the year under consideration 5. The P&L statement is an estimate, as the company can revise the numbers at a later point. Also, by default, companies publish data for the current year and the previous year side by side 6. The revenue side of the P&L is also called the top line of the company 7. Revenue from operations is the main source of revenue for the company 8. Other operating income includes revenue incidental to the business 9. The other income includes revenue from non operating sources 10. The sum of revenue from operations (net of duty), other operating income, and other income gives the ‘Net Revenue from Operations’ Module 3 — Fundamental Analysis Chapter 5 The first line item on the expense side is ‘Cost of materials consumed’; this is invariably the cost of the raw material that the company requires to manufacture finished goods. As you can see, the cost of raw material consumed is the largest expense incurred by the company. This expense stands at Rs.2101 Crs for FY14 and Rs.1760 Crs for FY13. Note number 19 gives the associated details for this expense, so let us inspect the same. As you can see, note 19 gives us the details of the material consumed. The company uses lead, lead alloys, separators and other items, all of which add up to Rs.2101 Crs. The next two line items talk about ‘Purchases of Stock in Trade’ and ‘Change in Inventories of finished goods, work-in-process & stock-in-trade’. Both these line items are associated with the same note (Note 20). Purchases of stock in trade refers to all the purchases of finished goods that the company buys towards conducting its business. This stands at Rs.211 Crs. I will give you more clarity on this line item shortly. A negative number against the change in inventories indicates that the company produced more batteries in FY14 than it managed to sell; this number should be seen in proportion to the overall sales and cost of sales.
Here is an extract of Note 20 which details the above two line items. The details mentioned in the above extract are quite straightforward and easy to understand. At this stage it may not be necessary to dig deeper into this note. It is good to know where the grand total lies. However, when we take up ‘Financial Modeling’ as a separate module we will delve deeper into this aspect. The next line item on the expense side is “Employee Benefit Expense”. This is quite intuitive as it includes the expense incurred in terms of the salaries paid, contribution towards provident funds, and other employee welfare expenses. This stands at Rs.158 Crs for FY14. Have a look at the extract of note 21 which details the ‘Employee Benefit Expense’. The next line item is the “Finance Cost / Finance Charges / Borrowing Costs”. Finance cost is the interest cost and other costs that an entity pays when it borrows funds. The interest is paid to the lenders of the company. The lenders could be banks or private lenders. The company’s finance cost stands at Rs.0.7 Crs for FY14. We will discuss more about debt and related matters when we take up the chapter on the balance sheet later. Following the finance cost, the next line item is “Depreciation and Amortization” costs, which stand at Rs.64.5 Crs. To understand depreciation and amortization we need to understand the concept of tangible and intangible assets. A tangible asset is one which has a physical form and provides an economic value to the company – for example a laptop, a printer, a car, plants, machinery, buildings etc. An intangible asset is something that does not have a physical form but still provides an economic value to the company, such as brand value and trademarks. An asset (tangible or intangible) has to be depreciated over its useful life. Useful life is defined as the period during which the asset can provide economic benefit to the company. For example, the useful life of a laptop could be 4 years.
Let us understand depreciation better with the help of the following example. Zerodha, a stock broking firm, generates Rs.100,000/- from the stock broking business. However, Zerodha incurred an expense of Rs.65,000/- towards the purchase of a high performance computer server. The economic life (useful life) of the server is expected to be 5 years. Now if you were to look into the earning capability of Zerodha, it appears that on one hand Zerodha earned Rs.100,000/- and on the other hand spent Rs.65,000/-, and therefore retained just Rs.35,000/-. This skews the earnings data for the current year and does not really reflect the true earning capability of the company. Remember, the asset, even though purchased this year, would continue to provide economic benefits over its useful life. Hence it makes sense to spread the cost of acquiring the asset over its useful life. This is called depreciation. This means instead of showing an upfront lump sum expense (towards the purchase of an asset), the company can show a smaller amount spread across the useful life of the asset. Thus Rs.65,000/- will be spread across the useful life of the server, which is 5 years. Hence 65,000 / 5 = Rs.13,000/- would be depreciated every year over the next five years. By depreciating the asset, we are spreading the upfront cost. Hence after the depreciation computation, Zerodha would now show its earnings as Rs.100,000 – Rs.13,000 = Rs.87,000/-. We can do a similar exercise for intangible assets. The depreciation equivalent for intangible assets is called amortization. Now here is an important idea – Zerodha depreciates the cost of acquiring an asset over its useful life. However, in reality there is an actual outflow of Rs.65,000/- paid towards the asset purchase. But now, it seems like the P&L is not capturing this outflow. As an analyst, how do we get a sense of the cash movement? Well, the cash movement is captured in the cash flow statement, which we will understand in the later chapters.
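The straight-line spreading described in the Zerodha server example is easy to express in code. An illustrative Python sketch (function and variable names are mine; the figures are the ones used in the example above):

```python
def straight_line_depreciation(cost: float, useful_life_years: int) -> float:
    """Annual charge when an asset's cost is spread evenly over its useful life."""
    return cost / useful_life_years

# The server example: Rs.65,000/- spread over 5 years
revenue = 100_000
server_cost = 65_000
annual_depreciation = straight_line_depreciation(server_cost, 5)  # Rs.13,000/-

# Earnings reported after the depreciation charge, not the full cash outflow
reported_earnings = revenue - annual_depreciation                 # Rs.87,000/-
```

Note that straight-line is only one depreciation method; the chapter uses it because it is the simplest way to see the idea of spreading cost over useful life.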
The last line item on the expense side is “other expenses” at Rs.434.6 Crs. This is a huge amount classified under ‘other expenses’, hence it deserves a detailed inspection. From the note it is quite clear that other expenses include manufacturing, selling, administrative and other expenses. The details are mentioned in the note. For example, Amara Raja Batteries Limited (ARBL) spent Rs.27.5 Crs on advertisement and promotional activities. Adding up all the expenses mentioned on the expense side of the P&L, it seems that Amara Raja Batteries has spent Rs.2941.6 Crs. The profit before exceptional items is therefore Rs.3482 – Rs.2941.6 = Rs.540.5 Crs, and after adjusting for exceptional items, PBT = Rs.540.5 – Rs.3.88 = Rs.536.6 Crs. The snapshot below (extract from the P&L) shows the PBT (Profit Before Tax) of ARBL. As you can see from the snapshot above, to arrive at the profit after tax (PAT) we need to deduct all the applicable tax expenses from the PBT. Current tax is the corporate tax applicable for the given year. This stands at Rs.158 Crs. Besides this, there are other taxes that the company has paid. All the taxes together add up to Rs.169.21 Crs. Deducting the tax amount from the PBT of Rs.536.6 Crs gives us the profit after tax (PAT) of Rs.367.4 Crs. The last line in the P&L statement talks about basic and diluted earnings per share. The EPS is one of the most frequently used statistics in financial analysis. EPS also serves as a means to assess the stewardship and management role performed by the company’s directors and managers. The earnings per share (EPS) is a very sacred number which indicates how much the company is earning per face value of the ordinary share. It appears that ARBL is earning Rs.21.51 per share. The detailed calculation is as shown below: The company indicates that there are 17,08,12,500 shares outstanding in the market. Dividing the total profit after tax number by the outstanding number of shares, we can arrive at the earnings per share number.
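The full waterfall from revenue down to earnings per share can be retraced from the figures quoted in the chapter. An illustrative Python sketch (variable names are mine; small rounding in the quoted figures gives an EPS of roughly Rs.21.50 against the chapter's stated Rs.21.51):

```python
# ARBL FY14 figures as quoted in the chapter (Rs. Crs unless stated otherwise)

# Revenue reconciliation from the notes: products (net) + services + scrap
net_revenue_from_operations = 3403.0 + 30.9 + 2.1    # = Rs.3436 Crs

# Profit waterfall down to earnings per share
total_revenue = 3482.0
total_expenses = 2941.6
exceptional_items = 3.88
total_taxes = 169.21
shares_outstanding = 170_812_500                     # 17,08,12,500 shares

pbt = total_revenue - total_expenses - exceptional_items   # ~Rs.536.5 Crs
pat = pbt - total_taxes                                    # ~Rs.367.3 Crs
eps = pat * 1e7 / shares_outstanding                       # 1 Cr = 1e7 rupees
```

Dividing a profit stated in crores by a share count in plain units requires the `1e7` conversion, which is easy to miss when redoing such calculations by hand.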
In this case, the EPS works out to Rs.21.51 per share. 5.4 – Conclusion Now that we have gone through all the line items in the P&L statement, let us relook at it in its entirety. Hopefully, the statement above should look more meaningful to you by now. Remember, almost all line items in the P&L statement will have an associated note. You can always look into the notes to seek greater clarity. Also, at this stage we have just understood how to read the P&L statement; we still need to analyze what the numbers mean. We will do this when we take up the financial ratios. Also, the P&L statement is very closely connected with the other two financial statements, i.e. the balance sheet and the cash flow statement. We will explore these connections at a later stage. 1. The expense part of the P&L statement contains information on all the expenses incurred by the company during the financial year 2. Each expense can be studied with reference to a note which you can explore for further information 3. Depreciation and amortization is a way of spreading the cost of an asset over its useful life 4. Finance cost is the cost of interest and other charges paid when the company borrows money for its capital expenditure 5. PBT = Total Revenue – Total Expense – Exceptional items (if any) 6. Net PAT = PBT – applicable taxes 7. EPS reflects the earning capacity of a company on a per share basis. Earnings are profit after tax and preferred dividends 8. EPS = PAT / Total number of outstanding ordinary shares Module 3 — Fundamental Analysis Chapter 6 The reserves and surplus on the balance sheet typically include: 1. Capital reserves – usually earmarked for long term projects. Clearly ARBL does not have much of an amount here. This amount belongs to the shareholders, but cannot be distributed to them 2. Securities premium reserve / account – this is where the premium over and above the face/par value of the shares sits. ARBL has Rs.31.18 Crs under this reserve 3. Surplus – this is where the profits of the company reside: 1. As per the last year’s (FY13) balance sheet, the surplus was Rs.829.8 Crs.
This is what is stated as the opening line under surplus. See the image below: 1. The current year’s (FY14) profit of Rs.367.4 Crs is added to the previous year’s closing balance of the surplus. A few things to take note of here: 1. Notice how the bottom line of the P&L is interacting with the balance sheet. This highlights a very important fact – all the three financial statements are closely related 2. Notice how the previous year’s balance sheet number is added to this year’s number. This highlights the fact that the balance sheet is prepared on a flow basis, adding the carried forward numbers year on year 2. The previous year’s balance plus this year’s profit adds up to Rs.1197.2 Crs. The company can choose to apportion this money for various purposes. 1. The first thing a company does is transfer some money from the surplus to the general reserves so that it will come in handy for future use. They have transferred close to Rs.36.7 Crs for this purpose 2. After transferring to the general reserves, they have distributed Rs.55.1 Crs as dividends, over which they have to pay Rs.9.3 Crs as dividend distribution tax 3. After making the necessary apportionments, the company has Rs.1095.9 Crs as the closing balance of the surplus. This, as you may have guessed, will be the opening balance for next year’s (FY15) surplus account. 1. A Balance sheet, also called the Statement of Financial Position, is prepared on a flow basis and depicts the financial position of the company at any given point in time. It is a statement which shows what the company owns (assets) and what the company owes (liabilities) 2. A business will generally need a balance sheet when it seeks investors, applies for loans, submits taxes etc. 3. The balance sheet equation is Assets = Liabilities + Shareholders’ Equity 4. Liabilities are obligations or debts of a business from past transactions, and Share capital is the number of shares * face value 5.
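The surplus roll-forward just described (opening balance, plus the year's profit, minus the apportionments) can be retraced in a few lines. An illustrative Python sketch using the figures quoted in the section; the small difference against the stated closing figure comes from rounding in the quoted numbers:

```python
# ARBL surplus roll-forward (Rs. Crs)
opening_surplus = 829.8      # FY13 closing balance, carried forward
fy14_profit = 367.4          # bottom line of the FY14 P&L

available_surplus = opening_surplus + fy14_profit   # Rs.1197.2 Crs

# Apportionments made by the company out of the surplus
to_general_reserve = 36.7
dividends = 55.1
dividend_distribution_tax = 9.3

closing_surplus = (available_surplus - to_general_reserve
                   - dividends - dividend_distribution_tax)
# ~Rs.1096 Crs, the opening balance for the FY15 surplus account
```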
Reserves are the funds earmarked for a specific purpose, which the company intends to use in the future 6. Surplus is where the profits of the company reside. This is one of the points where the balance sheet and the P&L interact. Dividends are paid out of the surplus 7. Shareholders’ equity = Share capital + Reserves + Surplus. Equity is the claim of the owners on the assets of the company. It represents the assets that remain after deducting the liabilities. If you rearrange the Balance Sheet equation, Equity = Assets – Liabilities 8. Non-current liabilities, or long term liabilities, are obligations which are expected to be settled in not less than 365 days or 12 months of the balance sheet date 9. Deferred tax liabilities arise due to the discrepancy in the way depreciation is treated. Deferred tax liabilities are amounts of income taxes payable in the future with respect to taxable differences between the accounting books and the tax books 10. Current liabilities are the obligations the company plans to settle within 365 days / 12 months of the balance sheet date 11. In most cases both long and short term provisions are liabilities dealing with employee related matters 12. Total Liability = Shareholders’ Funds + Non Current Liabilities + Current Liabilities. Thus, total liabilities represent the total amount of money the company owes to others Module 3 — Fundamental Analysis Chapter 7 1. The Assets side of the Balance sheet displays all the assets the company owns 2. Assets are expected to give an economic benefit during their useful life 3. Assets are classified as Non-current and Current assets 4. The useful life of Non-current assets is expected to last beyond 365 days or 12 months 5. Current assets are expected to pay off within 365 days or 12 months 6. Assets inclusive of depreciation are called the ‘Gross Block’ 7. Net Block = Gross Block – Accumulated Depreciation 8. The sum of all assets should equal the sum of all liabilities.
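The balancing requirement can be restated as code. The identity itself is from the chapter; the numbers below are made up purely for illustration:

```python
# Hypothetical balance sheet figures (Rs. Crs) to illustrate the identity
share_capital = 17.1
reserves_and_surplus = 1345.6
non_current_liabilities = 290.0
current_liabilities = 860.0

shareholders_equity = share_capital + reserves_and_surplus
total_liabilities = (shareholders_equity
                     + non_current_liabilities + current_liabilities)

# Assets = Liabilities + Shareholders' Equity, so total assets must equal
# the total liabilities side for the sheet to balance
total_assets = total_liabilities

# Rearranged: Equity = Assets - Liabilities
equity_check = total_assets - (non_current_liabilities + current_liabilities)
```

`equity_check` lands back on `shareholders_equity`, which is exactly the rearranged form of the balance sheet equation stated above.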
Only then is the Balance sheet said to have balanced. 9. The Balance sheet and the P&L statement are inseparable. They are connected to each other in many ways. Module 3 — Fundamental Analysis Chapter 8 8.1 – Overview The Cash flow statement is a very important financial statement, as it reveals how much cash the company is actually generating. Is this information not revealed in the P&L statement, you may think? Well, the answer is both a yes and a no. Consider a simple coffee shop selling coffee and short eats. All the sales the shop makes are mostly on a cash basis, meaning if a customer wants to have a cup of coffee and a snack, he needs to have enough money to buy what he wants. On a particular day, assume the shop manages to sell Rs.2,500/- worth of coffee and Rs.3,000/- worth of snacks. It is evident that the shop’s income is Rs.5,500/- for that day. Rs.5,500/- is reported as revenue in the P&L, and there is no ambiguity with this. Now think about another business that sells laptops. For the sake of simplicity, let us assume that the shop sells only 1 type of laptop at a standard fixed rate of Rs.25,000/- per laptop. Assume that on a certain day the shop manages to sell 20 laptops, with some of the buyers paying cash and the rest promising to pay at a later date. If this shop were to show its total revenue in its P&L statement, you would just see a revenue of Rs.500,000/-, which may seem good on the face of it. However, how much of this Rs.500,000/- is actually present in the company’s bank account is not clear. A statement of cash flows should be presented as an integral part of an entity’s financial statements. Hence, in this context, evaluation of the cash flow statement is highly critical as it reveals, amongst other things, the true cash position of the company. Imagine a business, maybe a very well established fitness center (Talwalkars, Gold’s Gym etc.) with a sound corporate structure. What are the typical business activities you think a fitness center would have? Let me go ahead and list a few business activities: 1.
Operational activities (OA): Activities that are directly related to the daily core business operations are called operational activities. Typical operating activities include sales, marketing, manufacturing, technology upgrades, resource hiring etc. 2. Investing activities (IA): Activities pertaining to the investments that the company makes with the intention of reaping benefits at a later stage. Examples include parking money in interest bearing instruments, investing in equity shares, investing in land, property, plant and equipment, intangibles and other non current assets etc. 3. Financing activities (FA): Activities pertaining to how the company raises and repays money, such as fresh borrowings, repayment of debt, and payment of dividends. Every business activity can be classified into one of these three categories / baskets. Keeping this in perspective, we will now understand, for the example given above, how the various activities listed would impact the cash balance and how they would impact the balance sheet. The effect of each activity on the cash balance can be summarized as follows: 1. Whenever the liabilities of the company increase, the cash balance also increases; this means if the liabilities decrease, the cash balance also decreases 2. Whenever the assets of the company increase, the cash balance decreases; this means if the assets decrease, the cash balance increases. The above conclusion is the key concept while constructing a cash flow statement. Also, extending this further, you will realize that each activity of the company, be it an operating activity, financing activity, or investing activity, either produces cash (net increase in cash) or reduces cash (net decrease in cash) for the company. Hence the total cash flow for the company will be: Cash Flow of the company = Net cash flow from operating activities + Net cash flow from investing activities + Net cash flow from financing activities. I want you to notice that ARBL has generated Rs.278.7 Crs from operating activities. Note, a company which has a positive cash flow from operating activities is always a sign of financial well being.
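The total cash flow formula above is a plain three-way sum. A minimal Python sketch; the operating figure is the ARBL number quoted in the chapter, while the investing and financing figures are placeholders of mine, not taken from the annual report:

```python
def net_cash_flow(operating: float, investing: float, financing: float) -> float:
    """Cash flow of the company = CFO + CFI + CFF, per the chapter's formula."""
    return operating + investing + financing

# ARBL generated Rs.278.7 Crs from operating activities; the other two
# figures below are illustrative placeholders (outflows are negative).
example = net_cash_flow(operating=278.7, investing=-150.0, financing=-50.0)
# a positive result means the cash balance grew; a negative result means it shrank
```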
An increase in liabilities in the future would lead to an increase in the cash balance (remember, an increase in liabilities increases the cash balance). We know from the balance sheet that ARBL did not undertake any new debt. This means the company consumed cash overall: taking the net decrease in cash of (Rs.119.19) Crs along with a minor foreign exchange difference of Rs.2.58 Crs, and adding it to the cash carried forward from the previous year, we get the total cash position of the company, which is Rs.292.86 Crs. This means, while the company guzzled cash on a yearly basis, it still has adequate cash, thanks to the carry forward from the previous year. The P&L statement discusses how much the company earned as revenues versus how much the company expended in terms of expenses; the cash flow statement, in contrast, reflects the retained earnings of the company and also indicates the cash needs of the company. The statement of cash flows is prepared on a historical basis, providing information about the cash and cash equivalents and classifying cash flows into operating, financing and investing activities. The final number of the cash flow statement tells us how much money the company has in its bank account. We have so far looked into how to read the financial statements and what to expect out of each one. 1. The Cash flow statement gives us a picture of the true cash position of the company 2. A legitimate company has three main activities – operating activities, investing activities and financing activities 3. Each activity either generates or drains money for the company 4. The net cash flow for the company is the sum of the operating activities, investing activities and the financing activities 5. Investors should specifically look at the cash flow from operating activities of the company 6. When the liabilities increase, the cash level increases, and vice versa 7. When the assets increase, the cash level decreases, and vice versa 8. The net cash flow number for the year is also reflected in the balance sheet 9. The Statement of Cash flow is a useful addition to the financial statements of a company because it indicates the company’s performance.
9.2 – The Financial Ratios

Financial ratios can be 'somewhat loosely' classified into different categories, namely – Profitability ratios, Leverage ratios, Valuation ratios and Operating ratios.

For example, if a company's net profit is Rs.2500 and its shareholders' equity is Rs.8000, the Return on Equity works out to: RoE = 2500 / 8000 * 100 = 31.25%

1. What does an EBITDA of Rs.560 Crs and an EBITDA margin of 16.3% indicate?

Let us apply this ratio on Jain Irrigation Limited. Here is the snapshot of Jain Irrigation's P&L statement for FY 14; I have highlighted the Finance costs in red. We know EBITDA = [Revenue – Expenses] = Rs.769.98 – 204.54 = Rs.565.44.

Please note, the total debt here includes both the short term debt and the long term debt. Here is JSIL's Balance Sheet; I have highlighted total equity, long term, and short term debt. This means roughly about 45% of the assets held by JSIL are financed through debt capital or creditors (and therefore 55% are financed by the owners). Needless to say, the higher the percentage, the more concerned the investor would be, as it indicates higher leverage and risk.

The assets considered while calculating the fixed assets turnover should be net of accumulated depreciation, which is nothing but the net block of the company. It should also include the capital work in progress. Also, we take the average assets for reasons discussed in the previous chapter.

Operating revenue (FY 14) is Rs.3437 Crs. Hence the Total Asset Turnover is: = 3437 / 1954.95 = 1.75 times

Moving to the inventory turnover ratio – this works out to 8 for ARBL, which means Amara Raja Batteries Limited turns over its inventory 8 times in a year, or once in every 1.5 months. Needless to say, to get a true sense of how good or bad this number is, one should compare it with its competitors' numbers. The inventory number of days is usually calculated on a yearly basis; hence, in the formula above, 365 indicates the number of days in a year. This means whenever you see impressive inventory numbers, always ensure to double check the production details as well.

1. Leverage ratios include Interest Coverage, Debt to Equity, Debt to Assets and the Financial Leverage ratios
2.
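To make the formulas above concrete, here is a small sketch of a few of the ratios discussed, using the figures quoted in the text (the hypothetical RoE example of 2500/8000, the FY14 operating revenue of Rs.3437 Crs over average assets of Rs.1954.95 Crs, and the inventory turnover of 8).

```python
# Profitability and activity ratios from the section above.
def roe(net_profit, shareholders_equity):
    """Return on Equity, as a percentage."""
    return net_profit / shareholders_equity * 100

def total_asset_turnover(operating_revenue, average_total_assets):
    """Revenue generated per unit of assets deployed."""
    return operating_revenue / average_total_assets

def inventory_number_of_days(inventory_turnover):
    """365 days spread over the number of inventory turns per year."""
    return 365 / inventory_turnover

example_roe = roe(2500, 8000)                       # 31.25%
arbl_turns  = total_asset_turnover(3437, 1954.95)   # ~1.76x (the text rounds to 1.75)
arbl_days   = inventory_number_of_days(8)           # ~45.6 days, i.e. ~1.5 months
```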
The Leverage ratios mainly study the company's debt with respect to the company's ability to service the long term debt
3. Interest coverage ratio inspects the company's earnings ability (at the EBIT level) as a multiple of its finance costs
4. Debt to equity ratio measures the amount of equity capital with respect to the debt capital. A debt to equity of 1 implies equal amounts of debt and equity
5. Debt to Asset ratio helps us understand the asset financing structure of the company (especially with respect to the debt)
6. The Financial Leverage ratio helps us understand the extent to which the assets are financed by the owner's equity
7. The Operating Ratios, also referred to as the Activity ratios, include – Fixed Assets Turnover, Working Capital Turnover, Total Assets Turnover, Inventory Turnover, Inventory Number of Days, Receivable Turnover and Day Sales Outstanding ratios
8. The Fixed asset turnover ratio measures the extent of the revenue generated in comparison to its investment in fixed assets
9. Working capital turnover ratio indicates how much revenue the company generates for every unit of working capital
10. Total assets turnover indicates the company's ability to generate revenues with the given amount of assets
11. Inventory turnover ratio indicates how many times the company replenishes its inventory during the year
12. Inventory number of days represents the number of days the company takes to convert its inventory to cash
    1. A high inventory turnover and therefore a low inventory number of days is a great combination
    2. However, make sure this does not come at the cost of low production capacity
13. The Receivable turnover ratio indicates how many times in a given period the company receives money from its debtors and customers
14.
The Days Sales Outstanding (DSO) ratio indicates the average cash collection period, i.e. the time lag between billing and collection.

Module 3 — Fundamental Analysis Chapter 11

The valuation ratios help us develop a sense of how the stock price is valued by the market participants. These ratios help us understand the attractiveness of the stock price from an investment perspective. The point of valuation ratios is to compare the price of a stock vis-à-vis the benefits of owning it. Like all the other ratios we have looked at, the valuation ratios of a company should be evaluated alongside the company's competitors. Valuation ratios are usually computed as a ratio of the company's share price to an aspect of its financial performance. We will be looking at the following three important valuation ratios. We also need the total number of shares outstanding in ARBL to calculate the above ratios. If you recollect, we calculated the same in chapter 6. The total number of shares outstanding is 17,08,12,500 or 17.081 Crs.

Let us calculate the same for ARBL. We will take up the denominator first. This means for every share outstanding, ARBL does Rs.203.86 worth of sales. Dividing the share price of Rs.661 by this number gives the P/S ratio: 661 / 203.86 = 3.24. A P/S ratio of 3.24 times indicates that, for every Rs.1 of sales, the stock is valued 3.24 times higher. Obviously, the higher the P/S ratio, the higher the valuation of the firm. One has to compare the P/S ratio with its competitors in the industry to get a fair sense of how expensive or cheap the stock is.

Here is something that you need to remember while calculating the P/S ratio. Assume there are two companies (Company A and Company B) selling the same product. Both companies generate a revenue of Rs.1000/- each. However, Company A retains Rs.250 as PAT while Company B retains Rs.150 as PAT. In this case, Company A has a profit margin of 25% versus Company B's 15%. Hence the sales of Company A are more valuable than the sales of Company B.
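The Price-to-Sales calculation above can be sketched in a couple of lines. The figures are the ones quoted in the text: sales of Rs.203.86 per share and the share price of Rs.661 used later in this chapter.

```python
# Price-to-Sales: share price divided by sales per share.
def price_to_sales(share_price, sales_per_share):
    return share_price / sales_per_share

arbl_ps = price_to_sales(661, 203.86)  # ~3.24 times
```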
Hence if Company A is trading at a higher P/S, then the valuation may be justified, simply because for every rupee of sales Company A generates, a higher profit is retained. Hence whenever you feel a particular company is trading at a higher valuation from the P/S ratio perspective, do remember to check the profit margin for cues.

Before we understand the Price to Book Value ratio, we need to understand what 'Book Value' means. The book value is simply the amount of money a shareholder can expect in case the company decides to liquidate. The 'Book Value' (BV) per share can be calculated as [Share Capital + Reserves (excluding revaluation reserves)] / Total number of shares. For ARBL, Revaluation Reserves = 0. This means if ARBL were to liquidate all its assets and pay off its debt, Rs.79.8 per share is what the shareholders can expect.

Moving ahead, if we divide the current market price of the stock by the book value per share, we get the price to book value of the firm. The P/BV indicates how many times the stock is trading over and above the book value of the firm. Clearly, the higher the ratio, the more expensive the stock is.

P/BV = 661 / 79.8 = 8.3

This means ARBL is trading over 8.3 times its book value. A high ratio could indicate the firm is overvalued relative to the equity/book value of the company. A low ratio could indicate the company is undervalued relative to the equity/book value of the company.

Moving further, the EPS gives us a sense of the profits generated on a per share basis. Clearly, the higher the EPS, the better it is for its shareholders. If you divide the current market price by the EPS, we get the Price to Earnings ratio of a firm. The P/E ratio measures the willingness of the market participants to pay for the stock, for every rupee of profit that the company generates. For example, if the P/E of a certain firm is 15, then it simply means that for every unit of profit the company earns, the market participants are willing to pay 15 times. The higher the P/E, the more expensive the stock. Let us calculate the P/E for ARBL.
We know from its annual report – PAT = Rs.367 Crs. Hence, EPS = 367 / 17.081 = Rs.21.49 per share. Also remember, the stock price of a company increases when the expectations from the company increase, and the P/E Ratio is calculated with 'earnings' in its denominator. While looking at the P/E ratio, do remember the following key points:

1. P/E indicates how expensive or cheap the stock is trading at. Never buy stocks that are trading at high valuations. I personally do not like to buy stocks that are trading beyond 25 or at the most 30 times their earnings, irrespective of the company and the sector it belongs to
2. The denominator in the P/E ratio is the 'Earnings', and the earnings can be manipulated
3. Make sure the company is not changing its accounting policy too often – this is one of the ways a company tries to manipulate its earnings
4. Pay attention to the way depreciation is treated. Provision for lower depreciation can boost earnings
5. If the company's earnings are increasing but not its cash flows and sales, then clearly something is not right

Source – Creytheon

From the P/E chart above, we can make a few important observations –

1. The peak Index valuation was 28x (early 2008); what followed was a major crash in the Indian markets
2. The correction drove the valuation down to almost 11x (late 2008, early 2009). This was the lowest valuation the Indian market had witnessed in the recent past
3. Usually the Indian Indices' P/E ratio ranges between 16x to 20x, with an average of 18x
4. As of today (2014) we are trading around 22x, which is above the average P/E ratio

Based on these observations, the following conclusions can be made –

1. One has to be cautious while investing in stocks when the market's P/E valuation is above 22x
2. Historically the best time to invest in the markets is when the valuations are around 16x or below

One can easily find the Index P/E valuation on a daily basis.
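The three valuation ratios can be sketched together as follows. The PAT (Rs.367 Crs), share count (17.081 Crs) and book value (Rs.79.8) are from the text; reusing the Rs.661 share price from the P/BV calculation for the P/E is my assumption.

```python
def eps(pat_crs, shares_crs):
    """Earnings per share: PAT (Rs. Crs) over shares outstanding (Crs)."""
    return pat_crs / shares_crs

def price_to_earnings(share_price, earnings_per_share):
    return share_price / earnings_per_share

def price_to_book(share_price, book_value_per_share):
    return share_price / book_value_per_share

arbl_eps = eps(367, 17.081)                  # ~Rs.21.49 per share
arbl_pbv = price_to_book(661, 79.8)          # ~8.3 times
arbl_pe  = price_to_earnings(661, arbl_eps)  # ~30.8x, under the Rs.661 price assumption
```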
Clearly as of today (13th Nov 2014) the Indian market is trading close to the higher end of the P/E range; history suggests that we need to be cautious while taking investment decisions at this level.

To find the answers, we do not go to Google and search; instead we look for them in the company's latest Annual Report or on its website. This helps us understand what the company has to say about itself. Once we are comfortable knowing the business, we move to stage 2, i.e. the application of the checklist. At this stage we get some performance related answers. Without much ado, here is the 10 point checklist that I think is good enough for a start –

3. EPS – EPS should be consistent with the Net Profits. If a company is diluting its equity, then it is not good for its shareholders
4. Debt Level – The company should not be highly leveraged. High debt means the company is operating on high leverage; plus, the finance cost eats away the earnings
6. Sales vs Receivables – Sales backed by receivables is not a great sign. This signifies that the company is just pushing its products to show revenue growth
7. Cash flow from operations – Has to be positive. If the company is not generating cash from operations, then it indicates operating stress

As mentioned in the previous chapter, we will structure the equity research process in 3 stages. The first stage is about understanding the business, including finding out how the company is evolving across business cycles. Here are a bunch of questions that I think help us in our quest to understand the business. I have discussed the rationale behind each question.

1. What does the company do? – To get a basic understanding of the business
2. Who are its promoters? What are their backgrounds? – To know the people behind the business. A sanity check to
eliminate criminal background, intense political affiliation etc
3. What do they manufacture (in case it is a manufacturing company)? – To know their products better; helps us get a sense of the product's demand-supply dynamics
5. Are they running the plant at full capacity? – Gives us an idea of their operational abilities, the demand for their products, and their positioning for future demand
7. Who are the company's clients or end users? – By knowing the client base we can get a sense of the sales cycle and the effort required to sell the company's products
14. Who are their bankers, auditors? – Good to know, and to rule out the possibility of the company's association with scandalous agencies
15. How many employees do they have? Does the company have labor issues? – Gives us a sense of how labor intensive the company's operations are. Also, if the company requires a lot of people with a niche skill set, then this could be another red flag
18. Does the company have too many subsidiaries? – If yes, you need to question why. Is it a way for the company to siphon off funds?

These questions are thought starters for understanding any company. In the process of answering them, I prepare an information sheet capturing everything I have discovered about the company. This information sheet has to be crisp and to the point. If I'm unable to achieve this, then it is a clear indication that I do not know enough about the company.

My intention here is only to illustrate a framework of what I perceive as a 'fairly adequate' equity research process. The objective of the 2nd stage of equity research is to help us comprehend the numbers and actually evaluate if both the nature of the business and the financial performance of the business complement each other. If they do not complement each other, then clearly the company will not qualify as investible grade. We looked at the checklist in the previous chapter; I'll reproduce the same here for quick reference.
1. Net Profit Growth – should be in line with the gross profit growth. Revenue growth should be in line with the profit growth
6. Sales vs Receivables – Sales backed by receivables is not a great sign. This signifies that the company is just pushing its products to show revenue growth

The first sign of a company that may qualify as investable grade is the rate at which it is growing. To evaluate the growth of the company, we need to check the revenue and PAT growth. We will evaluate growth from two perspectives –

1. Year on year growth – this gives us a sense of the progress the company makes on a yearly basis. Do note, industries do go through cyclical shifts. From that perspective, if a company has flat growth, it is ok; however, just make sure you check the competition as well to ensure the growth is flat industry wide.
2. Growth in terms of CAGR (Compounded Annual Growth Rate) – this gives us a sense of the rate at which the company has grown over several years, smoothing out the year to year fluctuations.

Personally, I prefer to invest in companies that are growing (Revenue and PAT) over and above 15% on a CAGR basis.

The Gross Profit Margin is calculated as [Revenue – Cost of Goods Sold] / Revenue, where the cost of goods sold is the cost involved in making the finished good; we discussed this calculation while understanding the inventory turnover ratio. Let us proceed to check how ARBL's Gross Profit Margins have evolved over the years. Clearly, the Gross Profit Margins (GPM) look very impressive. The checklist mandates a minimum GPM of 20%; ARBL is comfortably above this requirement. This implies a couple of things –

1. ARBL enjoys a premium spot in the market structure. This may be because of the absence of competition in the sector, which enables a few companies to enjoy higher margins
2. Good operational efficiency, which in turn is a reflection of the management's capabilities

Debt level – Balance Sheet check

The first three points in the checklist were mainly related to the Profit & Loss statement of the company. We will now look through a few Balance Sheet items. One of the most important line items we need to look at on the Balance Sheet is the Debt.
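The two growth views above can be sketched as small helper functions. The input values below are hypothetical, purely for illustration.

```python
# Year-on-year growth and CAGR, the two growth perspectives discussed above.
def yoy_growth(previous, current):
    """Simple one-year growth, as a percentage."""
    return (current / previous - 1) * 100

def cagr(begin_value, end_value, years):
    """Compounded annual growth rate over `years`, as a percentage."""
    return ((end_value / begin_value) ** (1 / years) - 1) * 100

g_yoy  = yoy_growth(100, 118)   # 18.0% year on year
g_cagr = cagr(100, 200, 5)      # ~14.87% p.a. if revenue doubles in 5 years
```

A company clearing the 15% CAGR bar mentioned above would show `cagr(...) >= 15` on both revenue and PAT.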
An increasingly high level of debt indicates a high degree of financial leverage. Growth at the cost of financial leverage is quite dangerous. Also do remember, a large debt on the balance sheet means a large finance cost charge, which eats into the retained earnings of the firm. The debt seems to have stabilized around Rs.85 Crs. In fact, it is encouraging to see that the debt has come down in comparison to FY 09-10. Besides checking the interest coverage ratio (which we have discussed previously), I also like to check the debt as a percent –

1. Rising inventory along with rising PAT are signs of a growing company
2. A stable inventory number of days indicates the management's operational efficiency to some extent

Let us see how ARBL fares on the inventory data –

Inventory Days: 68, 72, 60, 47, 47

The inventory number of days is more or less stable; in fact, it shows signs of a slight decline. Do note, we have discussed the calculation of the inventory number of days in the previous chapter. Both the inventory and PAT are showing similar growth signs, which is again a good sign.

Sales vs Receivables

We now look at the sales number in conjunction with the receivables. Like the inventory number of days, the receivables as a % of net sales have also shown signs of a decline, which is quite impressive.

Cash flow from operations – this, though a bit volatile, has remained positive throughout the last 5 years. This only means ARBL's core business operations are generating cash and can therefore be considered successful.

Return on Equity – here is how ARBL's RoE has fared for the last 5 years. These numbers are very impressive. I personally like to invest in companies that have a consistently high RoE.
If you are able to develop a comfortable opinion (based on facts) after these 2 stages, then the business surely appears to have investable grade attributes and is therefore worth investing in. However, before you go out and buy the stock, you need to ensure the price is right. This is exactly what we do in stage 3 of equity research.

DCF Primer

The objective of the next two chapters is to help you understand "the price". The price of a stock can be estimated by a valuation technique. Valuation per se helps you determine the 'intrinsic value' of the company. We use a valuation technique called the "Discounted Cash Flow (DCF)" method to calculate the intrinsic value of the company. The intrinsic value as per the DCF method is the evaluation of the 'perceived stock price' of a company, keeping all the future cash flows in perspective. The DCF model is made up of several concepts which are interwoven with one another. Naturally, we need to understand each of these concepts individually and then place them in the context of the DCF. In this chapter we will understand the core concept of DCF called "The Net Present Value (NPV)", and then we will proceed to understand the other concepts involved in DCF before understanding the DCF as a whole.

14.2 – The future cash flow

The concept of future cash flow is the crux of the DCF model. We will understand this with the help of a simple example. Assume Vishal is a pizza vendor who serves the best pizzas in town. His passion for baking pizzas leads him to an innovation. He invents an automatic pizza maker which bakes pizzas automatically. All he has to do is pour the ingredients required for making a pizza into the slots provided, and within 5 minutes a fresh pizza pops out. He figures that with this machine, he can earn an annual revenue of Rs.500,000/-, and the machine has a life span of 10 years. His friend George is very impressed with Vishal's pizza machine. So much so that George offers to buy this machine from Vishal.
Now here is a question for you – What do you think is the minimum price that George should pay Vishal to buy this machine? Well, obviously to answer this question we need to see how economically useful this machine is going to be for George. Assuming he buys this machine today (2014), over the next 10 years, the machine will earn him Rs.500,000/- each year. 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 500,000 500,000 500,000 500,000 500,000 500,000 500,000 500,000 500,000 500,000 Do note, for the sake of convenience, I have assumed the machine will start generating cash starting from 2015. Clearly, George is going to earn Rs.50,00,000/- (10 x 500,000) over the next 10 years, after which the machine is worthless. One thing is clear at this stage, whatever is the cost of this machine, it cannot cost more than Rs.50,00,000/-. Think about it – Does it make sense to pay an entity a price which is more than the economic benefit it offers? To go ahead with our calculation, assume Vishal asks George to pay “Rs.X” towards the machine. At this stage, assume George has two options – either pay Rs.X and buy the machine or invest the same Rs.X in a fixed deposit scheme which not only guarantees his capital but also pays him an interest of 8.5%. Let us assume that George decides to buy the machine instead of the fixed deposit alternative. This implies, George has foregone an opportunity to earn 8.5% risk free interest. This is the ‘opportunity cost’ for having decided to buy the machine. So far, in our quest to price the automatic pizza maker we have deduced three crucial bits of information – 1. The total cash flow from the pizza maker over the next 10 years – Rs.50,00,000/- 2. Since the total cash flow is known, it also implies that the cost of the machine should be less than the total cash flow from the machine 3. 
The opportunity cost for buying the pizza machine is an investment option that earns 8.5% interest. Keeping the above three points in perspective, let us move ahead. We will now focus on the cash flows. We know that George will earn Rs.500,000/- every year from the machine for the next 10 years. So think about this – George, in 2014, is looking at the future –

1. How much is the Rs.500,000/- that he receives in 2016 worth in today's terms?
2. How much is the Rs.500,000/- that he receives in 2018 worth in today's terms?
3. How much is the Rs.500,000/- that he receives in 2020 worth in today's terms?
4. To generalize, how much is the cash flow of the future worth in today's terms?

The answer to these questions lies in the realm of the "Time Value of Money". In simpler words, if I can calculate the value of all the future cash flows from that machine in terms of today's value, then I would be in a better position to price that machine. Please note – in the next section we will digress from the pizza problem, but we will eventually get back to it.

14.3 – Time Value of Money (TVM)

Time value of money plays an extremely crucial role in finance. The TVM finds its application in almost all financial concepts. Be it discounted cash flow analysis, financial derivatives pricing, project finance, or the calculation of annuities, the time value of money is applicable. Think of the 'Time Value of Money' as the engine of a car, with the car itself being the "Financial World". The concept of time value of money revolves around the fact that the value of money does not remain the same across time. Meaning, the value of Rs.100 today is not really Rs.100, 2 years from now. Inversely, the value of Rs.100, 2 years from now, is not really Rs.100 as of today. Whenever there is a passage of time, there is an element of opportunity. Money has to be accounted (adjusted) for that opportunity.
If we have to evaluate what the money we have today would be worth sometime in the future, then we need to move the 'money today' through the future. This is called the "Future Value (FV)" of the money. Likewise, if we have to evaluate the value of money that we are expected to receive in the future in today's terms, then we have to move the future money back to today's terms. This is called the "Present Value (PV)" of money. In both cases, as there is a passage of time, the money has to be adjusted for the opportunity cost. This adjustment is called "Compounding" when we have to calculate the future value of money, and "Discounting" when we have to calculate the present value of money. Without getting into the mathematics involved (which by the way is really simple), here are the formulas required to calculate the FV and PV –

Future Value = Amount * (1 + Opportunity cost rate) ^ Number of years
Present Value = Amount / (1 + Opportunity cost rate) ^ Number of years

Example 1 – How much is Rs.5000/- in today's terms (2014) worth five years later, assuming an opportunity cost of 8.5%? This is a case of Future Value (FV) computation, as we are trying to evaluate the future value of the money that we have today –

= 5000 * (1 + 8.5%) ^ 5
= 7518.3

This means Rs.5000 today is comparable with Rs.7518.3 after 5 years, assuming an opportunity cost of 8.5%.

Example 2 – How much is Rs.10,000/- receivable 6 years from now worth in today's terms, assuming an opportunity cost of 8.5%? This is a Present Value (PV) computation –

= 10,000 / (1 + 8.5%) ^ 6
= 6129.5

Example 3 – If I reframe the question in the first example – how much is Rs.7518.3 receivable in 5 years worth in today's terms, given an opportunity cost of 8.5%? We know this requires us to calculate the present value. Also, since we have done the reverse of this in example 1, we know the answer should be Rs.5000/-. Let us calculate the present value to check this –

= 7518.3 / (1 + 8.5%) ^ 5
= 5000.0

Assuming you are clear with the concept of time value of money, I guess we are now equipped to go back to the pizza problem.
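The three worked examples above can be reproduced with two one-line functions, using the same 8.5% opportunity cost.

```python
def future_value(amount, rate, years):
    """Compounding: move today's money into the future."""
    return amount * (1 + rate) ** years

def present_value(amount, rate, years):
    """Discounting: bring future money back to today's terms."""
    return amount / (1 + rate) ** years

fv   = future_value(5000, 0.085, 5)    # Example 1: ~7518.3
back = present_value(fv, 0.085, 5)     # Example 3: back to ~5000.0
```

Note that `present_value` is simply the inverse of `future_value`, which is why Example 3 recovers the original Rs.5000.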
2015 2016 2017 2018 2019 2020 2021 2022 2023 2024
500,000 500,000 500,000 500,000 500,000 500,000 500,000 500,000 500,000 500,000

We posted this question earlier; let me repost it – how much is the cash flow of the future worth in today's terms? As we can see, the cash flow is uniformly spread across time. We need to calculate the present value of each cash flow (receivable in the future) by discounting it with the opportunity cost. Here is a table that calculates the PV of each cash flow, keeping a discount rate of 8.5% – (the table lists, for each year, the cash flow in INR, the number of years after which it is receivable, and its present value in INR).

The sum of all the present values of the future cash flows is called "The Net Present Value (NPV)". The NPV in this case is Rs.32,80,842. This also means the value of all the future cash flows from the pizza machine, in today's terms, is Rs.32,80,842. So if George has to buy the pizza machine from Vishal, he has to ensure the price is Rs.32,80,842 or less, but definitely not more than that; this is roughly how much the pizza machine should cost George.

Now, think about this – what if we replace the pizza machine with a company? Can we discount all the future cash flows that the company earns with the intention of evaluating the company's stock price? Yes, we can; in fact, this is exactly what we will do in the "Discounted Cash Flow" model.

1. A valuation model such as the DCF model helps us estimate the price of a stock
2. The DCF model is made up of several interwoven financial concepts
3. The 'Time Value of Money' is one of the most crucial concepts in finance, as it finds its application in several financial concepts including the DCF method
4. The value of money cannot be treated the same across the time scale – which means the value of money in today's terms is not really the same at some point in the future
5. To compare money across time we have to 'time travel the money' after accounting for the opportunity cost
6.
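The pizza-machine NPV above can be reproduced by discounting each of the ten Rs.500,000 cash flows at 8.5% and summing. The result lands within a few hundred rupees of the text's Rs.32,80,842, the small gap coming from the text rounding each year's PV in its table.

```python
def npv_uniform(cash_flow, rate, years):
    """Sum of discounted cash flows received at the end of years 1..n."""
    return sum(cash_flow / (1 + rate) ** t for t in range(1, years + 1))

machine_npv = npv_uniform(500_000, 0.085, 10)  # ~Rs.32.8 lakh
```

As the text argues, this NPV is well below the raw Rs.50,00,000 total, which is exactly the effect of the opportunity cost.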
Future Value of money is the estimation of the value of the money we have today at some point in the future
7. Present Value of money is the estimation of the value of money receivable in the future in terms of today's value
8. The Net Present Value (NPV) of money is the sum of all the present values of the future cash flows

Module 3 — Fundamental Analysis Chapter 15

In the previous chapter, in order to evaluate the price of the pizza machine, we looked at the future cash flows from the pizza machine and discounted them back to get the present value. We added all the present values of the future cash flows to get the NPV. Towards the end of the previous chapter, we also toyed around with an idea – what will happen if the pizza machine is replaced by a company's stock? Well, in that case we just need an estimate of the future cash flows from the company, and we will be in a position to price the company's stock. But what cash flow are we talking about? And how do we forecast the future cash flow for a company?

15.1 – The Free Cash Flow (FCF)

The cash flow that we need to consider for the DCF analysis is called the "Free Cash Flow (FCF)" of the company. The free cash flow is basically the excess operating cash that the company generates after accounting for capital expenditures such as buying land, buildings and equipment. This is the cash that shareholders enjoy after accounting for the capital expenditures. The mark of a healthy business eventually depends on how much free cash it can generate. Thus, the free cash is the amount of cash the company is left with after it has paid all its expenses, including investments. When a company has free cash flows, it indicates the company is healthy. Hence, investors often look out for companies whose share prices are undervalued but which have high or rising free cash flow, as they believe that over time the disparity will disappear as the share price increases.
Thus the free cash flow helps us know if the company has genuinely generated earnings in a year or not. Hence, as an investor, to assess the company's true financial health, look at the free cash flow besides the earnings. The FCF for any company can be calculated easily by looking at the cash flow statement. The formula is –

Free Cash Flow = Cash from Operating Activities (after income tax) – Capital Expenditure

Let us calculate the FCF for the last 3 financial years for ARBL. The cash from operating activities (after income tax) for the three years was Rs.296.28 Crs, Rs.335.46 Crs and Rs.278.7 Crs respectively. Here is the snapshot of ARBL's FY14 annual report from where you can calculate the free cash flow. Please note, the net cash from operating activities is computed after adjusting for income tax. The net cash from operating activities is highlighted in green, and the capital expenditure is highlighted in red.

You may now have a fair question in your mind – when the idea is to calculate the future free cash flow, why are we calculating the historical free cash flow? Well, the reason is simple: while working on the DCF model, we need to predict the future free cash flow. The best way to predict the future free cash flow is to estimate the historical average free cash flow and then sequentially grow the free cash flow by a certain rate. This is a standard practice in the industry.

Now, by how much do we grow the free cash flow is the next big question. Well, the growth rate you assume should be as conservative as possible. I personally like to estimate the FCF for at least 10 years. I do this by growing the cash flow at a certain rate for the first 5 years, and then I factor in a lower rate for the next five years. If you are getting a little confused here, I would encourage you to go through the following step by step calculation for better clarity.
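The FCF formula can be sketched as below. The operating cash figures are the ones quoted in the text; the capex figures are hypothetical placeholders, chosen so that the latest year's FCF matches the negative Rs.51.6 Crs mentioned in the next section and the 3-year average lands on the Rs.140.36 Crs base the text goes on to use.

```python
# FCF = cash from operating activities (after tax) minus capital expenditure.
def free_cash_flow(operating_cash, capex):
    return operating_cash - capex

operating = [296.28, 335.46, 278.70]   # Rs. Crs, from the text
capex     = [70.00, 89.06, 330.30]     # Rs. Crs, hypothetical placeholders
fcf       = [free_cash_flow(o, c) for o, c in zip(operating, capex)]
avg_fcf   = sum(fcf) / len(fcf)        # ~Rs.140.36 Crs, the base for projection
```

Averaging over three years is exactly the smoothing step the text recommends for cyclical businesses.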
As the first step, I estimate the average free cash flow for the last 3 years for ARBL –

= Rs.140.36 Crs

The reason for taking the average cash flow for the last 3 years is to ensure we are averaging out extreme cash flows and also accounting for the cyclical nature of the business. For example, in the case of ARBL, the latest year's cash flow is negative at Rs.51.6 Crs. Clearly, this is not a true representation of ARBL's cash flow; hence, for this reason, it is always advisable to take the average free cash flow figure. Select a rate which you think is reasonable. This is the rate at which the average cash flow will grow going forward. I usually prefer to grow the FCF in 2 stages. The first stage deals with the first 5 years and the 2nd stage deals with the next 5 years. Specifically with reference to ARBL, I prefer to use 18% for the first 5 years and around 10% for the next five years. If the company under consideration is a mature company that has grown to a certain size (as in a large cap company), I would prefer to use growth rates of 15% and 10% respectively. The idea here is to be as conservative as possible.

We know the average cash flow for 2013-14 is Rs.140.36 Crs. At 18% growth, the cash flow for the year 2014-15 is estimated to be –

= 140.36 * (1 + 18%)

The free cash flow for the year 2015-16 is estimated to be –

= 165.62 * (1 + 18%)

With this, we now have a fair estimate of the future free cash flow. How reliable are these numbers, you may ask. After all, predicting the free cash flow implies we are predicting the sales, expenses, business cycles, and literally every aspect of the business. Well, the estimate of the future cash flow is just that – an estimate. The trick here is to be as conservative as possible while assuming the free cash flow growth rate. We have assumed 18% and 10% growth rates for the future; these are fairly conservative growth rate numbers for a well managed and growing company.
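The two-stage projection described above can be sketched as a small loop. Note the second-year figure comes out at Rs.195.44 Crs here, while the next section quotes Rs.195.29 Crs; the small gap is an intermediate-rounding difference in the text.

```python
# Two-stage FCF projection: 18% growth for years 1-5, then 10% for years 6-10.
def project_fcf(base, stage1_rate, stage2_rate, stage1_years=5, stage2_years=5):
    flows, fcf = [], base
    for _ in range(stage1_years):
        fcf *= 1 + stage1_rate
        flows.append(round(fcf, 2))
    for _ in range(stage2_years):
        fcf *= 1 + stage2_rate
        flows.append(round(fcf, 2))
    return flows

flows = project_fcf(140.36, 0.18, 0.10)
# flows[:2] -> [165.62, 195.44], matching the hand calculation above
```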
The rate at which the free cash flow grows beyond 10 years (2024 onwards) is called the "Terminal Growth Rate". Usually the terminal growth rate is considered to be less than 5%. I personally like to set this rate between 3-4%, and never beyond that.

The "Terminal Value" is the sum of all the future free cash flows beyond the 10th year, also called the terminal year. To calculate the terminal value, we just have to take the cash flow of the 10th year and grow it at the terminal growth rate. However, the formula to do this is different, as we are calculating the value literally to infinity. Do note, the FCF used in the terminal value calculation is that of the 10th year.

Let us calculate the terminal value for ARBL considering a discount rate of 9% and a terminal growth rate of 3.5%:

= Rs.9731.25 Crs

For example, in 2015-16 (2 years from now) ARBL is expected to receive Rs.195.29 Crs. At a 9% discount rate, the present value would be –

= 195.29 / (1+9%)^2

= Rs.164.37 Crs

So here is how the present value of the future cash flows stacks up –

Sl No | Year | Growth rate | Future Cash flow (INR Crs) | Present Value (INR Crs)

Net Present Value (NPV) of future free cash flows = Rs.1968.14 Crs

Along with this, we also need to calculate the net present value of the terminal value. To calculate this, we simply discount the terminal value by the discount rate –

= 9731.25 / (1+9%)^10

= Rs.4110.69 Crs

Therefore, the sum of the present values of the cash flows = NPV of future free cash flows + PV of terminal value = 1968.14 + 4110.69 = Rs.6078.83 Crs

This means standing today and looking into the future, I expect ARBL to generate a total free cash flow of Rs.6078.83 Crs, all of which would belong to the shareholders of ARBL.

15.4 – The Share Price

We are now at the very last step of the DCF analysis. We will now calculate the share price of ARBL based on the future free cash flow of the firm. We now know the total free cash flow that ARBL is likely to generate.
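The discounting arithmetic described above can be checked with a short sketch. This is my own illustration; the 9% discount rate, 3.5% terminal growth rate, Rs.195.29 Crs year-2 cash flow, and Rs.9731.25 Crs terminal value are the chapter's figures, and the perpetuity-growth formula TV = FCF10 x (1 + g) / (r - g) is the standard formula the text alludes to when it says the calculation runs "to infinity".

```python
r = 0.09   # discount rate
g = 0.035  # terminal growth rate

def present_value(cash_flow, years):
    """Discount a future cash flow back to today at rate r."""
    return cash_flow / (1 + r) ** years

# PV of the year-2 cash flow of Rs.195.29 Crs, as computed in the text
pv_year2 = round(present_value(195.29, 2), 2)
print(pv_year2)  # 164.37

# PV of the terminal value of Rs.9731.25 Crs, received at the end of year 10
pv_terminal = present_value(9731.25, 10)
print(round(pv_terminal, 2))  # roughly 4110; the text rounds this to Rs.4110.69 Crs
```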
We also know the number of shares outstanding in the market. Dividing the total free cash flow by the total number of shares would give us the per-share price of ARBL.

However, before doing that, we need to calculate the value of 'Net Debt' from the company's balance sheet. Net debt is the current year total debt minus the current year cash & cash balance.

Net Debt = Current Year Total Debt – Cash & Cash Balance

= (Rs.218.6 Crs)

A negative sign indicates that the company has more cash than debt. This naturally has to be added to the total present value of the free cash flows.

= Rs.6297.43 Crs

Dividing the above number by the total number of shares gives us the share price of the company, also called the intrinsic value of the company.

Share Price = Total Present Value of Free Cash flow / Total Number of shares

We know from ARBL's annual report that the total number of outstanding shares is 17.081 Crs. Hence the intrinsic value or the per-share value is –

= 6297.43 / 17.081 = Rs.368 per share

A leeway for the modeling error simply allows us to be flexible with the calculation of the per-share value. I personally prefer to add +10% as an upper band and -10% as the lower band for what I perceive as the intrinsic value of the stock. Hence, instead of assuming Rs.368 as the fair value of the stock, I would now assume that the stock is fairly valued between Rs.331 and Rs.405. This would be the intrinsic value band.

Now, keeping this value in perspective, we check the market value of the stock. Based on its current market price, we conclude the following –

1. If the stock price is below the lower intrinsic value band, then we consider the stock to be undervalued; hence one should look at buying the stock

2. If the stock price is within the intrinsic value band, then the stock is considered fairly valued. While no fresh buy is advisable, one can continue to hold on to the stock, if not add more to the existing positions

3.
If the stock price is above the higher intrinsic value band, the stock is considered overvalued. The investor can either book profits at these levels or continue to stay put, but should certainly not buy at these levels.

Keeping these guidelines in mind, we could check the stock price of Amara Raja Batteries Limited as of today (2nd Dec 2014). Here is a snapshot from the NSE's website –

The stock is trading at Rs.726.70 per share! Way higher than the upper limit of the intrinsic value band. Clearly, buying the stock at these levels implies one is buying at extremely high valuations. In fact, this is the reason why they say – bear markets create value. The whole of last year (2013) the markets were bearish, creating valuable buying opportunities in quality stocks.

15.7 – Conclusion

Over the last 3 chapters, we have looked at different aspects of equity research. As you may have realized, equity research is simply the process of inspecting the company from three different perspectives (stages). Assuming the company clears both stage 1 and stage 2 of equity research, I proceed to equity research stage 3. In stage 3, we evaluate the intrinsic value of the stock and compare it with the market value. If the stock is trading cheaper than the intrinsic value, then the stock is considered a good buy; else it is not.

When all the 3 stages align to your satisfaction, then you certainly would have the conviction to own the stock. Once you buy, stay put, ignore the daily volatility (that is in fact the virtue of capital markets) and let the markets take their own course.

Please note, I have included a DCF model on ARBL, which I have built on excel. You could download this and use it as a calculator for other companies as well.

1. The free cash flow (FCF) for the company is calculated by deducting the capital expenditures from the net cash from operating activities

2. The free cash flow tracks the money left over for the investors

3.
The latest year FCF is used to forecast the future years' cash flows

4. The growth rate at which the FCF is grown has to be conservative

5. Terminal growth rate is the rate at which the company's cash flow is supposed to grow beyond the terminal year

6. The terminal value is the value of the cash flows the company generates from the terminal year up to infinity

7. The future cash flows, including the terminal value, have to be discounted back to today's value

8. The sum of all the discounted cash flows (including the terminal value) is the total net present value of cash flows

9. From the total net present value of cash flows, the net debt has to be adjusted. Dividing this by the total number of shares gives us the per-share value of the company

10. One needs to accommodate for modeling errors by including a 10% band around the share price

11. By including a 10% leeway, we create an intrinsic value band

12. A stock trading below the range is considered a good buy, while a stock price above the intrinsic value band is considered expensive

13. Wealth is created by long-term ownership of undervalued stocks

14. Thus, the DCF analysis helps investors identify whether the current share price of the company is justified or not.

Module 3 – Fundamental Analysis, Chapter 16 – The Finale

1. DCF requires us to forecast – To begin with, the DCF model requires us to predict the future cash flows and the business cycles. This is a challenge not only for a fundamental analyst but also for the top management of the company

2. Highly sensitive to the terminal growth rate – The DCF model is highly sensitive to the terminal growth rate. A small change in the terminal growth rate leads to a large difference in the final output, i.e. the per-share value. For instance, in the ARBL case, we have assumed 3.5% as the terminal growth rate. At 3.5%, the share price is Rs.368/-, but if we change this to 4.0% (an increase of 50 basis points), the share price would change to Rs.394/-

3.
Constant updates – Once the model is built, the analyst needs to constantly modify and align the model with the new data (quarterly and yearly) that comes in. Both the inputs and the assumptions of the DCF model need to be updated on a regular basis.

4. Long-term focus – DCF is heavily focused on long-term investing, and thus it does not offer anything to investors who have a short-term focus (i.e. a 1-year investment horizon). Also, the DCF model may make you miss out on unusual opportunities, as the model is based on certain rigid parameters.

Having stated the above, the only way to overcome the drawbacks of the DCF model is by being as conservative as possible while making the assumptions. Some guidelines for conservative assumptions are –

1. FCF (Free Cash Flow) growth rate – The rate at which you grow the FCF year on year should be capped at around 20%. Companies can barely sustain growing their free cash flow beyond 20%. If a company is young and belongs to a high-growth sector, then probably a rate a little under 20% is justified, but no company deserves an FCF growth rate of over 20%

2. Number of years – This is a bit tricky. While the longer the duration, the better it is, at the same time, the longer the duration, the more room there is for errors. I generally prefer to use a 10-year, 2-stage DCF approach

3. 2-stage DCF valuation – It is always a good practice to split the DCF analysis into 2 stages, as demonstrated in the ARBL example in the previous chapter. As discussed, in stage 1 I would grow the FCF at a certain rate, and in stage 2 I would grow the FCF at a rate lower than the one used in stage 1

4. Terminal growth rate – As I mentioned earlier, the DCF model is highly sensitive to the terminal growth rate. A simple thumb rule here – keep it as low as possible. I personally prefer to keep it around 4% and never beyond it.

Here is how I exercise the 'Margin of Safety' principle in my own investment practice.
Consider the case of Amara Raja Batteries Limited: the intrinsic value estimate was around Rs.368/- per share. Further, we applied a 10% modeling error to create the intrinsic value band. The lower intrinsic value estimate was Rs.331/-. At Rs.331/- we are factoring in modeling errors. The margin of safety advocates discounting the intrinsic value even further. I usually like to discount the intrinsic value by at least another 30%.

But why should we discount it further? Aren't we being extra conservative, you may ask? Well, yes, but this is the only way you can insulate yourself from bad assumptions and bad luck. Think about it: given all the fundamentals, if a stock looks attractive at Rs.100, then at Rs.70 you can be certain it is indeed a good bet! This is in fact what savvy value investors always practice.

Also, remember, good stocks will be available at great discounts mostly in a bear market, when people are extremely pessimistic about stocks. So make sure you have sufficient cash during bear markets to go shopping!

1. Be reasonable – Markets are volatile; it is the nature of the beast. However, if you have the patience to stay put, markets can reward you fairly well. When I say "reward you fairly well", I have a CAGR of about 15-18% in mind. I personally think this is a fairly decent and realistic expectation. Please don't be swayed by abnormal returns like 50-100% in the short term; even if it is achievable, it may not be sustainable

2. Long-term approach – I have discussed in chapter 2 why investors need to have a long-term approach. Remember, money compounds faster the longer you stay invested

3. Look for investible grade attributes – Look for stocks that display investible grade attributes and stay invested in them as long as these attributes last. Book profits when you think the company no longer has these attributes

4. Respect Qualitative Research – Character is more important than numbers.
Always look at investing in companies whose promoters exhibit good character

5. Cut the noise, apply the checklist – No matter how much an analyst on TV or in the newspaper brags about a certain company, don't fall prey to it. You have a checklist; just apply it to see if the company makes any sense

6. Respect the margin of safety – This literally works like a safety net against bad luck

7. IPOs – Avoid buying into IPOs. IPOs are usually overpriced. However, if you are compelled to buy into an IPO, then analyze it with the same 3-stage equity research methodology

8. Continued learning – Understanding markets requires a lifetime of effort. Always look at learning new things and expanding your knowledge base.

I would like to leave you with 4 book recommendations that I think will help you develop a great investment mindset.
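Pulling the chapter's valuation rules together, here is a compact sketch. It is my own illustration; the figures in it (Rs.6078.83 Crs of discounted cash flow, negative net debt of Rs.218.6 Crs, 17.081 Crs shares outstanding, the 10% band, and the 30% margin of safety) are all taken from the text.

```python
total_pv = 6078.83   # NPV of future FCFs + PV of terminal value, Rs. Crs
net_debt = -218.6    # negative net debt: the company holds more cash than debt
shares = 17.081      # shares outstanding, in Crs

per_share = (total_pv - net_debt) / shares        # intrinsic value estimate
lower, upper = per_share * 0.9, per_share * 1.1   # +/- 10% modeling-error band

def view(price):
    """Classify a market price against the intrinsic value band."""
    if price < lower:
        return "undervalued"
    if price <= upper:
        return "fairly valued"
    return "overvalued"

# A further 30% margin of safety on the lower band gives a stricter entry price
entry_price = round(lower * 0.70, 2)

print(round(per_share))  # 369 (the text rounds this to Rs.368)
print(view(726.70))      # overvalued: ARBL's market price in Dec 2014
```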
https://www.scribd.com/document/368645807/Module-3-Fundamental-Analysis
Lately, I've been writing scripts that look more like modules. Some of this comes naturally from writing so many modules, but I've also discovered that these scripts are easier to manage and test. The result is a Perl file that acts like a module when I use it like a module, and acts like a script when I use it like a script.

When I first started writing Perl, I mostly wrote the usual type of script: I started at the top of the page and wrote statements that I expected to execute in order as I moved down the page. I could read it like a movie script, going from one line to the next. Later, as I learned more Perl and got more programming experience (even with other languages such as C, Java, and Smalltalk), I started using more functions. The code, however, was still very procedural and I could follow it linearly down the page, even if I had to look at a function definition every so often.

In the past couple of years, I've gotten on the testing bandwagon. I want to test everything. The Test::More module made that very easy, and there has been an explosion in the number of Test::* modules to check various things (and I'm responsible for a couple of them). Scripts are hard to test, though. I can choose the input at the start, such as the environment variables, command-line arguments, and standard input. After that I have to wait for the script's output to see if things went right. If things didn't go right, I have to figure out where, between the first line and the last line, things went wrong.

A module is relatively easy to test. A good module author breaks down everything into methods (or plain old functions) that do one task or small bits of a task. As long as the methods don't use side effects (like looking at global variables, including nonlocal versions of Perl's special variables), I can give each method some arguments and check its return values. Once I test each of the methods, I can confidently use them knowing that they do what I expect.
When things go wrong, I have a lot less to search through to find the problem.

The Core Structure

To create this sort of script, which I have been calling a "modulino," I take Perl back a step. Remember the main() function of a C program, and how Perl made the whole script file main() and called it main::? I need that back, but I'm going to call it the run() method:

    #!/usr/bin/perl
    package Local::Modulino;

    __PACKAGE__->run( @ARGV ) unless caller();

    sub run { print "I'm a script!\n" }

    __END__

My modulino no longer assumes that it is in the main:: namespace and that the whole file is the script. I need to put everything that I want to do in the run() method, just like I would do with C's main(). As a start, my modulino just prints a short message.

The script-or-module magic works in the third line, which checks the result of the caller() Perl built-in function. If something else in the Perl script calls the file, the caller() function (in scalar context) returns the calling package name. That's true if another Perl file loads this one with use() or require(). If I run my modulino as a script, there is no other file loading it, so caller() returns undef. If caller() returns a false value, then I execute the __PACKAGE__->run() method (the Perl compiler replaces the __PACKAGE__ token with the current package name).

That's it. That's the core of the dual-duty Perl modulino. Everything else is just programming.

I save this file as Modulino.pm and execute it in a couple of different ways. From the command line, I can call it like a script. The caller() expression returns false, so Perl executes run(), which prints out my short message:

    prompt% perl Modulino.pm
    I'm a script!
    prompt%

When I load the file as a module using the -M switch (which works like use()), the caller() expression returns a true value (extra credit for knowing what the actual value is), so unless() doesn't evaluate the rest of the statement.
The run() never runs, and I don't get any output:

    prompt% perl -MModulino -e 1
    prompt%

I can still get output, though – I just need to call the run() method myself:

    prompt% perl -MModulino -e 'Local::Modulino->run()'
    I'm a script!
    prompt%

The Rest of the Story

Now that I have the basic structure of the modulino, I need to apply it to something useful, but that's probably too long for the space I have left in this article. Well, maybe not. For a while, I've wanted a little tool to download the RSS feed from The Perl Journal and print a table of contents. Unlike the PDF files I get for each issue, I always know where the RSS file is: it's the same URL every time.

I wrote a modulino to download, parse, and display the table of contents of The Perl Journal. I could have written this as a script and gone through each of those steps in sequence, but with a modulino I have a bit more flexibility; and when I decide to test it, I should be able to find problems more easily and quickly.

In Local::Modulino::RSS (see Listing 1), I display the table of contents as text, which works just fine for me in my terminal window. However, since I structured the code as a module, I could very easily do something else. Perhaps I want to convert the table of contents into HTML so I can display it on my personal home page. Since only my run() knows anything about the data presentation, I just have to override it, which I show later.

The rest of the modulino is a collection of very short functions doing a very specific task. I can easily write some testing code to make sure each of the small functions does what I think it should. I skip that part here since the topic has been covered so well in other articles.

On line 1, I start with a shebang line. If I want to run this as a Perl script without specifying "perl" on the command line, the operating system needs to know which interpreter I intend to use. Next, I define the package name and invoke the run() method if I call the file as a script.
If I use this file as a module, caller() returns true and I don't call the run() method. On line 8, I define my run() method. On line 11, I take the first argument, which is the package name, off of the argument list. Each method does this so that I can subclass the task. I call each function as a class method so inheritance works out right. The methods will always know who is calling them, even if it is a derived package.

Most of the complexity of the task is hidden behind functions. The fetch_items() method is composed of the get_url(), get_data(), and get_items() methods that do most of the actual work. My run() method simply gets the parts it needs. This way, when I want to write another run() method, I won't have to do so much work.

On line 13, I go through each item and extract the information for that issue. The get_issue() function returns the title of the item, which turns out to be the month and year of publication, along with the articles in that issue as a list of [article title, author] anonymous array pairs. It's the data, so up to this point I can still do just about anything I like, but once I have the text for the title and the articles for the latest issue, I simply print them to the terminal as plain text, as I show in Listing 2.

Some of you might have noticed the start of a model-view-controller (MVC) design (although I don't have much controller going on). The data handling and the presentation don't depend on each other. The MVC design, which may sound fancy or exotic, naturally pops up when I use a lot of small functions to do single tasks. The only part of my script that deals with the presentation of the data is the run() method, and that's easy to override with a subclass. In fact, it's so easy to subclass that I might as well do it here.

In Listing 3, I create the Local::Modulino::RSS::HTML modulino, although it only overrides the run() method by defining its own version.
I have to tell it that it is a subclass of Local::Modulino::RSS with the use base declaration so it looks in that class for methods it does not define, such as fetch_items() and get_issue(). I also require "RSS.pm" because I didn't bother to install these files as proper modules, so I don't want my modulino to look in Local/Modulino/RSS.pm to find the file. I show the new output format in Listing 4.

By creating a modulino, I get my Perl scripts to do double duty as scripts and as modules. If I structure the code as a module, I can reuse and override it just like a module. Since I broke everything down to small functions instead of using a procedural style, I also make things easier to test.

TPJ

Listing 1

      1 #!/usr/bin/perl
      2 package Local::Modulino::RSS;
      3
      4 __PACKAGE__->run() unless caller();
      5
      6 use HTML::Entities;
      7 use Data::Dumper;
      8
      9 sub run
     10 {
     11     my $class = shift;
     12
     13     foreach my $item ( $class->fetch_items )
     14     {
     15         my( $title, @articles ) = $class->get_issue( $item );
     16
     17         print "\n$title\n------------------\n";
     18         printf "%-45s %-30s\n", @$_ foreach ( @articles );
     19     }
     20
     21 }
     22
     23 sub fetch_items
     24 {
     25     my $class = shift;
     26
     27     my $url   = $class->get_url();
     28     my $data  = $class->get_data( $url );
     29     my @items = $class->get_items( $$data );
     30 }
     31
     32 sub get_issue
     33 {
     34     my $class = shift;
     35
     36     my $title    = $class->get_title( $_[0] );
     37     my @articles = $class->get_articles( $_[0] );
     38
     39     return ( $title, @articles );
     40 }
     41
     42 sub get_articles
     43 {
     44     my $class = shift;
     45
     46     my $d = $class->get_description( $_[0] );
     47
     48     my @b = split /<br>\s*<br>/, $d;
     49     my @articles = ();
     50
     51     foreach my $b ( @b )
     52     {
     53         my @bits = split /<br>/, $b;
     54         $author = pop @bits;
     55
     56         my $title = join " ", @bits;
     57
     58         $class->_normalize( $author, $title );
     59         push @articles, [ $title, $author ];
     60     }
     61
     62     @articles;
     63 }
     64
     65 sub get_description { $_[0]->_field( $_[1], 'description' ) }
     66 sub get_title       { $_[0]->_field( $_[1], 'title' ) }
     67 sub get_items       { $_[0]->_field( $_[1], 'item' ) }
     68
     69 sub _normalize
     70 {
     71     my $class = shift;
     72
     73     foreach ( 0 .. $#_ )
     74     {
     75         $_[$_] =~ s/^\s*|\s*$//g;
     76         $_[$_] =~ s|</?b>||g;
     77         $_[$_] =~ s|\s+| |g;
     78     }
     79 }
     80
     81 sub _field
     82 {
     83     my $data = $_[1];
     84
     85     HTML::Entities::decode_entities( $data );
     86
     87     my @matches = $data =~ m|<\Q$_[2]\E>(.*?)</\Q$_[2]\E>|sig;
     88
     89     wantarray ? @matches : $matches[0];
     90 }
     91
     92 sub get_data
     93 {
     94     my $class = shift;
     95
     96     require LWP::Simple;
     97     my $data = LWP::Simple::get( $_[0] );
     98     defined $data ? \$data : $data;
     99 }
    100
    101 sub get_url {
    102     "" .
    103     "feeds/public/the_perl_journal.xml"
    104 }

Listing 2

    September 2004 PDF
    ------------------
    Objective Perl: Objective-C-Style Syntax And Runtime for Perl Kyle Dawkins
    Scoping: Letting Perl Do the Work for You     David Oswald
    Secure Your Code With Taint Checking          Andy Lester
    Detaching Attachments                         brian d foy
    Unicode in Perl                               Simon Cozens
    PLUS Letter from the Editor Perl News         Source Code Appendix

    August 2004 PDF
    ------------------
    Regex Arcana                                  Jeff Pinyan
    XML Subversion                                Curtis Lee Fulton
    OSCON 2004 Round-Up                           Andy Lester
    Molecular Biology in Perl                     Simon Cozens
    Pipelines and E-mail Addresses                brian d foy
    PLUS Letter from the Editor Perl News         Source Code Appendix

Listing 3

    #!/usr/bin/perl
    package Local::Modulino::RSS::HTML;
    use base qw( Local::Modulino::RSS );
    require "RSS.pm";

    __PACKAGE__->run() unless caller();

    use HTML::Entities;

    sub run
    {
        my $class = shift;

        foreach my $item ( $class->fetch_items )
        {
            my( $title, @articles ) = $class->get_issue( $item );

            print "\n<h3>$title</h3>\n\n<ul>\n";
            printf "<li><b>%s</b>, %s\n", @$_ foreach ( @articles );
            print "</ul>\n";
        }
    }

Listing 4

    <h3>September 2004 PDF</h3>

    <ul>
    <li><b>Objective Perl: Objective-C-Style Syntax And Runtime for Perl</b>, Kyle Dawkins
    <li><b>Scoping: Letting Perl Do the Work for You</b>, David Oswald
    <li><b>Secure Your Code With Taint Checking</b>, Andy Lester
    <li><b>Detaching Attachments</b>, brian d foy
    <li><b>Unicode in Perl</b>, Simon Cozens
    <li><b>PLUS Letter from the Editor Perl News</b>, Source Code Appendix
    </ul>

    <h3>August 2004 PDF</h3>

    <ul>
    <li><b>Regex Arcana</b>, Jeff Pinyan
    <li><b>XML Subversion</b>, Curtis Lee Fulton
    <li><b>OSCON 2004 Round-Up</b>, Andy Lester
    <li><b>Molecular Biology in Perl</b>, Simon Cozens
    <li><b>Pipelines and E-mail Addresses</b>, brian d foy
    <li><b>PLUS Letter from the Editor Perl News</b>, Source Code Appendix
    </ul>
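As a closing aside (my addition, not part of the original article), the modulino pattern has a direct analogue in other languages; in Python, for instance, the __name__ check plays the role of Perl's caller() test:

```python
# modulino.py -- behaves as a module when imported, as a script when executed.

def run(args=None):
    """Equivalent of the Perl run() method: the script's real entry point."""
    print("I'm a script!")
    return 0

# Like "__PACKAGE__->run( @ARGV ) unless caller();" -- this only fires when
# the file is executed directly, not when it is imported by another module.
if __name__ == "__main__":
    import sys
    raise SystemExit(run(sys.argv[1:]))
```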
http://www.drdobbs.com/web_development/184416165
Adds support for Jade as a client-side template language to DocPad.

Convention: `.js.cjade`

    npm install --save docpad-plugin-client-jade

Add the Jade runtime to your pages before any client Jade templates. Add your templates to the pages as scripts, i.e. having the document `documents/templates/something.js.cjade`, add it like:

```
```

Use it from your script:

```js
var html = JST['something']({contextvar: 'someval'});
```

The `'something'` key is obtained from the file path by stripping the `.cjade` extension, then the `.js` extension (see below how to configure), and then removing some common directory from the beginning of the path (see below how to configure).

## Configure

Read on the DocPad configuration file here: <>

**Note** that in your config the key must be `"client-jade"` (which is the part of the plugin name after 'docpad-plugin-').

Currently the plugin supports the following options:

* `namespace` – string, the namespace the template functions are attached to, defaults to `JST`
* `prettify` – bool, whether the output should be prettified, defaults to `true` in development and `false` in production
* `baseDir` – string, the base directory (relative to the `documents` root) to be stripped from the path when the key is generated, defaults to `templates`
* `stripJsExt` – bool, whether to strip the `.js` extension when the key is generated

## License

Licensed under [MIT License]()

Copyright © 2012 Eugene Mirotin
https://www.npmjs.com/package/docpad-plugin-client-jade
Hi,

The problem with the link failure for arm is as follows: in the chips driver there seems to be code which is for arm32 and for NetBSD. Somehow the person who wrote this just added a define for arm32, but did not check for __NetBSD__. Furthermore, the actual include which provides the function (sysarch) that was missing was commented out. Maybe someone tried something and commented it out, or whatever, who knows?

The reason why I assume it's __NetBSD__-ish is that in other places this <machine/sysarch.h> include was within a __NetBSD__ define, so it's most likely that the same applies here.

Anyway, with the great help of Michel Daenzer I created a patch and it seems to fix it so far; it linked without any errors. What it does is simply add a defined(__NetBSD__) to the #if which includes the include statement for sysarch.h, and otherwise just an empty function. Patch attached. You can change the name/number of the patch to your liking, I am not really creative when it comes to giving them sound names ;).

Furthermore, the arm .debs for X 4.2.1 are in ~othmar/public_html/xfree86_4.2.1-0pre1v2 on gluck if you want to use them for your X task force. Or will be there shortly.

... ok, as it turns out another problem arose: the MANIFEST changed, nv was added and libafb.a was removed. I read the README, but it probably won't actually apply to me, because it's probably just that a new driver was added and libafb.a was removed (whatever that is). So I assume the debhelper commands aren't affected by that, no? I have attached the diff of the MANIFEST.arm which just showed up while building the debian packages. Hope it's ok.

Btw. is it ok that there were quite many debconf-mergetemplate messages, dropping things? Or whole templates dropped, etc?
so long
Othmar

--- xc/programs/Xserver/hw/xfree86/drivers/chips/ct_bank.c.orig	2002-10-03 19:06:39.000000000 +0000
+++ xc/programs/Xserver/hw/xfree86/drivers/chips/ct_bank.c	2002-10-03 19:06:51.000000000 +0000
@@ -53,12 +53,15 @@
 /* Driver specific headers */
 #include "ct_driver.h"
-#ifdef __arm32__
-/*#include <machine/sysarch.h>*/
+#if defined(__arm32__) && defined(__NetBSD__)
+#include <machine/sysarch.h>
 #define arm32_drain_writebuf() sysarch(1, 0)
-#define ChipsBank(pScreen) CHIPSPTR(xf86Screens[pScreen->myNum])->Bank
+#elif defined(__arm32__)
+#define arm32_drain_writebuf()
 #endif
 
+#define ChipsBank(pScreen) CHIPSPTR(xf86Screens[pScreen->myNum])->Bank
+
 #ifdef DIRECT_REGISTER_ACCESS
 
 int
 CHIPSSetRead(ScreenPtr pScreen, int bank)

--- debian/MANIFEST.arm	2002-09-27 09:38:11.000000000 +0000
+++ debian/MANIFEST.arm.new	2002-10-04 07:32:35.000000000 +0000
@@ -5641,0 +5642 @@
+usr/X11R6/lib/modules/drivers/nv_drv.o
@@ -5677 +5677,0 @@
-usr/X11R6/lib/modules/libafb.a
@@ -7370,0 +7371 @@
+usr/X11R6/man/man4/nv.4x
https://lists.debian.org/debian-arm/2002/10/msg00010.html
From: Chris Cheng (hycheng@3pardata.com)
Date: Wed Jan 19 2000 - 12:55:29 PST

dc,

i have the fortunate opportunity to sit on both sides of the table. i have to agree it is very hard to start the process. the semiconductor company i worked for arguably put the word paranoid in the english dictionary :-)

in my personal experience, i have enough credibility and persistence to push the issue through management and lawyers. and encryption and total customer support helps. when given a choice of spice vs. ibis, most of the high performance design customers will prefer spice models. it is true that not everyone is dell or ibm that has the power or weight to force their request, but that's the reason why they are big and powerful. i give the same analogy to the nuclear power knowledge of a superpower. size does matter. if you are a small shop, u just have to live with whatever people hand to u. but that doesn't mean the practice of only giving out ibis models is correct.

chris

-----Original Message-----
From: owner-si-list@silab.eng.sun.com [mailto:owner-si-list@silab.eng.sun.com] On Behalf Of D. C. Sessions
Sent: Wednesday, January 19, 2000 11:44 AM
To: si-list@silab.eng.sun.com
Subject: Re: [SI-LIST] : receiver jitter

Scott McMorrow wrote:
> Chris cheng.

Chris, I hate to break it to you but I'm on the semiconductor side, not the PWB side. I can state quite confidently that there IS a problem with providing *my* customers with SPICE models, because I've been through the mill with *my* management trying to do just that, and failed. Semi companies are (perhaps justifiably) paranoid about letting process data out of the fab; even I have to sign away my grandchildren to get each update of the process characterization tables.

Keep in mind that not everyone doing high-speed design is Dell. They're big enough that having Legal spend a few days working on the NDA and getting a VP to bless it can be justified. Multiply that by a dozen or so major component suppliers and it adds up fast.
If you're a small shop you're just plain SOL.

Oh, and OF COURSE Intel uses SPICE, as do I. We're the ones doing the transistor-level work, after all --- those behavioral models have to come from somewhere.

> > design.

> > Unfortunately, when SPICE models are encrypted it is next to impossible to resolve fundamental simulator issues, such as convergence, without support from the vendor. If a vendor's models were to exist in a vacuum then this might not be a bad thing. However, many of my SPICE issues stem from interoperability of multiple vendor-supplied models within the same simulation.

> Perhaps SPICE would be more useful if there were some standards to provide better encapsulation of necessary information about the model, such as the extraction assumptions and subcircuit interface documentation.

One of my favorite problems is namespace collision. Every semi vendor in the world calls their (encrypted) transistor model "N" or "NMOS" or some such, and names their (encrypted) I/O cell "IO" or something equally original. Then you try to do a system-level simulation with four different processes each using the same name for different transistors and six ICs each calling its transceiver "IO", and watch the fun. IBIS eliminates the process model problem and at least lets you rename the flipping [Model] sections.

--
http://www.qsl.net/wb6tpu/si-list4/0267.html
With the release of Scala 2.11, Scala became a fully JSR-223-compliant scripting language for Java. JSR-223 is the community request that allows a scripting language to have an interface to Java, and allows Java to use the scripting language inside of applications. In Java 8 the Nashorn scripting engine was released as a native component of the JDK to support JavaScript within applications. This was made possible by another JSR, introduced in Java 7 – namely JSR-292, which brings support for invokedynamic, a way to support dynamic programming in Java bytecode. Nashorn can be seen as a proof of concept of this newly added functionality. Running Scala Script in Java Application Let’s make a simple example to execute Scala as a script inside of a Java application. We need to import and initialize the script engine like this: import javax.script.*; import javax.script.ScriptEngine; import javax.script.ScriptEngineManager; public class TestScript { public static void main(String... args) throws Exception { ScriptEngine engine = new ScriptEngineManager().getEngineByName("scala"); ... With ScriptEngineManager().getEngineByName(“scala”) a matching script engine is looked up. This just gives us the engine, but not the required libraries and hooks to really execute a script. There is one thing to note about the way Scala loads its standard classes into the JVM. When the scala command is run directly, the standard Scala libraries are placed on the classpath of the JVM. That is not the case here. You can read this for a reference. Trying to run a sample class would result in the following exception: reflect.jar:. TestScript [init] error: error while loading Object, Missing dependency 'object scala in compiler mirror', required by /Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/rt.jar(java/lang/Object.class) Failed to initialize compiler: object scala in compiler mirror not found. ** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings ** object programmatically, settings.usejavacp.value = true. Exception in thread "main" scala.reflect.internal.MissingRequirementError: object scala in compiler mirror not found. We can work around this by using the Scala compiler's Java-facing classes (IMain and BooleanSetting from the scala-compiler jar), which help us load the standard Scala libraries onto the JVM classpath. This is one way to make our Scala script work: import scala.tools.nsc.interpreter.IMain; import scala.tools.nsc.settings.MutableSettings.BooleanSetting; .... ((BooleanSetting)(((IMain)engine).settings() .usejavacp())).value_$eq(true); .... The other is to simply execute the code with the following option: $ java -Dscala.usejavacp=true ... The complete script is this: import javax.script.ScriptEngine; import javax.script.ScriptEngineManager; import scala.tools.nsc.interpreter.IMain; import scala.tools.nsc.settings.MutableSettings.BooleanSetting; public class TestScript { public static void main(String... args) throws Exception { ScriptEngine engine = new ScriptEngineManager().getEngineByName("scala"); ((BooleanSetting)(((IMain)engine) .settings().usejavacp())) .value_$eq(true); String testScript = "var a:Int = 10"; engine.eval(testScript); String testScript2 = "println(a)"; engine.eval(testScript2); String testScript3 = "println(a+5)"; engine.eval(testScript3); } } Compiling and running this is straightforward, as long as the Scala libraries are placed on the classpath. Compiling: $ javac TestScript.java Running: $ java -Dscala.usejavacp=true TestScript 10 15 One thought on “Scripting Scala – JSR-223” The Scala 2.12.1 jar seems to no longer export the ScriptEngine. Trying to use it gives null.
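A quick way to verify that the Scala engine is actually visible to your JVM, independent of the code above, is to enumerate the registered JSR-223 factories. This minimal check uses only the JDK; the class name Main is arbitrary:

```java
import javax.script.ScriptEngineFactory;
import javax.script.ScriptEngineManager;

public class Main {
    public static void main(String[] args) {
        System.out.println("available script engines:");
        // Each factory found on the classpath reports its engine name and aliases
        for (ScriptEngineFactory f : new ScriptEngineManager().getEngineFactories()) {
            System.out.println(f.getEngineName() + " -> " + f.getNames());
        }
    }
}
```

If "scala" does not appear among the printed aliases, the scala-compiler and scala-library jars are missing from the classpath, and getEngineByName("scala") will return null.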
https://henning.kropponline.de/2016/04/17/scripting-scala-jsr-223/
Angular JS Interview Questions and Answers

Ques 1. What is AngularJS?
Ques 2. Why is this project called "AngularJS"? Why is the namespace called "ng"?
Ques 3. Tell me does Angular use the jQuery library?
Ques 4. Tell me can we use the open-source Closure Library with Angular?
Ques 5. Why to choose Angular JS Javascript Framework for front-end web development?
Ques 6. What are the key features of AngularJS?
Ques 7. What is a scope in AngularJS?
Ques 8. How will you initialize a select box with options on page load?
Ques 9. How will you add options to a select box?
http://www.withoutbook.com/InterviewQuestionList.php?tech=63&dl=1&subject=Angular%20JS%20Interview%20Questions%20and%20Answers
Setup The connection is based on serial communication, and there is one big advantage: you don’t need to install the Arduino IDE on the host computer when running it. In other words, you don’t even need a GUI to enable it. However, there are quite a few items to install before you can import the ros library into your Arduino code. Type the following commands in the terminal (replace indigo with your current version of ROS): sudo apt-get install ros-indigo-rosserial-arduino sudo apt-get install ros-indigo-rosserial The preceding installation steps created the necessary libraries; now the following will create the ros_lib folder that the Arduino build environment needs to enable Arduino programs to interact with ROS. ros_lib has to be added to the Arduino libraries directory. By default, it will typically be located in ~/sketchbook/libraries or in My Documents. If there is a ros_lib in that location already, delete it and run the following commands. cd ~/sketchbook/libraries rm -rf ros_lib rosrun rosserial_arduino make_libraries.py . (DO NOT FORGET THE PERIOD AT THE END) Example Code - Publisher Now you have everything set up to start writing your publisher and subscriber in the Arduino IDE. After doing the above steps, you should be able to see ros_lib under File -> Examples. I’ll step you through an example program and help you run it as well at the end, as it might be tricky at first. Let’s talk about a basic publisher subscriber program. If you are confused or want to refresh your memory on ROS Publishers and Subscribers, check out Ros Pub-Sub. Once you understand that, you can head back and continue from here. Here is an example from the official repository and also an example of my own which uses a Boolean data type rather than a String.
Basic Hello world program: #include <ros.h> #include <std_msgs/String.h> ros::NodeHandle nh; //Node Handler std_msgs::String str_msg; //initialise variable ros::Publisher chatter("chatter", &str_msg); char hello[] = "Hi, This is my first Arduino ROS program!"; // let the compiler size the array to fit the string void setup() { nh.initNode(); nh.advertise(chatter); } void loop() { str_msg.data = hello; chatter.publish( &str_msg ); nh.spinOnce(); delay(1000); } Explanation: The above program can be used to send some messages to ROS through serial communication. But if you are working on a real project, sending characters or a String has very minimal use, as you may know. So, let’s also go through the same example, but with some modifications to use a different data type, like a Boolean. Boolean, in my opinion, is the most used data type to flag different aspects of a program and to make decisions based on that to enable or disable various items. #include <ros.h> #include <std_msgs/Bool.h> ros::NodeHandle nh; //Node Handler std_msgs::Bool bool_flag; //initialise variable ros::Publisher chatter("chatter", &bool_flag); bool flag = true; void setup() { nh.initNode(); nh.advertise(chatter); } void loop() { bool_flag.data = flag; chatter.publish( &bool_flag ); nh.spinOnce(); delay(1000); } The only difference between the code above and the one before is the change in datatypes and the initialized values. You could use this to process information and send true or false information based on the condition. An example would be to monitor the temperature in a room with a sensor and when it’s greater than a threshold, send out a warning message or turn on the air conditioner. This can be achieved directly by connecting the sensors to the Arduino itself, but in more complex applications it is often desirable for ROS to handle all communications between devices. In that case, this can be achieved by subscribing to a topic which is published on ROS by something else. So, let’s see how to write a subscriber in Arduino.
Example Code - Subscriber #include <ros.h> #include <std_msgs/Empty.h> ros::NodeHandle nh; void messageCb(const std_msgs::Empty& toggle_msg){ digitalWrite(13, HIGH-digitalRead(13)); // blink the led } ros::Subscriber<std_msgs::Empty> sub("toggle_led", &messageCb ); void setup() { pinMode(13, OUTPUT); nh.initNode(); nh.subscribe(sub); } void loop() { nh.spinOnce(); delay(1); } Explanation for the above code: It looks pretty similar to a generic non-Arduino ROS subscriber. The messages published on the toggle_led topic carry no data payload, but each incoming message still triggers the callback, which toggles the LED on pin 13 of the Arduino. This is different from a usual subscriber, wherein the subscriber subscribes for some data from the topic, say a boolean or integer, but it is std_msgs::Empty in this case. This is probably not known by many of us, and if that’s the case, you learned a new thing today! You can subscribe to an Empty data type and have a callback based on that. As an addition, let’s also have a look at an example which actually subscribes to a topic with some data, unlike the previous case. Specifically, let’s look at subscribing to a topic which has boolean data. The following example is to toggle a magnet on and off based on the bool value subscribed under the ROS topic magnet_state: #include <ros.h> #include <std_msgs/Bool.h> ros::NodeHandle nh; int flag; bool magnet_state; void call_back( const std_msgs::Bool& msg){ magnet_state = msg.data; flag = 1; } ros::Subscriber<std_msgs::Bool> sub("magnet_state", call_back ); void setup() { nh.initNode(); nh.subscribe(sub); } void loop() { if ((magnet_state == false) && flag) { //Turn magnet on; } else if ((magnet_state == true) && flag) { //Turn magnet off; } nh.spinOnce(); } Running the code Now, since we are done with all of the basic ways and examples to write an Arduino Pub Sub code, let’s see how to run it.
First and foremost, Upload the code to Arduino and make sure all the relevant connections are established. Now, launch the roscore in a new terminal window by typing: roscore Next, run the rosserial client application that forwards your Arduino messages to the rest of ROS. Make sure to use the correct serial port: rosrun rosserial_python serial_node.py /dev/ttyUSB0 In the above line, /dev/ttyUSB0 is the default port. It might be /dev/ttyACM0 depending on your local machine. The number at the end may vary depending on the COM port the arduino is connected to. Once the publisher and the subscriber are turned on, and the above code is executed, check with the rostopic list command to see if it is working. You should see the names of the topic that is being published and the topic that is being subscribed to. Note: Once the Arduino has the code, it can be used in any other machine with ROS. The arduino IDE and the ros_lib library are not necessary to run rosserial. (No GUI as such is needed.) Run the following after connecting the required items. sudo apt-get install ros-indigo-rosserial-arduino sudo apt-get install ros-indigo-rosserial rosrun rosserial_python serial_node.py /dev/ttyUSB0 Note in many cases user permission can be an issue. If the connection is denied when trying to establish the rosserial connection, type the following command. sudo chmod +x /dev/ttyUSB0 for example should open the port. Alternatively, try sudo chmod a+rw /dev/ttyUSB0 if the above command doesn’t work.
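As an alternative to re-running chmod every time the board is re-plugged, you can give your user permanent access to serial devices. The group name dialout and the rules path below are typical Debian/Ubuntu defaults; verify them on your distribution:

```
# one-time: add yourself to the serial-device group, then log out and back in
sudo usermod -a -G dialout $USER

# or create /etc/udev/rules.d/99-usb-serial.rules containing:
KERNEL=="ttyUSB[0-9]*", MODE="0666"
KERNEL=="ttyACM[0-9]*", MODE="0666"
```

After adding a udev rule, reload it with `sudo udevadm control --reload-rules` and re-plug the Arduino.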
https://roboticsknowledgebase.com/wiki/common-platforms/ros/ros-arduino-interface/
CodePlexProject Hosting for Open Source Software I have a breadcrumb template that we're using throughout the site, and basically, if I'm on /controlpanel/training, I need to be able to get the model title for /controlpanel to use in the breadcrumb. How would I go about doing this? If the title is the one on the Route part, something like Model.ContentItem.RoutePart.Title should do the trick. I'm not quite sure I follow... mmh., not sure I follow what you don't follow: you asked a question, and I gave you code that seems to answer that question. So unless I've misunderstood the question... Model.ContentItem.RoutePart.Title gets me the page title for the current page, no problem, but I need the title from a different page. In the example above, the breadcrumb would be "Home > (/controlpanel title) > (/controlpanel/training title). I see. You need a reference to these other content items. Once you have that, you can put it inot a dynamic variable and apply the same path as above. "if it has a Routable", but the correct way to do it is to call contentManager.GetItemMetadata(contentItem).DisplayText, which will get the Routable Title if there is one, or another valid title for this content item. Ha! thanks Sébastien :) And how would I define contentItem as a different page? I'm sorry, I don't quite understand the question. You mean you don't have the reference to that other content item? What relates it to the current item? The slug. /contentupdates and /contentupdates/page are two completely different pages, but in a "hierarchy", so the breadcrumb on /contentupdates/page needs to know the proper page title from /contentupdates. Sorry if I'm making this difficult; I still don't have a very good grasp of MVC or Orchard's models... used to building everything from scratch. Oh, ok, I see. Well, you'd have to split the slug along the slashes then, and query the content manager for items with the parent slugs. 
I gathered that much, and I'm already splitting out the slug just fine. It's the querying part that I don't know. Something like contentManager.Query<RoutePart, RoutePartRecord>(VersionOptions.Published).Where(r => r.Path == matchedPath) What namespace does contentManager need? With a @using Orchard.ContentManagement You can get a reference to the content manager by doing WorkContext.Resolve<IContentManager>() Hmm... neither VS or Orchard are seeing the contentManager object as available... The name 'contentManager' does not exist in the current context. And it doesn't complain bout the using statement? You did assign what the Resolve method returned, right? Ah, gotcha. Sorry, I'm doing most of my editing over FTP, and generally don't have Intellisense immediately at my disposal. Now it's complaining about the RoutePart and RoutePartRecord, and not giving any suggestions on the namespace... Let me search that for you ;) You should get the source code... Orchard.Core.Routable.Models My searching on the subject of Orchard hasn't been particularly helpful... I can't seem to find any real documentation on it. Apologies if I sound like a noob, but this is a relatively large project being crammed into three weeks, while I'm busy fixing everything else that happens to break. This is the only outstanding issue left, and between my inability to find documentation and my lack of experience with MVC, something just isn't clicking. So I have the following: var contentManager = WorkContext.Resolve<IContentManager>(); IContentQuery rec = contentManager.Query<RoutePart, RoutePartRecord>(VersionOptions.Published).Where(r => r.Path == "/" + category); On the second line? I don't know, it compiles fine here. Note that you can (and probably should) use a var on the second line as well. Ok, so there was a stupid moment on my part there... scope issues. Anyway, now that i have var rec, and it's compiling, what the hell *is* rec at this point, and how do I get a title out of it? 
Rec at this point is still a query object. You need to get its first result (if there is one, but there should be), and that will be the route part object. Holy crap that took too long to figure out... thanks a ton!
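For readers landing on this thread later, the finished lookup sketched above would look roughly like this. It is assembled from the snippets in the thread against the Orchard 1.x API; the variable names and the null handling are illustrative, not code from the project:

```csharp
using System.Linq;
using Orchard.ContentManagement;
using Orchard.Core.Routable.Models;

var contentManager = WorkContext.Resolve<IContentManager>();
var part = contentManager
    .Query<RoutePart, RoutePartRecord>(VersionOptions.Published)
    .Where(r => r.Path == "/" + category)
    .List()                 // materialize the query
    .FirstOrDefault();      // first matching route, or null

// Prefer GetItemMetadata over RoutePart.Title, per Sébastien's advice above
var title = part == null
    ? null
    : contentManager.GetItemMetadata(part.ContentItem).DisplayText;
```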
http://orchard.codeplex.com/discussions/275215
FreeBSD Bugzilla – Bug 35817 tcl84 port fails to place a link to tclPlatDecls.h into /usr/local/include/tcl8.4 Last modified: 2002-03-18 09:50:00 UTC The tcl84 port places the tcl include file hierarchy under /usr/local/include/tcl84. This directory contains a symbolic link to generic/tcl.h. The user expected to find tcl.h by supplying -I/usr/local/include/tcl8.4 to cc and using #include <tcl.h> or -I/usr/local/include and #include <tcl8.4/tcl.h>. tcl.h itself contains the line #include "tclPlatDecls.h" This line expects the file to find in the same directory as tcl.h was found: /usr/local/include/tcl8.4. This means, that there should be a symbolic link: /usr/local/include/tcl8.4/tclPlatDecls.h -> /usr/local/include/tcl8.4/generic/tclPlatDecls.h Fix: Make a symbolic link for the file in question by adding the following line around line 627 of unix/Makefile.in @ln -sf $(GENERIC_INCLUDE_INSTALL_DIR)/tclPlatDecls.h $(INCLUDE_INSTALL_DIR)/tclPlatDecls.h How-To-Repeat: try to compile the following file: /* BOF */ #include <tcl.h> /* EOF */ with cc -I/usr/local/include/tcl8.4 or /* BOF */ #include <tcl8.4/tcl.h> /* EOF */ with cc -I/usr/local/include Observer that in both cases tclPlatDecls.h cannot be found. Responsible Changed From-To: freebsd-ports->dinoex Over to MAINTAINER State Changed From-To: open->closed Committed Thanks. On Mon, 18 Mar 2002, Kirk wrote: K>Hello. This fix didn't work for me. I added the line: K> K>@ln -sf $(GENERIC_INCLUDE_INSTALL_DIR)/tclPlatDecls.h $(INCLUDE_INSTALL_DIR)/tclPlatDecls.h K> K>first before the existing line 627, and then in a second try, after K>the existing line 627, in the file K> K>/usr/ports/lang/tcl84/work/tcl8.4a3/unix/Makefile.in K> K>in the ports of Freebsd release 4.5. I then did make deinstall and K>make install. I still have the exact problem reported. I'm not K>complaining, Freebsd is after all written for free, and bugs are few K>and far between. Also, installing TCL8.3 fixed the problem and works K>fine. 
I know the problem report says it's for 5.0-CURRENT, and perhaps K>that's why it didn't work on 4.5 release, but I wouldn't think it K>would make a difference. You must also re-configure the port after you have changed Makefile.in. tcl8.3 works , because its tcl.h did not include tclPlatDecls.h. NB: tcl8.4 is bad anyway, because the tcl maintainers have managed to change the API between two patch levels of 8.4 - they have changed function delcarations like TclSplitList that were stable for ages. Really annoying. harti -- harti brandt, brandt@fokus.fhg.de
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=35817
Hello, I have an Atmel Start project on SAM V71. I put bit rates in spi_m_sync_set_baudrate and was not getting the frequencies I was expecting. When I tried it on the V71, spi_m_sync_set_baudrate sets the clock divisor field SCBR in SPI_CSR0 (Chip Select Register 0). The SCBR field is only 8 bits, and the function does not check whether the argument is out of bounds. On the V71, the initialization code makes the peripheral clock run at 150 MHz. I tested multiple values on spi_m_sync_set_baudrate() with a logic analyzer, and I get a frequency of 150 MHz / divisor. I used this code: #include <atmel_start.h> int main(void) { /* Initializes MCU, drivers and middleware */ atmel_start_init(); static uint8_t example_SPI_0[2] = {0x40, 0x05}; struct io_descriptor *io; spi_m_sync_get_io_descriptor(&SPI_0, &io); spi_m_sync_set_baudrate(&SPI_0, 150); /* Found it sets the divisor, not the bit rate. 150 MHz / 150 = 1 MHz */ spi_m_sync_enable(&SPI_0); /* Replace with your application code */ while (1) { gpio_set_pin_level(ARDUINO_D9, false); delay_us(10); io_write(io, example_SPI_0, 2); gpio_set_pin_level(ARDUINO_D9, true); delay_us(10); } } inside atmel_start_pins.h #define ARDUINO_D9 GPIO(GPIO_PORTC, 9)
https://www.avrfreaks.net/forum/spimsyncsetbaudrate-sets-spi-clock-divisor-sam-v71
Day 9: Using the Elixir Registry to perform locking of resources In yesterday’s blog post, I shared how I got a worker pool working with a Queue and GenStage. However, there was still one issue with that implementation. When there was more than one worker (which is the case in my setup), it had the potential to run the same user's sync in multiple workers. Let us look at an example of how this happened. - A sync event for user #1 arrives in the queue and is dispatched to worker #1. - Another sync event for user #1 arrives, and since the queue is empty, it gets filled up with this user id #1. - And as soon as the queue has some data, it is dispatched to the currently available workers; if we have 2 workers in our setup, this is dispatched to worker #2. At this point we have 2 workers trying to sync the same user, which is a recipe for disaster. One way to work around this is to use the database locks provided by PostgreSQL. However, I didn’t want to go down this route if I could avoid it. ETS to store current workers At this point I contemplated using a simple ETS table to store {user_id, pid} when a worker starts and remove the entry when the worker ends. This would work. However, if a worker crashed after storing a user_id, it would permanently stop that user_id’s syncs from being processed. I would then have to monitor the workers and clean up stuff if something crashed. However, at this point I was leaning more towards using the Registry, as it does clean up the data associated with a process when it crashes. Using the Registry Using the Registry turned out to be much simpler than I thought.
I had to set up a supervisor for it in my Application supervisor(Registry, [:unique, Danny.SyncQueue.Worker.registry_name]), # create registry to keep track of current worker jobs Worker code before using Registry for locking def handle_events([uid], _from, state) do debug "syncing #{uid} on #{inspect self()}" UserSync.sync(uid) {:noreply, [], state} end Worker code after using Registry for locking @registry_name :worker_wip def registry_name, do: @registry_name def handle_events([uid], _from, state) do get_lock_or_re_nq(uid, fn -> debug "syncing #{uid} on #{inspect self()}" UserSync.sync(uid) end) {:noreply, [], state} end defp get_lock_or_re_nq(uid, fun) do # register ourselves under the uid key case Registry.register(@registry_name, uid, :ok) do {:ok, _} -> # this means, we are good to go ahead and do our processing fun.() Registry.unregister(@registry_name, uid) # unregister once our function is done {:error, {:already_registered, pid} } -> # someone else is already working with this uid # let us re-enqueue it so that it can be processed later debug("re nqing on account of another worker: #{inspect pid} processing this job. self: #{inspect self()}") spawn(fn -> :timer.sleep(:timer.seconds(1)) # sleep for a second before nqing, to avoid being quickly picked up by another worker Queue.nq([uid]) end) end end The process is pretty simple. I let the workers pick up whatever user_ids are available in the queue. However, when they start processing, I try to register the user_id in the Registry with the current process’s pid. If the user_id has already been registered by a different process, the Registry returns {:error, {:already_registered, pid}}, at which point I wait for a second and re-enqueue it. I do the waiting in a different process to avoid blocking a worker. I am close to the half-month mark and am hopefully done with the difficult product stuff. I’ll be spending more time trying to get feedback after releasing a private beta. Wish me luck!
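As a postscript, the crash-cleanup behaviour that made the Registry preferable to ETS can be demonstrated in a few lines of iex. This is a standalone sketch (DemoReg and the keys are made up, not code from the app):

```elixir
# A :unique Registry rejects a second registration under the same key,
# and automatically removes a key when the owning process dies.
{:ok, _} = Registry.start_link(keys: :unique, name: DemoReg)

{:ok, _} = Registry.register(DemoReg, "user-1", :ok)
{:error, {:already_registered, _pid}} = Registry.register(DemoReg, "user-1", :ok)

task = Task.async(fn -> Registry.register(DemoReg, "user-2", :ok) end)
Task.await(task)                         # the task process exits here...
Process.sleep(50)                        # ...give the Registry a moment to clean up
[] = Registry.lookup(DemoReg, "user-2")  # ...and its key is gone
```

With ETS, that last lookup would still return the stale entry, which is exactly the leak described above.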
Originally published at blog.12startupsin12months.in on June 22, 2017.
https://medium.com/@12startupsin12months/day-9-using-the-elixir-registry-to-perform-locking-of-resources-1db0e7848b0a
I2C at expansion board Hello there, im a bit confused about setting up the I2C Bus. I can use my sensor at Pin9 and Pin10 without any problems, but i cant manage to set up any other Pins for I2C. I read that this should be working but i couldnt get it done by now. I use FW 1.8.6 The Sensor seems not to exist when i connect it to any other Pins. (i.e. Pins 11 and 12) Actually i use a class to control my sensor import time import pycom from machine import I2C class DHT12(): def __init__(self,pinSDA='P9',pinSCL='P10',bd=100000): self.i2c = I2C(2, pins=(pinSDA,pinSCL)) self.i2c.init(I2C.MASTER, baudrate=bd) self.addr = self.i2c.scan() When i change pinSDA to P19 and pinSCL to P20 i get I2C bus error and a strange behaviour (image link). Thanks in advance for any help on this topic. - ledbelly2142 last edited by @lopyusr I apologize, I misunderstood. I provided too much info on something not your issue. I'm assuming you are using this driver and example: HERE @ledbelly2142 Im using a DHT12 temperature and humidity sensor. I dont get the point with that battery reading code. I just wonder why it is working on the Pins 9 and 10 but not at any others :/ FYI i use the Expansion board also. Best regards for your help.
- Xykon administrators last edited by @lopyusr said in I2C at expansion board: self.i2c = I2C(2, pins=(pinSDA,pinSCL)) self.i2c.init(I2C.MASTER, baudrate=bd) Please try the following instread: self.i2c = I2C(2, mode=I2C.MASTER, pins=(pinSDA,pinSCL),baudrate=bd) self.i2c.init(I2C.MASTER, baudrate=bd) Please also remember that I2C bus nr. 2 is emulated in software while I2C bus 0 and 1 are hardware I2C ports, so you might not get reliable communication at this speed when using bus nr. 2 - ledbelly2142 last edited by @lopyusr Try this before you class or def i2c = I2C(0, I2C.MASTER, baudrate=100000, pins=('P8','P9')) What are you init -ing? Or maybe try this def __init__(self, i2c, address=0x29): # where0x29 is your device I2C address self.i2c = i2c self._address = address
https://forum.pycom.io/topic/2318/i2c-at-expansion-board/1
Principles of Computer Systems Spring 2019 Stanford University Computer Science Department Lecturer: Chris Gregg Lecture 13: An Ice Cream Store - Today, we will focus on a single, rather involved program that demonstrates how threads can communicate with each other through the use of mutexes and semaphores (which, as you will recall, is based on a condition_variable_any). - We are going to discuss five primary ideas: - The binary lock - A generalized counter - A binary rendezvous - A generalized rendezvous - Layered construction - Using these ideas, we will construct an ice cream store simulation that involves customers, clerks, managers, and cashiers. The original program was created as a final exam question by Julie Zelenski in CS 107 when CS 107 also taught multithreading. - There is a handout with all of the code (on paper in class), which you can download here. - You can download the runnable code here. Lecture 13: An Ice Cream Store - We have already discussed binary locks in some detail - Using a mutex, we construct a single-owner lock. - When created, a mutex is unlocked and brackets critical regions of code that are matched with lock and unlock calls on the mutex. (We can also use a lock_guard<mutex> if the situation calls for it and our lock can go out of scope without further non-locked code being run). - This concurrency pattern locks down sole access to some shared state or resource that only one thread can be manipulating at a time. Lecture 13: An Ice Cream Store, idea 1: the binary lock
- As threads require a resource, they wait on the semaphore to transactionally consume it. - Other threads (possibly, but not necessarily, the same threads that consume) signal the semaphore when a new resource becomes available. - This sort of pattern is used to efficiently coordinate shared use of a limited resource that has a discrete quantity. It can also be used to limit throughput (such as in the Dining Philosophers problem) where unmitigated contention might otherwise lead to deadlock. Lecture 13: An Ice Cream Store, idea 2: a generalized counter - When we use a semaphore, we can coordinate cross-thread communication. - Suppose thread A needs to know when thread B has finished some task before it itself can progress any further. - Rather than having A repeatedly loop (i.e. busy-wait) and check some global state, a binary rendezvous can be used to foster communication between the two. - The rendezvous semaphore is initialized to 0. When thread A gets to the point that it needs to know that another thread has made enough progress, it can wait on the rendezvous semaphore. - After completing the necessary task, B will signal it. If A gets to the rendezvous point before B finishes the task, it will efficiently block until B's signal. If B finishes the task first, it signals the semaphore, recording that the task is done, and when A gets to the wait, it will sail right through it. - A binary rendezvous semaphore records the status of one event and only ever takes on the value 0 (not-yet-completed or completed-and-checked) and 1 (completed-but-not-yet-checked). - This concurrency pattern is sometimes used to wake up another thread (such as a disk-reading thread that should spring into action when a request comes in), or to coordinate two dependent actions (a print job request that can't complete until the paper is refilled), and so forth.
- If you need a bidirectional rendezvous where both threads need to wait for the other, you can add another semaphore in the reverse direction (e.g. the wait and signal calls inverted). - Be careful that both threads don’t try to wait for the other first and signal afterwards, else you can quickly arrive at deadlock! Lecture 13: An Ice Cream Store, idea 3: a binary rendezvous The generalized rendezvous is a combination of binary rendezvous and generalized counter, where a single semaphore is used to record how many times something has occurred. For example, if thread A spawned 5 thread Bs and needs to wait for all of them make a certain amount of progress before advancing, a generalized rendezvous might be used. The generalized rendezvous is initialized to 0. When A needs to sync up with the others, it will call wait on the semaphore in a loop, one time for each thread it is syncing up with. A doesn't care which specific thread of the group has finished, just that another has. If A gets to the rendezvous point before the threads have finished, it will block, waking to "count" each child as it signals and eventually move on when all dependent threads have checked back. If all the B threads finish before A arrives at the rendezvous point, it will quickly decrement the multiply-incremented semaphore, once for each thread, and move on without blocking. The current value of the generalized rendezvous semaphore gives you a count of the number of tasks that have completed that haven't yet been checked, and it will be somewhere between 0 and N at all times. The generalized rendezvous pattern is most often used to regroup after some divided task, such as waiting for several network requests to complete, or blocking until all pages in a print job have been printed. 
As with the generalized counter, it’s occasionally possible to use thread::join instead of semaphore::wait, but that requires that the child threads fully exit before the joining parent is notified, and that’s not always what you want (though if it is, then join is just fine). Lecture 13: An Ice Cream Store, idea 4: a generalized rendezvous - Once you have the basic patterns down, you can start to think about how mutexes and semaphores can be layered and grouped into more complex constructions. - Consider, for example, the constrained dining philosophers solution, in which a generalized counter is used to limit throughput and mutexes are used for each of the forks. - Another layered construct might be a global integer counter with a mutex lock and a binary rendezvous, which together can do something similar to a generalized rendezvous. As tasks complete, they can each lock and decrement the global counter, and when the counter gets to 0, a single signal to a rendezvous point can be sent by the last thread to finish. - The combination of mutex and binary rendezvous semaphore could be used to set up a "race": thread C waits for the first of threads A and B to signal. Threads A and B each compete to be the one who signals the rendezvous. Thread C expects exactly one signal, so the mutex is used to provide critical-region access so that only the first thread signals, but not the second. Lecture 13: An Ice Cream Store, idea 5: layered construction The program we will create simulates the daily activity in an ice cream store. The simulation’s actors are the clerks who make ice cream cones, the single manager who supervises, the customers who buy ice cream cones, and the single cashier who accepts payment from customers. A different thread is launched for each of these actors. Each customer orders a few ice cream cones, waits for them to be made, gets in line to pay, and then leaves.
Customers are in a big hurry and don’t want to wait for one clerk to make several cones, so each customer dispatches one clerk thread for each ice cream cone he/she orders. Once the customer has all of the ordered ice cream cones, he/she gets in line at the cashier and waits his/her turn. After paying, each customer leaves. Lecture 13: An Ice Cream Store: the simulation Each clerk thread makes exactly one ice cream cone. The clerk scoops up a cone and then has the manager take a look to make sure it is absolutely perfect. If the cone doesn't pass muster, it is thrown away and the clerk makes another. Once an ice cream cone is approved, the clerk hands the gem of an ice cream cone to the customer and is then done. The single manager sits idle until a clerk needs his or her freshly scooped ice cream cone inspected. When the manager hears of a request for an inspection, he/she determines if it passes and lets the clerk know how the cone fared. The manager is done when all cones have been approved. The customer checkout line must be maintained in FIFO order. After getting their cones, a customer "takes a number" to mark their place in the cashier queue. The cashier always processes customers from the queue in order. The cashier naps while there are no customers in line. When a customer is ready to pay, the cashier handles the bill. Once the bill is paid, the customer can leave. The cashier should handle the customers according to number. Once all customers have paid, the cashier is finished and leaves. Lecture 13: An Ice Cream Store: the simulation (continued) Let's look at the following in turn: random time/cone-perfection generation functions, struct inspection, struct checkout, customer, clerk, makeCone, manager, inspectCone, cashier, main Lecture 13: An Ice Cream Store: the simulation (continued) - Because we are modeling a "real" ice cream store, we want to randomize the times for each event. We also want to generate a boolean that says yay/nay about whether a cone is perfect.
The following functions accomplish this task: Lecture 13: An Ice Cream Store: random generation functions static mutex rgenLock; static RandomGenerator rgen; static unsigned int getNumCones() { lock_guard<mutex> lg(rgenLock); return rgen.getNextInt(kMinConeOrder, kMaxConeOrder); } static unsigned int getBrowseTime() { lock_guard<mutex> lg(rgenLock); return rgen.getNextInt(kMinBrowseTime, kMaxBrowseTime); } static unsigned int getPrepTime() { lock_guard<mutex> lg(rgenLock); return rgen.getNextInt(kMinPrepTime, kMaxPrepTime); } static unsigned int getInspectionTime() { lock_guard<mutex> lg(rgenLock); return rgen.getNextInt(kMinInspectionTime, kMaxInspectionTime); } static bool getInspectionOutcome() { lock_guard<mutex> lg(rgenLock); return rgen.getNextBool(kConeApprovalProbability); } - There are two global structs -- because threads share global address space, this is the easiest way to handle them. We could, of course, create the data structures in main and then pass them into each thread and function by reference or pointer, but this simplifies it (though it does pollute the global namespace). - The first struct we will look at is the inspection struct: Lecture 13: An Ice Cream Store: struct inspection struct inspection { mutex available; semaphore requested; semaphore finished; bool passed; } inspection; - This struct coordinates between the clerk and the manager. - The available mutex ensures the manager's undivided attention, so the single manager can only inspect one cone at a time. - The requested and finished semaphores coordinate a bi-directional rendezvous between the clerk and the manager. - The passed bool provides the approval for a single cone. - Note that we declare a variable of the struct right after the definition (line 6). This is the global variable (the struct definition itself, while global too, does not take up memory).
Lecture 13: An Ice Cream Store: struct checkout - The second struct we will look at is the checkout struct: struct checkout { checkout(): nextPlaceInLine(0) {} atomic<unsigned int> nextPlaceInLine; semaphore customers[kNumCustomers]; semaphore waitingCustomers; } checkout; - This struct coordinates between the customers and the cashier. - The nextPlaceInLine variable is a new, atomic variable that guarantees that ++ and -- work correctly without any data races. - The customers array-based queue of semaphores allows the cashier to tell each customer that they have paid. - The waitingCustomers semaphore informs the cashier that there are customers waiting to pay. - Again, we define the global variable checkout on line 6. Customers in our ice cream store order cones, browse while waiting for them to be made, then wait in line to pay, and then leave. The customer function handles all of the details of the customer's ice cream store visit: Lecture 13: An Ice Cream Store: customer function static void customer(unsigned int id, unsigned int numConesWanted) { // order phase vector<thread> clerks; for (unsigned int i = 0; i < numConesWanted; i++) clerks.push_back(thread(clerk, i, id)); browse(); for (thread& t: clerks) t.join(); // checkout phase int place; cout << oslock << "Customer " << id << " assumes position #" << (place = checkout.nextPlaceInLine++) << " at the checkout counter." << endl << osunlock; checkout.waitingCustomers.signal(); checkout.customers[place].wait(); cout << oslock << "Customer " << id << " has checked out and leaves the ice cream store." << endl << osunlock; } The customer needs one clerk for each cone. The customer browses and then must join all of the clerk threads before checking out. The customers line up by signaling for checkout. The customers wait in line until they are checked out. Note that the customer starts the clerk threads; clerks are not waiting around like the manager or cashier.
The browse function is straightforward: Lecture 13: An Ice Cream Store: browse function static void browse() { cout << oslock << "Customer starts to kill time." << endl << osunlock; unsigned int browseTime = getBrowseTime(); sleep_for(browseTime); cout << oslock << "Customer just killed " << double(browseTime)/1000 << " seconds." << endl << osunlock; } The sleep_for function pushes the thread off the processor, so it is not busy-waiting. A clerk has multiple duties: make a cone, then pass it to the manager and wait for it to be inspected, then check to see if the inspection passed, and if not, make another and repeat until a well-made cone passes inspection: Lecture 13: An Ice Cream Store: clerk function static void clerk(unsigned int coneID, unsigned int customerID) { bool success = false; while (!success) { makeCone(coneID, customerID); inspection.available.lock(); inspection.requested.signal(); inspection.finished.wait(); success = inspection.passed; inspection.available.unlock(); } } The clerk and the manager use the inspection struct to pass information -- note that there is only a single inspection struct, but that is okay because there is only one manager doing the inspecting. This does not mean that we can remove the available lock -- it is critical because there are many clerks trying to get the manager's attention. Note that we only acquire the lock after making the cone -- don't over-lock. Note also that we signal the manager that we have a cone ready for inspection -- this wakes up the manager if they are sleeping. If the manager is in the middle of an inspection, they will immediately go to the next cone after the inspection. The makeCone function is straightforward: Lecture 13: An Ice Cream Store: makeCone function static void makeCone(unsigned int coneID, unsigned int customerID) { cout << oslock << " Clerk starts to make ice cream cone #" << coneID << " for customer #" << customerID << "."
<< endl << osunlock; unsigned int prepTime = getPrepTime(); sleep_for(prepTime); cout << oslock << " Clerk just spent " << double(prepTime)/1000 << " seconds making ice cream cone #" << coneID << " for customer #" << customerID << "." << endl << osunlock; } The manager (somehow) starts out the day knowing how many cones they will have to approve (we could probably handle this with a global "all done!" flag). The manager waits around for a clerk to hand them a cone to inspect. For each cone that needs to be approved, the manager inspects the cone, then updates the number of cones approved (locally) if it passes. If it doesn't pass, the manager waits again. When the manager has passed all necessary cones, they go home. Lecture 13: An Ice Cream Store: manager function static void manager(unsigned int numConesNeeded) { unsigned int numConesAttempted = 0; // local variables secret to the manager, unsigned int numConesApproved = 0; // so no locks are needed while (numConesApproved < numConesNeeded) { inspection.requested.wait(); inspectCone(); inspection.finished.signal(); numConesAttempted++; if (inspection.passed) numConesApproved++; } cout << oslock << " Manager inspected a total of " << numConesAttempted << " ice cream cones before approving a total of " << numConesNeeded << "." << endl; cout << " Manager leaves the ice cream store." << endl << osunlock; } The manager signals the waiting clerk that the cone has been inspected (why can there only be one waiting clerk?) The inspectCone function updates the inspection struct: Lecture 13: An Ice Cream Store: inspectCone function static void inspectCone() { cout << oslock << " Manager is presented with an ice cream cone." << endl << osunlock; unsigned int inspectionTime = getInspectionTime(); sleep_for(inspectionTime); inspection.passed = getInspectionOutcome(); const char *verb = inspection.passed ?
"APPROVED" : "REJECTED"; cout << oslock << " Manager spent " << double(inspectionTime)/1000 << " seconds analyzing presented ice cream cone and " << verb << " it." << endl << osunlock; } Why aren't there any locks needed here? This is a global struct!? We must ensure that customers get handled in order (otherwise, chaos) -- not so for the clerks, who can fight for the manager's attention. Finally, we can look at the main function. The main function's job is to set up the customers, manager, and cashier. Why not the clerks? (they are set up in the customer function) Lecture 13: An Ice Cream Store: main function int main(int argc, const char *argv[]) { int totalConesOrdered = 0; thread customers[kNumCustomers]; for (unsigned int i = 0; i < kNumCustomers; i++) { int numConesWanted = getNumCones(); customers[i] = thread(customer, i, numConesWanted); totalConesOrdered += numConesWanted; } thread m(manager, totalConesOrdered); thread c(cashier); for (thread& customer: customers) customer.join(); c.join(); m.join(); return 0; } main must wait for all of the threads it created to join before exiting. Now we see how the manager and cashier know how many cones / customers there are -- we let everyone in at the beginning, ask them how many cones they want, and off we go. - There is a lot going on in this program! - Managing all of the threads, locking, waiting, etc., takes planing and foresight. - This isn't the only way to model the ice cream store - How would you modify the model? - What would we have to do if we wanted more than one manager? - Could we create multiple clerks in main, as well? (sure) - This example prepares us for the next idea: ThreadPool. - Our manager and cashier threads are just waiting around much of the time, but they are created before needing to do their work. - It does take time to spin up a thread, so if we have the threads already waiting, we can use them quickly. This is similar to farm, except that now, instead of processes, we have threads. 
Lecture 13: An Ice Cream Store: takeaways Lecture 13: An Ice Cream Store By Chris Gregg
https://slides.com/tofergregg/lecture-13-ice-cream-store
Despite all of the benefits that QML and Qt Quick offer, they can be challenging in certain situations. The following sections elaborate on some of the best practices that will help you get better results when developing applications. The built-in UI controls cater to the most common use cases without any change, and offer a lot more possibilities with their customization options. In particular, Qt Quick Controls 2 provides styling options that align with the latest UI design trends. Only if these UI controls do not satisfy your application's needs is it recommended to create a custom control. See QML Coding Conventions. Although Qt enables you to manipulate QML from C++, it is not recommended to do so. To explain why, let's take a look at a simplified example. Suppose we were writing the UI for a settings page: import QtQuick 2.13 import QtQuick.Controls 2.13 Page { Button { text: qsTr("Restore default settings") } } We want the button to do something in C++ when it is clicked. We know objects in QML can emit change signals just like they can in C++, so we give the button an objectName so that we can find it from C++: Button { objectName: "restoreDefaultsButton" text: qsTr("Restore default settings") } When sizing items managed by QtQuick.Layouts, prefer Layout.preferredWidth and Layout.preferredHeight. Note: Layouts and anchors are both types of objects that take more memory and instantiation time. Avoid using them (especially in list and table delegates, and styles for controls) when simple bindings to x, y, width, and height properties are enough. When declaring properties in QML, it's easy and convenient to use the "var" type: property var name property var size property var optionsMenu However, this approach has several disadvantages. Instead, always use the actual type where possible: property string name property int size property MyMenu optionsMenu For information on performance in QML and Qt Quick, see Performance Considerations And Suggestions.
For information on useful tools and utilities that make working with QML and Qt Quick easier, see Qt Quick Tools and Utilities. For information on Qt Quick's scene graph, see Qt Quick Scene Graph. Provide versions of images such as qt-logo.png for @2x, @3x, and @4x resolutions, enabling the application to cater to high-resolution displays. Qt automatically chooses the appropriate image that is suitable for the given display, provided the high DPI scaling feature is explicitly enabled. With this in place, your application's UI should scale depending on the display resolution on offer. © The Qt Company Ltd Licensed under the GNU Free Documentation License, Version 1.3.
https://docs.w3cub.com/qt~5.13/qtquick-bestpractices/
You may not have realized that some of the patterns that you’ve been using for ages in your C# programs are based on conventions, rather than on a specific API. What I mean by this is, some constructs in C# are based on some magical methods with well-defined names which are not defined in any base class or interface, yet just work. Let’s see what they are. You are probably used to iterating through collections using the foreach statement. If you are, you may know that foreach actually wraps a call to a method called GetEnumerator, like the one that is defined by the IEnumerable and IEnumerable<T> interfaces. Thus, you might think, the magic occurs because the collection implements one or the two, but you would be wrong: it turns out that in order to iterate through a class using foreach, all it takes is that the class exposes a public method called GetEnumerator that returns either an IEnumerator or an IEnumerator<T> instance. For example, this works: public class Enumerable { public IEnumerator GetEnumerator() { yield return 1; yield return 2; yield return 3; } } var e = new Enumerable(); foreach (int i in e) { /*...*/ } As you see, there is no need to implement any of these interfaces, but it is a good practice, for example, because it gives you access to LINQ extension methods. Tuples were introduced in C# 7. In a nutshell, they provide a way for us to return multiple values from a method: (int x, int y) GetPosition() { return (x: 10, y: 20); } Another option is to have a class deconstructed into a tuple.
Say, for example, that we have a class like this: public class Rectangle { public int Height { get; set; } public int Width { get; set; } } We can have it deconstructed into a tuple by providing one or more Deconstruct methods in this class: public void Deconstruct(out int h, out int w) { h = this.Height; w = this.Width; } Which allows you to write code like this: var rectangle = new Rectangle { Height = 10, Width = 20 }; var (h, w) = rectangle; You can implement multiple Deconstruct methods with different parameters, which must always be out. When you try to assign your class to a tuple, C# will try to find a Deconstruct method that matches the tuple’s declaration, or throw an exception if one cannot be found: public void Deconstruct(out int perimeter, out int area, out bool square) { perimeter = this.Width * 2 + this.Height * 2; area = this.Width * this.Height; square = this.Width == this.Height; } var (perimeter, area, square) = rectangle; Since C# 3, we have had a concise syntax for initializing collections: var strings = new List<string> { "A", "B", "C" }; The syntax to follow is an enumeration of items whose type matches the collection’s item type, inside curly braces, each separated by a comma. This is possible because there is a public Add method that takes a parameter of the appropriate type. What happens behind the scenes is that the Add method is called multiple times, one for each item inside the curly braces. Meaning, this works too: public class Collection : IEnumerable { public IEnumerator GetEnumerator() { /* ... */ } public void Add(string s) { /* ... */ } } Notice that this collection offers a public Add method and needs to implement either IEnumerable or IEnumerable<T>, which, mind you, do not define an Add method.
Having this, we can write: var col = new Collection { "A", "B", "C" }; The magical Add method can have multiple parameters, like for dictionaries: var dict = new Dictionary<string, int> { { "A", 1 }, { "B", 2 }, { "C", 3 } }; Each parameter will need to go inside its own set of curly braces. What’s even funnier is, you can mix different Add methods with different parameters: public void Add(int i) { /* ... */ } public void Add(string s) { /* ... */ } var col = new Collection { 1, 2, 3, "a", "b", "c" }; In this post I introduced the magical GetEnumerator, Deconstruct and Add methods. This was just for fun, the information here is probably useless, but, hey, it’s done!
https://weblogs.asp.net/ricardoperes/c-special-method-names
network_dict 0.1 network_dict creates a network subnet based dictionary that returns the most specific subnet(s) for a given IP. Summary network_dict creates a network subnet based dictionary that returns the most specific subnet(s) for a given IP. It will work equally with both IPv4 and IPv6. There’s a few more simple bells and whistles to make the library useful in different circumstances. This is a case where examples speak louder than words… Simple Example from network_dict import NetworkDict networks = { '0.0.0.0/0': 'Everything', '10.0.0.0/8': 'Office', '10.1.0.0/16': 'Region 1', '10.1.1.0/255.255.255.0': 'City 1' # Can take multiple netmask formats } nd = NetworkDict(networks) >>> nd['10.1.1.1'] 'City 1' >>> nd.firstHit = False # Return all matching values in a list # Results are in order, most to least specific >>> nd['10.1.1.1'] ['City 1', 'Region 1', 'Office', 'Everything'] >>> nd.format = 'both' # return both the subnet and value in a tuple >>> nd['10.1.1.1'] [('10.1.1.0/24', 'City 1'), ('10.1.0.0/16', 'Region 1'), ('10.0.0.0/8', 'Office'), ('0.0.0.0/0', 'Everything')] >>> nd.format = 'key' # return just the subnet address >>> nd['10.1.1.1'] ['10.1.1.0/24', '10.1.0.0/16', '10.0.0.0/8', '0.0.0.0/0'] Adding Subnets >>> nd['192.168.1.1'] ['0.0.0.0/0'] # If 0.0.0.0/0 is not set, will return KeyError exception >>> nd['192.168.1.1/16'] = 'Home' >>> nd['192.168.1.1'] ['192.168.0.0/16', '0.0.0.0/0'] # Note that the key was normalized to a proper subnet Hosts and /32 prefixes >>> nd['10.1.1.1'] = 'Router' >>> nd['10.1.1.1'] ['10.1.1.0/24', '10.1.0.0/16', '10.0.0.0/8', '0.0.0.0/0'] # Hosts are ignored by default >>> nd.ignoreHosts = False >>> nd['10.1.1.1'] = 'Router' >>> nd['10.1.1.1'] ['10.1.1.1/32', '10.1.1.0/24', '10.1.0.0/16', '10.0.0.0/8', '0.0.0.0/0'] IPv6 Subnets >>> nd['::1'] Traceback (most recent call last): File "<stdin>", line 1, in <module> KeyError: No matching networks found, and no default network set # we didn't set '::/0', which is different from the '0.0.0.0/0' which is
already set >>> nd['::1/128'] = 'Localhost' # Note: /128 is a hostmask, so will be ignored if ignoreHosts = True (default) >>> nd['::1'] ['::1/128'] Setting options at creation >>> nd = NetworkDict(format = 'both', firstHit = False, ignoreHosts = True) # Returns an empty NetworkDict object, but with default options set Requirements - Tested on python 2.7 - netaddr library Installation Via pip or easy_install $ sudo pip install network_dict # If you prefer pip $ sudo easy_install network_dict # If you prefer easy_install Manual installation $ git clone $ cd python-network_dict $ sudo python setup.py install Conditions of Use I wrote this library for my own use, but realized others may find it useful. Unfortunately I cannot guarantee any active support, but will do my best as time permits. That said, I’ll happily accept pull requests with suitable changes that address the general audience of this library. Put simply, use this at your own risk. If it works, great! If not, I may not be able to help you. If you fix anything, however, please push it back and I’ll likely accept it. :-) Also, if you use this library in your package, tool, or commercial software, let me know, and I’ll list it here! - Downloads (All Versions): - 3 downloads in the last day - 20 downloads in the last week - 74 downloads in the last month - Author: Michael Henry a.k.a. neoCrimeLabs - License: LGPLv2.1 - Categories - Package Index Owner: neoCrimeLabs - DOAP record: network_dict-0.1.xml
https://pypi.python.org/pypi/network_dict
memmove - Man Page copy bytes in memory with overlapping areas Prolog This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. Synopsis #include <string.h> void *memmove(void *s1, const void *s2, size_t n); Description The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1-2017 defers to the ISO C standard. The memmove() function shall copy n bytes from the object pointed to by s2 into the object pointed to by s1. Copying takes place as if the n bytes from the object pointed to by s2 are first copied into a temporary array of n bytes that does not overlap the objects pointed to by s1 and s2, and then the n bytes from the temporary array are copied into the object pointed to by s1. Return Value The memmove() function shall return s1; no return value is reserved to indicate an error. Errors No errors are defined. The following sections are informative. Examples None. Application Usage None. Rationale None. Future Directions None. See Also string.h(0p).
https://www.mankier.com/3p/memmove
On 12.05.2016 17:30, Michal Privoznik wrote: > On 12.05.2016 16:34, Peter Krempa wrote: >> On Thu, May 12, 2016 at 14:36:22 +0200, Michal Privoznik wrote: >>> The intent is that this library is going to be called every time >>> to check if we are not touching anything outside srcdir or >>> builddir. >>> >>> Signed-off-by: Michal Privoznik <mprivozn redhat com> >>> --- >>> cfg.mk | 2 +- >>> tests/Makefile.am | 13 +++- >>> tests/testutils.c | 9 +++ >>> tests/testutils.h | 10 +-- >>> tests/vircgroupmock.c | 15 ++--- >>> tests/virpcimock.c | 14 ++-- >>> tests/virtestmock.c | 175 ++++++++++++++++++++++++++++++++++++++++++++++++++ >>> 7 files changed, 210 insertions(+), 28 deletions(-) >>> create mode 100644 tests/virtestmock.c >>> >> >> [...] >> >>> diff --git a/tests/testutils.c b/tests/testutils.c >>> index 79d0763..595b64d 100644 >>> --- a/tests/testutils.c >>> +++ b/tests/testutils.c >> >> [...] >> >>> @@ -842,6 +845,12 @@ int virtTestMain(int argc, >>> char *oomstr; >>> #endif >>> >>> +#ifdef __linux__ >>> + VIRT_TEST_PRELOAD(TEST_MOCK); >> >> So I was thinking about it a bit. I think we should pre-load this only >> conditionally on a ENV var which will enable it. > > Yeah, I was thinking the same when implementing this. Problem with that > approach would be that nobody would do that. But I guess for now it's a > fair trade and once we get the whitelist rules complete we can make > 'make check' to actually set the variable and possibly die on an error > if the perl script founds one. Got any good idea about the var name? > What if I reuse VIR_TEST_FILE_ACCESS (introduced in 3/4) just for this > purpose, to enable this whole feature; and then introduce > VIR_TEST_FILE_ACCESS_OUTPUT to redirect output into a different file > than the default one. > If so, do you want me to send another version of these patches? I just realized, it's not going to be that easy. Problem is, my mock lib, implements both lstat and __lxstat, and stat and __xstat. 
Now, due to changes made to other mocks (i.e. virpcimock and vircgroupmock), without my library linked tests using the other mocks will just crash as soon as they try to stat(). So what I can do, is to suppress any output (and checking of accessed paths) until VIR_TEST_FILE_ACCESS var is set (or whatever name we decide on). Michal
https://listman.redhat.com/archives/libvir-list/2016-May/msg00921.html
Red Hat Bugzilla – Bug 547622 Review Request: python-cloudservers - Python bindings to the Rackspace Cloud Servers API Last modified: 2013-10-19 10:42:52 EDT Spec URL: SRPM URL: Description: This is a client for Rackspace's Cloud Servers API. There's a Python API (the ``cloudservers`` module), and a command-line script (``cloudservers``). Each implements 100% of the Rackspace API. First quick look reveals Package fails on rpmlint $ rpmlint python-cloudservers-1.0a3-1.fc12.src.rpm python-cloudservers.src: E: description-line-too-long ``cloudservers`` module), and a command-line script (``cloudservers``). Each implements 100% of the Rackspace API. 1 packages and 0 specfiles checked; 1 errors, 0 warnings. $ rpmlint python-cloudservers.spec 0 packages and 1 specfiles checked; 0 errors, 0 warnings. This package "BuildRequires" python-distribute … if not installed 'python setup.py build' installs it from pypi during the rpm build process due to: from distribute_setup import use_setuptools; use_setuptools() This is fine for Fedora 13+ (adding BuildRequires: python-setuptools which is actually distribute) but not for anything older. Would need to patch it to use setuptools (only for EPEL or Fedora < 13) if you want to maintain for older/existing releases. ----------------------------------------------------------------------------- Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.g0ymb2 + umask 022 + cd /home/wdierkes/packages/fedora_review/python-cloudservers/BUILD + cd jacobian-python-cloudservers-0c27daa + LANG=C + export LANG + unset DISPLAY + /usr/bin/python setup.py build Downloading Extracting in /tmp/tmpNcSdXS Now working in /tmp/tmpNcSdXS/distribute-0.6.8 Building a Distribute egg in /home/wdierkes/packages/fedora_review/python-cloudservers/BUILD/jacobian-python-cloudservers-0c27daa /home/wdierkes/packages/fedora_review/python-cloudservers/BUILD/jacobian-python-cloudservers-0c27daa/distribute-0.6.8-py2.6.egg ... 
There are additional errors from rpmlint on the packages: $ rpmlint -i RPMS/noarch/python-cloudservers-1.0a3-1.fc12.noarch.rpm python-cloudservers.noarch: E: explicit-lib-dependency python-httplib2 You must let rpm find the library dependencies by itself. Do not put unneeded explicit Requires: tags. python-cloudservers.src: E: description-line-too-long ``cloudservers`` module), and a command-line script (``cloudservers``). Each implements 100% of the Rackspace API. 1 packages and 0 specfiles checked; 1 errors, 0 warnings. python-cloudservers.noarch: E: version-control-internal-file /usr/share/doc/python-cloudservers-1.0a3/docs/.gitignore You have included file(s) internally used by a version control system in the package. Move these files out of the package and rebuild it. 1 packages and 0 specfiles checked; 3 errors, 0 warnings. I don't think this is going to fly: # The upstream download URL does not end with the tarball name # You can get this tarball by following a link from: # Source0: jacobian-python-cloudservers-0c27daa.tar.gz Being that you are the upstream maintainer it wouldn't be hard to fix. For one, the only download link I see from the github page is to 'download source', but I don't see where I can download specific versions of the source. I see a '1.0' tag, but not a download for 1.0a3. I would recommend making an official release tarball in a standard form (such as python-cloudservers-1.0a3.tar.gz) and making it available in a location that won't change.
%{python_sitelib}/python_cloudservers-1.0a3-py2.6.egg-info Should be: %{python_sitelib}/python_cloudservers-%{version}-py%{pyver}.egg-info For the %{pyver} macro, add the following at the top of the spec: %{!?pyver: %global pyver %(%{__python} -c "import sys ; print sys.version[:3]")} ---------- python-prettytable does not exist in Fedora. Please submit a package review for python-prettytable, and have it 'block' this bug. *** This bug has been marked as a duplicate of bug 542436 *** Sorry, this is not a duplicate... bug 542436 is for python-cloudfiles, not cloudservers. spec and srpm links are not valid. Hi, I am not a packager yet, but I'd love to see this package in fedora. I tried downloading the spec/srpm for review but seems like they are gone now. Anyone against the idea of me taking over this one (if the original submitter is not interested anymore, of course)? I'd have to start from scratch if no one has the previous versions, but that's not a problem. Never mind, this appears to be already in fedora 15: Name : python-cloudservers Arch : noarch Version : 1.2 Release : 3.fc15 Size : 36 k Repo : updates Summary : Client library for Rackspace's Cloud Servers API URL : License : BSD Description : This is a client for Rackspace's Cloud Servers API. There's a Python API (the : "cloudservers" module), and a command-line script ("cloudservers"). Each : implements 100% of the Rackspace API. Can someone close this ticket? (I'm not the requestor or assignee) *** This bug has been marked as a duplicate of bug 717680 ***
https://bugzilla.redhat.com/show_bug.cgi?id=547622
XLATE_PRO_NEXT_BLOCK(3E)

NAME
     xlate_pro_disk_next_block - get translation byte stream pointers

SYNOPSIS
     #include <elf.h>
     #include <libelf.h>
     #include <dwarf.h>
     #include <libdwarf.h>
     #include <cmplrs/xlate.h>
     #include <libXlate.h>

     int xlate_pro_disk_next_block(xlate_table_pro pro_table_ptr,
                                   char **data,
                                   Elf64_Xword *data_size);

DESCRIPTION
     This function gets pointers to the blocks making up the stream of data. The xlate functions do not write the stream to disk. Typically the transformation-tool will use libelf to write the bytes to disk.

     xlate_pro_disk_next_block gets the contents and size of the next block through the pointer arguments. pro_table_ptr must be a valid open producer translate table handle, and xlate_pro_disk_header must have been called to create the byte stream and count the number of blocks.

     It is essential that, if the data stream gets written to a data file (an Elf file) for later reading, the data stream be given a proper Elf d_align of 4 for a 32-bit stream and 8 for a 64-bit stream.

     data
          The pointed-at memory is set to a pointer to a set of bytes which form part of the translation table byte stream. The caller must free(3) the memory pointed to.

     data_size
          The pointed-at memory is set to the number of bytes in this block of the byte stream.

     For an example of use, see xlate_pro_disk_header(3) and libelfutil(5).

FILES
     /usr/include/libXlate.h
     /usr/include/cmplrs/xlate.h
     /usr/include/elf.h
     /usr/include/dwarf.h
     /usr/include/libdwarf.h
     /usr/lib/libelfutil.a

DIAGNOSTICS
     This function returns XLATE_TB_STATUS_NO_ERROR (0) on success. In case of error, a negative number is returned indicating the error, and nothing is returned through the pointer arguments. Error codes which may be returned:

     XLATE_TB_STATUS_INVALID_TABLE means that the table is not a valid open producer handle.
     XLATE_TB_STATUS_BLOCK_REQ_SEQ_ERR means that xlate_pro_disk_header has not been called yet; it must be called before calling xlate_pro_disk_next_block.

     XLATE_TB_STATUS_ALREADY_DONE means that xlate_pro_disk_next_block was called more times than it should have been since the last call to xlate_pro_disk_header.

     XLATE_TB_STATUS_ALLOC_FAIL means malloc failed trying to allocate memory for the stream bytes.

SEE ALSO
     libelfutil(5), xlate(4), xlate_pro_init(3), xlate_pro_finish(3), xlate_pro_disk_next_block(3)
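As the DESCRIPTION notes, the real worked example lives in xlate_pro_disk_header(3); the calling loop it implies looks roughly like the sketch below. This is illustrative only: block_count and write_block are hypothetical placeholders for the count returned by xlate_pro_disk_header and the caller's libelf output routine.

```
/* Sketch only: assumes pro_table_ptr is an open producer handle and
   xlate_pro_disk_header() has already been called, yielding block_count. */
char *data;
Elf64_Xword data_size;
Elf64_Xword i;
for (i = 0; i < block_count; i++) {
    int status = xlate_pro_disk_next_block(pro_table_ptr, &data, &data_size);
    if (status != XLATE_TB_STATUS_NO_ERROR) {
        /* negative status code: report the error and stop */
        break;
    }
    write_block(data, data_size);  /* e.g. hand the bytes to libelf */
    free(data);                    /* the caller owns each returned block */
}
```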
http://nixdoc.net/man-pages/IRIX/man3/xlate_pro_disk_next_block.3.html
CodePlex: Project Hosting for Open Source Software

I've implemented a filter for BBCodes as proposed in this issue. However, the filter is not getting registered (although it implements IHtmlFilter, and so also IDependency); it doesn't show up in the ctor of BodyPartDriver nor anywhere else (I tried a Controller ctor too, no luck: only the default BbcodeFilter is there). The module is enabled.

Strangely, it seems that (my) IHtmlFilters are not getting registered at all: I tried with an empty one (named TestFilter, so naming shouldn't be an issue) that was in the same file as the controller I called (and which works otherwise and gets any other dependencies correctly), but it wasn't registered either. I also tried placing the filter in the Services namespace (even the Services folder) of the module, even in the same namespace as the built-in BbcodeFilter class, but no luck. Markdown works... What could be the problem?

There is an IHtmlFilter for Markdown which works fine.

Just to eliminate possible oversights: is it decorated with an OrchardFeature attribute? Is the feature enabled?

Thanks both of you! It turns out I made a rookie mistake by not making the filter class public. Now everything works. I'm just tweaking the filter a bit and will release the module on the weekend.
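Putting the thread's conclusions together, the filter class needs to be public and (to be safe) decorated with the OrchardFeature attribute. A sketch along those lines is below; the namespace, feature name, and ProcessContent signature are assumptions based on Orchard 1.x conventions, so verify them against your Orchard version.

```csharp
// Sketch only: "MyModule.BBCode" is a hypothetical feature name, and the
// IHtmlFilter signature shown here is the Orchard 1.x one as far as I recall.
using Orchard.Environment.Extensions;
using Orchard.Services;

namespace MyModule.Services {
    [OrchardFeature("MyModule.BBCode")]
    public class BbCodeFilter : IHtmlFilter {  // must be public, or it is never registered
        public string ProcessContent(string text, string flavor) {
            // ... the BBCode-to-HTML transformation would go here ...
            return text;
        }
    }
}
```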
https://orchard.codeplex.com/discussions/278401
Kevon Hampton
14,136 Points

I'm not sure how to complete this. It says I need a space, but I included a space already.

    def ruby(rocks)
      puts "ruby" + "rocks"
    end

    puts "ruby" + "rocks"

3 Answers

Rogier Nitschelm
iOS Development Techdegree Student, 5,460 Points

There is no space in the code you have shown. But you could easily add a space:

    puts "ruby rocks"
    puts "ruby" + " rocks"

However, the function signature has a parameter (rocks), which suggests you have to do something with that. I cannot tell from your question what problem you're supposed to solve; however, I can imagine it could be something like:

    def ruby(rocks)
      puts "ruby #{rocks}"
    end

yk721,002 Points

Rewatch the video just before that challenge, at 1:35 - 1:55; it explains the how and why.

Kevon Hampton
14,136 Points

The question was: Using string concatenation, join the strings "Ruby" and "rocks!" together. Be sure to include a space between "Ruby" and "rocks!". Then print the result using puts. I still cannot figure out how to complete it.
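For the challenge quoted above, one concatenation that includes the required space would be the following (this assumes the grader only checks the printed output; other spacings such as "Ruby " + "rocks!" would print the same thing):

```ruby
# Join "Ruby" and "rocks!" with a space in between, then print the result.
greeting = "Ruby" + " " + "rocks!"
puts greeting  # → Ruby rocks!
```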
https://teamtreehouse.com/community/im-not-sure-how-to-complete-this
JetGroovy is giving an incorrect "Cannot assign 'User' to 'Task'" error in one of my domain class methods:

    Task clone() {
        Task newTask = new Task()
        newTask.name = name
        newTask.notes = notes
        newTask.action = action
        newTask.valid = valid
        newTask.personal = personal
        newTask.givens = (byte[]) givens.clone()
        newTask.givensMode = givensMode
        newTask.task = (byte[]) task.clone()
        newTask.taskMode = taskMode
        newTask.owner = owner
        // ^^^^^^^^^^^^^^^^^^^ "Cannot assign 'User' to 'Task'"
    }

Randall Schulz

Hehe, everything is correct. You are not returning newTask from clone(), thus an implicit return of the last expression is done, which is of type User.

Eugene.

Still, the diagnostics could be better indeed.

When will I ever stop using copy-and-modify... I took a clone() method from another somewhat similar domain class but forgot the return statement...

RRS

Well, bugs like this reassure me that our inspections are not useless :)

Eugene.

To be sure. But please keep in mind the point I've made a couple of times previously: a huge number of "cannot determine type statically" diagnostics create so much "noise" that people tend to miss the helpful ones.

Randall Schulz

But these are not even warnings! Are you confused by the fact that those are included in F2 navigation? As for underlining, you can change the visual presentation to whatever you like in "Colors and Fonts/Groovy".

I don't think I'm confused at all. Are you missing my point? A lot of repeated warnings about things that are ordinary, everyday, inevitable states of affairs in a Groovy-based language like GSP are counterproductive. Their mere presence and the fact that they tell me nothing useful while producing a flurry of tool-tips as I move the mouse over my code makes them a 100% detriment to productivity.

Randall Schulz

No, I see your point; hope you see mine. These are not warnings, these are just visual clues to where you should be more cautious, because we are unable to validate it.
If you think tooltips are getting in your way, well, we may try turning them off. But I don't think the feature as a whole must be removed, even in GSP.

Eugene.

I'm certainly not asking for things to be permanently removed or "turned off," but rather that much more refinement in the analysis is needed. This seems to me to be entirely consistent with the IDEA philosophy. It's just that Groovy and Grails rely far more on dynamic typing, and this necessarily demands a more sophisticated approach to generating heuristic diagnostics.

Randall Schulz

I have a similar problem. I have code such as:

    def foo = { println "bar" }

The editor will highlight the closure with a warning of "Cannot assign 'groovy.lang.Closure' to 'java.lang.Object'". Also, I cannot create a field of type Closure. Even after the import of groovy.lang.Closure, the editor marks it as an error saying that it "cannot resolve symbol Closure".

This appears to have been an edge case related to IDEA's module management. One module had mistakenly included the same content root as the second module. When working in the second module, even though it had its dependencies correctly set up, it was using the dependencies from the first module, and ignoring the Groovy imports, or something like that. Sorry for the turbulence.

Eugene,

Are you referring to the "Untyped member access" setting in Groovy Colors & Fonts? It looks like this setting allows control over the behavior; however, the ("informational") error stripe is always drawn, even when not enabled for "Untyped member access". Seems like a bug to me that should go to IDEA core...

Eugene,
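For reference, the fix acknowledged earlier in the thread is simply an explicit return at the end of clone(); with the property copies elided, it looks like this:

```groovy
Task clone() {
    Task newTask = new Task()
    newTask.name = name
    // ... copy the remaining properties as before ...
    newTask.owner = owner
    return newTask  // without this, the last expression (owner, a User) is returned
}
```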
https://intellij-support.jetbrains.com/hc/en-us/community/posts/205998779-Incorrect-Cannot-assign-to-Diagnostic
Azure Files scalability and performance targets

Azure Files offers fully managed file shares in the cloud that are accessible via the SMB and NFS file system protocols. This article discusses the scalability and performance targets for Azure Files and Azure File Sync.

The scalability and performance targets listed here are high-end targets, but may be affected by other variables in your deployment. For example, the throughput for a file may also be limited by your available network bandwidth, not just the servers hosting your Azure file shares. We strongly recommend testing your usage pattern to determine whether the scalability and performance of Azure Files meet your requirements. We are also committed to increasing these limits over time.

Applies to

Azure Files scale targets

Azure file shares are deployed into storage accounts, which are top-level objects that represent a shared pool of storage. This pool of storage can be used to deploy multiple file shares. There are therefore three categories to consider: storage accounts, Azure file shares, and files.

Storage account scale targets

1 General-purpose version 2 storage accounts support higher capacity limits and higher limits for ingress by request. To request an increase in account limits, contact Azure Support.

Azure file share scale targets

1 The limits for standard file shares apply to all three of the tiers available for standard file shares: transaction optimized, hot, and cool.
2 Default on standard file shares is 5 TiB; see Create an Azure file share for details on how to create file shares with 100 TiB size and increase existing standard file shares up to 100 TiB.

File scale targets

1 Applies to read and write IOs (typically smaller IO sizes less than or equal to 64 KiB). Metadata operations, other than reads and writes, may be lower.
2 Subject to machine network limits, available bandwidth, IO sizes, queue depth, and other factors. For details see SMB Multichannel performance.
Azure File Sync scale targets

The following table indicates the boundaries of Microsoft's testing and also indicates which targets are hard limits:

Note: An Azure File Sync endpoint can scale up to the size of an Azure file share. If the Azure file share size limit is reached, sync will not be able to operate.

Azure File Sync performance metrics

Since the Azure File Sync agent runs on a Windows Server machine that connects to the Azure file shares, the effective sync performance depends upon a number of factors in your infrastructure: Windows Server and the underlying disk configuration, network bandwidth between the server and the Azure storage, file size, total dataset size, and the activity on the dataset. Since Azure File Sync works at the file level, the performance characteristics of an Azure File Sync-based solution are better measured in the number of objects (files and directories) processed per second.

For Azure File Sync, performance is critical in two stages:

- Initial one-time provisioning: To optimize performance on initial provisioning, refer to Onboarding with Azure File Sync for the optimal deployment details.
- Ongoing sync: After the data is initially seeded in the Azure file shares, Azure File Sync keeps multiple endpoints in sync.

To help you plan your deployment for each of the stages, below are the results observed during internal testing.

Initial one-time provisioning

Initial cloud change enumeration: When a new sync group is created, initial cloud change enumeration is the first step that will execute. In this process, the system will enumerate all the items in the Azure file share. During this process, there will be no sync activity, i.e. no items will be downloaded from the cloud endpoint to the server endpoint and no items will be uploaded from the server endpoint to the cloud endpoint. Sync activity will resume once initial cloud change enumeration completes. The rate of performance is 20 objects per second.
Customers can estimate the time it will take to complete initial cloud change enumeration by determining the number of items in the cloud share and using the following formula to get the time in days:

Time (in days) for initial cloud enumeration = (Number of objects in cloud endpoint) / (20 * 60 * 60 * 24)

Initial sync of data from Windows Server to Azure file share: Many Azure File Sync deployments start with an empty Azure file share because all the data is on the Windows Server. In these cases, the initial cloud change enumeration is fast and the majority of time will be spent syncing changes from the Windows Server into the Azure file share(s). While sync uploads data to the Azure file share, there is no downtime on the local file server, and administrators can set up network limits to restrict the amount of bandwidth used for background data upload.

Initial sync is typically limited by the initial upload rate of 20 files per second per sync group. Customers can estimate the time to upload all their data to Azure using the following formula to get the time in days:

Time (in days) for uploading files to a sync group = (Number of objects in server endpoint) / (20 * 60 * 60 * 24)

Splitting your data into multiple server endpoints and sync groups can speed up this initial data upload, because the upload can be done in parallel for multiple sync groups at a rate of 20 items per second each. So, two sync groups would be running at a combined rate of 40 items per second. The total time to complete would be the time estimate for the sync group with the most files to sync.

Namespace download throughput

When a new server endpoint is added to an existing sync group, the Azure File Sync agent does not download any of the file content from the cloud endpoint. It first syncs the full namespace and then triggers background recall to download the files, either in their entirety or, if cloud tiering is enabled, to the cloud tiering policy set on the server endpoint.
*If cloud tiering is enabled, you are likely to observe better performance as only some of the file data is downloaded. Azure File Sync only downloads the data of cached files when they are changed on any of the endpoints. For any tiered or newly created files, the agent does not download the file data, and instead only syncs the namespace to all the server endpoints. The agent also supports partial downloads of tiered files as they are accessed by the user. Note The numbers above are not an indication of the performance that you will experience. The actual performance will depend on multiple factors as outlined in the beginning of this section. As a general guide for your deployment, you should keep a few things in mind: - The object throughput approximately scales in proportion to the number of sync groups on the server. Splitting data into multiple sync groups on a server yields better throughput, which is also limited by the server and network. - The object throughput is inversely proportional to the MiB per second throughput. For smaller files, you will experience higher throughput in terms of the number of objects processed per second, but lower MiB per second throughput. Conversely, for larger files, you will get fewer objects processed per second, but higher MiB per second throughput. The MiB per second throughput is limited by the Azure Files scale targets.
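The two estimation formulas above can be wrapped in a small helper. The 20-objects-per-second rate comes straight from the text; as the article itself stresses, treat the result as a rough planning estimate, not a guarantee.

```python
def estimated_days(num_objects, objects_per_second=20):
    """Days to enumerate or upload num_objects at the documented
    per-sync-group rate of ~20 objects per second."""
    return num_objects / (objects_per_second * 60 * 60 * 24)

# e.g. 3,456,000 objects at 20 objects/second is about 2 days
print(estimated_days(3_456_000))  # → 2.0
```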
https://docs.microsoft.com/en-us/azure/storage/files/storage-files-scale-targets
Push Notifications in Ionic Apps with Google Cloud Messaging

Updated on January 13th to reflect comments and changes with the Google Cloud Messaging Platform.

In this tutorial we're going to look at how to implement push notifications in an Android app using the Google Cloud Messaging Platform. We will be using the Ionic framework and Cordova to create the app. The server component will be implemented using Node. You can check out all the code for the project that we will be building from Github.

Create a Google Console Project

The first step is to create a new project on the Google Developer Console. You will need an account if you don't have one. Once the project is created, click on the APIs & auth menu found on the left side of the screen and then Credentials. This allows you to create a new API key that you can use for the server. Click on Add credentials and select API key. Next you will be asked what kind of key you want to create. Select server key since the key will be primarily used in the server. Don't add an IP address yet; skip that step by not adding anything and click the create button. Note the resulting API key.

Still under the APIs & auth menu, click on the APIs link and search for Cloud Messaging for Android. This is the API needed; click and enable it.

Setting Up

Create a new Ionic app that uses the blank template:

ionic start ionic-gcm blank

Move into the project directory and install Phonegap Build's Push Plugin. This is used by the app to register the device and to receive push notifications sent by the server.

cordova plugin add

At the time of writing, sound doesn't work when a notification is received. The default is to vibrate, even if you've set the phone to use sounds. If you want to play a sound, you have to change the GCMIntentService.java file. You can find it on the following path:

plugins/com.phonegap.plugins.PushPlugin/src/android/com/plugin/gcm

Open the file and add the following on lines 12 and 13:

import android.content.res.Resources;
import android.net.Uri;
import android.content.res.Resources; import android.net.Uri; And from lines 125 to 131. String soundName = extras.getString("sound"); if (soundName != null) { Resources r = getResources(); int resourceId = r.getIdentifier(soundName, "raw", context.getPackageName()); Uri soundUri = Uri.parse("android.resource://" + context.getPackageName() + "/" + resourceId); mBuilder.setSound(soundUri); } Remove line 105: .setDefaults(defaults) You can look at this pull request for reference. If you have already added the android platform, you might need to update the corresponding file at platforms/android/src/com/plugin/gcm/GCMIntentService.java with the same changes. Aside from the Push Plugin, we need to install the cordova whitelist plugin. cordova plugin add cordova-plugin-whitelist This activates the settings included in the config.xml file which can be found in the root directory of the Ionic project. By default, it allows access to every server. If you're planning to deploy this app later, you should to update the following line to match only those servers that your app is communicating with. This improves the security of the app. <access origin="*"/> Building the Project Now we can start to build the project. Requests Service Create a services folder under the www/js directory and create a RequestsService.js file. This will be the service that passes the device token to the server. The device token is needed to send push notifications to a specific device. Add the following to the file. 
(function(){

  angular.module('starter')
    .service('RequestsService', ['$http', '$q', '$ionicLoading', RequestsService]);

  function RequestsService($http, $q, $ionicLoading){

    var base_url = 'http://{YOUR SERVER}';

    function register(device_token){
      var deferred = $q.defer();
      $ionicLoading.show();
      $http.post(base_url + '/register', {'device_token': device_token})
        .success(function(response){
          $ionicLoading.hide();
          deferred.resolve(response);
        })
        .error(function(data){
          deferred.reject();
        });
      return deferred.promise;
    };

    return {
      register: register
    };
  }

})();

Breaking the code down. First we wrap everything in an immediately invoked function expression. This allows encapsulation of the code that's contained inside and avoids polluting the global scope.

(function(){

})();

Next we create a new angular service for the starter module. This module was created in the js/app.js file and was called starter, so it's referred to here. The service depends on $http, $q, and $ionicLoading; these are services provided by Angular and Ionic.

angular.module('starter')
  .service('RequestsService', ['$http', '$q', '$ionicLoading', RequestsService]);

function RequestsService($http, $q, $ionicLoading){
  ...
}

Inside the RequestsService function, declare the base_url used as the base URL for making requests to the server. It should be an internet-accessible URL so that the app can make requests to it.

var base_url = 'http://{YOUR SERVER}';

If you do not have a server where you can run the server component, you can use ngrok to expose any port on your localhost to the internet. Download the version for your platform from the project downloads page, extract and run it. I'll show you how to run ngrok later once we get to the server part.
Returning to the code. We create a register function that will make a POST request to the server to register the device using the device token from the push plugin.

function register(device_token){
  var deferred = $q.defer(); // run the function asynchronously
  $ionicLoading.show(); // show the ionic loader animation

  // make a POST request to the /register path and submit the device_token as data
  $http.post(base_url + '/register', {'device_token': device_token})
    .success(function(response){
      $ionicLoading.hide(); // hide the ionic loader
      deferred.resolve(response);
    })
    .error(function(data){
      deferred.reject();
    });

  return deferred.promise; // return the result once the POST request returns a response
};

Finally we use the revealing module pattern to expose the register method as a public method of RequestsService.

return {
  register: register
};

Add the javascript to index.html right after the link to the app.js file:

<script src="js/services/RequestsService.js"></script>

Open the plugins/com.phonegap.plugins.PushPlugin/www directory and copy the PushNotification.js file to the www/js folder. Add a link to it in the index.html file right after the link to the css/style.css file:

<script type="text/javascript" charset="utf-8" src="js/PushNotification.js"></script>

Registering the Device

In the app.js file, create a global variable for the Push Plugin. Add this code just after the closing brace of the $ionicPlatform.ready function:

pushNotification = window.plugins.pushNotification;

To register the device, call the register function. This function accepts 3 arguments. First is the function executed once a notification is received, second is the function executed if an error occurred, and third is the options. The options is an object where you specify configuration for the push notification that will be received. From the code below, you can see the badge (the icon in the notification), sound and alert (the text in the notification) options. The ecb is the event callback function that gets executed every time a notification is received. This is the same function used in the first argument.
Lastly, the senderID is the project number of the project that you created earlier on the Google Console. You can find it by clicking on the Overview menu of your project.

Add this code to app.js:

pushNotification.register(
  onNotification,
  errorHandler,
  {
    'badge': 'true',
    'sound': 'true',
    'alert': 'true',
    'ecb': 'onNotification',
    'senderID': 'YOUR GOOGLE CONSOLE PROJECT NUMBER'
  }
);

Receiving Notifications

The onNotification function should be attached to the window object so that the plugin can find it. The argument passed to this function is the actual notification. You can check which type of notification event occurred by extracting the event property. This can have 3 possible values: registered, message, and error. The registered event is fired when the device is registered, the message event when a push notification is received while the app is in the foreground, and the error event when an error occurred.

When the registered event is fired, check if the length of regid is greater than 0. If it is, assume that a correct device token has been returned.
Call the register function in the RequestsService service and pass the device_token as an argument. Once it returns a response, inform the user that the device has been registered.

Add this code to app.js:

window.onNotification = function(e){
  switch(e.event){
    case 'registered':
      if(e.regid.length > 0){
        var device_token = e.regid;
        RequestsService.register(device_token).then(function(response){
          alert('registered!');
        });
      }
      break;
    case 'message':
      alert('msg received: ' + e.message);
      /*
      {
        "message": "Hello this is a push notification",
        "payload": {
          "message": "Hello this is a push notification",
          "sound": "notification",
          "title": "New Message",
          "from": "813xxxxxxx",
          "collapse_key": "do_not_collapse",
          "foreground": true,
          "event": "message"
        }
      }
      */
      break;
    case 'error':
      alert('error occured');
      break;
  }
};

When an error occurs, notify the user using an alert message. Add this code to app.js:

window.errorHandler = function(error){
  alert('an error occured');
}

Playing Sounds

If you want to play a sound when a notification is received, we need to add the mp3 file inside the platforms/android/res/raw folder. Mine is named notification.mp3. Take note of the file name; we will add it on the server side later when pushing a notification. You can download some notification sounds here.

Server Side

The server is responsible for accepting the device token submitted from the app as well as sending push notifications. Create a server folder inside the root directory of the project, then create an ionic-gcm.js file. This is the file that will contain the code for running a node server. This file has three dependencies: express, node-gcm and body-parser. Install those using npm:

npm install express node-gcm body-parser

Open the ionic-gcm.js file and require those dependencies:

var express = require('express');
var gcm = require('node-gcm');

Use express:

var app = express();
Create a server that listens for requests on port 3000:

var server = app.listen(3000, function(){
  console.log('server is running');
});

Set the server to allow all request origins. This allows AJAX requests coming from any IP address.

app.use(function(req, res, next){
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
  next();
});

The body-parser package installed earlier also has to be enabled; without it, req.body will be undefined when the /register route reads the device token.

app.use(require('body-parser').json());

Create a global variable for storing the current device token. This will be updated when the /register route is accessed, and the current value will be used when the /push route is accessed.

var device_token;

Create a new route for the /register path. This is the route for device registration. You can access the device_token that was passed through the body property of the req object. Note that I haven't added any code related to saving the device token into a database. I expect you to have your own, so I put it there as a TODO. For now it uses the global device_token variable for storing the device token. Once you've saved the device_token, send ok as a response.

app.post('/register', function(req, res){
  device_token = req.body.device_token;
  console.log('device token received');
  console.log(device_token);

  /* YOUR TODO: save the device_token into your database */

  res.send('ok');
});

To send push notifications:

- Create a route for the /push path.
- In the callback function, create a new array that will store the device tokens and a variable that stores the number of times to retry sending the message.
- Create a new sender instance by calling the Sender function. This accepts the API key from the Google console.
- Create a new message by calling the Message function in the gcm object. This object is provided by the node-gcm package.
- Add the data to be passed in the push notification by calling the addData function of the message. This function accepts a key-value pair of the data to pass. The required keys are title and message. The title is the title of the push notification and the message is the content.
In the example below, the sound key is also passed. This is the name of the sound file you want to play when the notification is received.

- Optionally, you can set the collapseKey to group notifications. This lets you send notification #1 with a collapseKey and then, minutes later, notification #2 with the same collapseKey. What will happen is that notification #2 will replace notification #1, that is, if the user still hasn't opened notification #1 when notification #2 arrives.
- delayWhileIdle is another optional property; if this is set to true, the notification isn't sent immediately if the device is idle. This means that the server waits for devices to become active before it sends the notification. Note that if the collapseKey is set, the server will only send the latest message containing that collapseKey.
- Finally there's timeToLive, which allows you to set the number of seconds that the message will be kept on the server when the receiving device is offline. If you specify this property, you also need to specify the collapseKey.
- This is another step that I expect you to implement on your own: fetching the device_token from the database. In order to do that, you need to pass a user_id or other unique user identification. This would allow you to fetch the device_token by using that unique data as a key. In the example below, the value of the global device_token variable is used instead, so every time a new device is registered, that device will be the one that receives the notification.
- Push the device_token which you got from the database into the device_tokens array.
- Call the send function on the sender object. This accepts the message, the device_tokens, the retry_times and the function to call once the message is sent.
- Send the ok response.
app.get('/push', function(req, res){

  var device_tokens = []; // create array for storing device tokens
  var retry_times = 4; // the number of times to retry sending the message if it fails

  var sender = new gcm.Sender('THE API KEY OF YOUR GOOGLE CONSOLE PROJECT'); // create a new sender
  var message = new gcm.Message(); // create a new message

  message.addData('title', 'New Message');
  message.addData('message', 'Hello this is a push notification');
  message.addData('sound', 'notification');
  message.collapseKey = 'testing'; // grouping messages
  message.delayWhileIdle = true; // delay sending while receiving device is offline
  message.timeToLive = 3; // the number of seconds to keep the message on the server if the device is offline

  /* YOUR TODO: add code for fetching device_token from the database */
  device_tokens.push(device_token);

  sender.send(message, device_tokens, retry_times, function(result){
    console.log(result);
    console.log('push sent to: ' + device_token);
  });

  res.send('ok');
});

Run the server by calling it from the terminal:

node ionic-gcm.js

If you want to use ngrok to expose this server to the internet, open a new terminal window where you installed ngrok and execute the following command:

ngrok http 3000

This tells ngrok to expose port 3000 to the internet and assigns it a publicly accessible URL.

Deploying the App

Returning to the app. Navigate to the root directory of the app and update base_url in www/js/services/RequestsService.js to match the URL that ngrok provided.

var base_url = 'http://{YOUR SERVER}';

Add the android platform:

cordova platform add android

Don't forget to add the mp3 file to the platforms/android/res/raw directory for the sound to work. Build the app by executing the following command:

cordova build android

Once that's complete, navigate to the platforms/android/build/outputs/apk/ directory and copy the android-debug.apk file to your Android device. Install it and open it. Once it's opened, it should forward the device token to the server.
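The two "YOUR TODO" placeholders above could be prototyped with a simple in-memory store before wiring up a real database. This is a hypothetical sketch: keying by user_id is an assumption (the tutorial's server only tracks a single token), and a Map is no substitute for persistent storage.

```javascript
// Minimal stand-in for the database mentioned in the TODOs.
const tokensByUser = new Map();

function saveToken(userId, deviceToken) {
  tokensByUser.set(userId, deviceToken);
}

function getToken(userId) {
  return tokensByUser.get(userId); // undefined if the user never registered
}

// e.g. inside /register: saveToken(req.body.user_id, req.body.device_token)
saveToken('user-1', 'sjdlf0ojw3o');
console.log(getToken('user-1')); // → sjdlf0ojw3o
```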
It should show something similar to the following in the terminal window where you executed node ionic-gcm.js:

device token received
sjdlf0ojw3ofjowejfowefnosfjlsjfosnf302r3n2on3fon3flsnflsfns0f9un

You can now test the push notification by opening the following local URL in a browser:

Your device should receive a push notification at this point. Here's a screenshot.

Conclusion

That's it! In this tutorial, you've learned how to work with the Google Cloud Messaging Platform for Android in order to send push notifications to a Cordova app. If you have any questions, comments or problems then please let me know in the comments below.
https://www.sitepoint.com/push-notifications-in-ionic-apps-with-google-cloud-messaging/
Opened 7 months ago
Closed 6 months ago
Last modified 6 months ago

#30479 closed Bug (fixed)

Autoreloader with StatReloader doesn't track changes in manage.py.

Description (last modified by)

This is a bit convoluted, but here we go. Environment (OSX 10.11):

$ python -V
Python 3.6.2
$ pip -V
pip 19.1.1
$ pip install Django==2.2.1

Steps to reproduce:

- Run a server: python manage.py runserver
- Edit the manage.py file, e.g. add a print():

def main():
    print('sth')
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'ticket_30479.settings')
    ...

Under 2.1.8 (and prior), this will trigger the auto-reloading mechanism. Under 2.2.1, it won't. As far as I can tell from the django.utils.autoreload log lines, it never sees manage.py itself.

Change History (10)

comment:1 Changed 7 months ago by

comment:2 Changed 7 months ago by

Argh. I guess this is because manage.py isn't showing up in sys.modules. I'm not sure I remember any specific manage.py handling in the old implementation, so I'm not sure how it used to work, but I should be able to fix this pretty easily.

comment:3 Changed 7 months ago by

Done a touch of debugging: iter_modules_and_files is where it gets lost. Specifically, it ends up in there twice:

(<module '__future__' from '/../lib/python3.6/__future__.py'>, <module '__main__' from 'manage.py'>, <module '__main__' from 'manage.py'>, ...,)

But getattr(module, "__spec__", None) is None is True, so it continues onwards. I thought I managed to get one of them to have a __spec__ attr but no has_location, but I can't seem to get that again (stepping around with pdb).

Digging into why __spec__ is None: here are the Python 3 docs on it, which helpfully mention that "The one exception is __main__, where __spec__ is set to None in some cases".

comment:4 Changed 7 months ago by

Tom, will you have time to work on this in the next few days?
comment:5 Changed 7 months ago by

I'm sorry for assigning it to myself, Mariusz; I intended to work on it on Tuesday but work overtook me, and now I am travelling for a wedding this weekend. So I doubt it, I'm afraid. It seems Keryn's debugging is a great help; it should be somewhat simple to add special-case handling for __main__: while __spec__ is None, we can still get the filename and watch that.

comment:6 Changed 7 months ago by

np, Tom, thanks for the info. Keryn, it looks like you've already done most of the work. Would you like to prepare a patch?

Thanks for the report. I simplified the scenario. Regression in c8720e7696ca41f3262d5369365cc1bd72a216ca. Reproduced at 8d010f39869f107820421631111417298d1c5bb9.
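The special-case handling Tom describes can be sketched as follows. This is an illustration of the idea only, not Django's actual patch: when a module's __spec__ is None but it is __main__ (a script run directly, like manage.py), fall back to its __file__ so the reloader still watches it.

```python
def watched_files(modules):
    """Yield filenames the autoreloader should watch (illustrative sketch)."""
    for module in modules:
        spec = getattr(module, "__spec__", None)
        if spec is not None and getattr(spec, "has_location", False):
            # Normally imported modules carry their origin on the spec.
            yield spec.origin
        elif getattr(module, "__name__", None) == "__main__":
            # Scripts run directly (e.g. manage.py) have __spec__ set to
            # None, but __file__ still points at the script on disk.
            filename = getattr(module, "__file__", None)
            if filename is not None:
                yield filename
```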
https://code.djangoproject.com/ticket/30479
This is a C program to find prime numbers in a given range. The program takes the range, finds all the prime numbers within the range, and also prints the number of prime numbers.

1. Take the range of numbers between which you have to find the prime numbers as input.
2. Check for prime numbers only among the odd numbers in the range.
3. Also check if the odd numbers are divisible by any of the natural numbers starting from 2.
4. Print the prime numbers and their count.
5. Exit.

Here is the source code of the C program to calculate the prime numbers in a given range. The C program is successfully compiled and run on a Linux system. The program output is also shown below.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num1, num2, i, j, flag, temp, count = 0;

    printf("Enter the value of num1 and num2 \n");
    scanf("%d %d", &num1, &num2);
    if (num2 < 2)
    {
        printf("There are no primes upto %d\n", num2);
        exit(0);
    }
    printf("Prime numbers are \n");
    temp = num1;
    if (num1 % 2 == 0)
    {
        num1++;
    }
    for (i = num1; i <= num2; i = i + 2)
    {
        flag = 0;
        for (j = 2; j <= i / 2; j++)
        {
            if ((i % j) == 0)
            {
                flag = 1;
                break;
            }
        }
        if (flag == 0)
        {
            printf("%d\n", i);
            count++;
        }
    }
    printf("Number of primes between %d & %d = %d\n", temp, num2, count);
    return 0;
}

1. The user must enter the range as input; it is stored in the variables num1 and num2 respectively.
2. Initially check whether num2 is less than 2. If it is, print the output "there are no prime numbers".
3. If it is not, check whether num1 is even. If it is even, make it odd by incrementing num1 by 1.
4. Using a for loop from num1 to num2, check whether the current number is divisible by any of the natural numbers starting from 2; use another for loop to do this. Increment the first for loop by 2, so as to check only the odd numbers.
5. Firstly, initialize the variables flag and count to zero.
6. Use the variable flag to differentiate the prime and non-prime numbers, and use the variable count to count the number of prime numbers in the range.
7. Print the prime numbers and the variable count separately as output.

Case 1:

Enter the value of num1 and num2
70 85
Prime numbers are
71
73
79
83
Number of primes between 70 and 85 = 4

Case 2:

Enter the value of num1 and num2
0 1
There are no primes upto 1

Sanfoundry Global Education & Learning Series – 1000 C Programs.

Here's the list of Best Reference Books in C Programming, Data-Structures and Algorithms
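As an aside on step 4: the inner loop above tests divisors up to i / 2, but trial division only needs to check up to the square root of i. A small sketch of that tighter bound (the is_prime helper is mine, not part of the article's program):

```c
/* Returns 1 if n is prime, 0 otherwise. Checking divisors up to sqrt(n)
 * is enough: any factor larger than sqrt(n) pairs with one below it. */
int is_prime(int n)
{
    if (n < 2)
        return 0;
    if (n % 2 == 0)
        return n == 2;              /* 2 is the only even prime */
    for (int j = 3; (long long)j * j <= n; j += 2)
        if (n % j == 0)
            return 0;
    return 1;
}
```

For the article's first test case this reports 71, 73, 79 and 83 as the primes between 70 and 85, matching the output shown above.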
http://www.sanfoundry.com/c-program-prime-numbers-given-range/
Beans can use the standard event types defined in the java.awt.event and javax.swing.event packages, but they don't have to. Our YesNoPanel class defines its own event type, AnswerEvent. Defining a new event class is really quite simple; AnswerEvent is shown in Example 15-4.

package je3.beans;

/**
 * The YesNoPanel class fires an event of this type when the user clicks one
 * of its buttons. The id field specifies which button the user pressed.
 **/
public class AnswerEvent extends java.util.EventObject {
    public static final int YES = 0, NO = 1, CANCEL = 2; // Button constants
    protected int id;                                    // Which button was pressed?

    public AnswerEvent(Object source, int id) {
        super(source);
        this.id = id;
    }

    public int getID() { return id; } // Return the button
}

Along with the AnswerEvent class, YesNoPanel also defines a new type of event listener interface, AnswerListener, that defines the methods that must be implemented by any object that wants to receive notification from a YesNoPanel. The definition of AnswerListener is shown in Example 15-5.

package je3.beans;

/**
 * Classes that want to be notified when the user clicks a button in a
 * YesNoPanel should implement this interface. The method invoked depends
 * on which button the user clicked.
 **/
public interface AnswerListener extends java.util.EventListener {
    public void yes(AnswerEvent e);
    public void no(AnswerEvent e);
    public void cancel(AnswerEvent e);
}
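The book's YesNoPanel itself (the event source that keeps the listener list and invokes yes/no/cancel) is not part of this excerpt. The following is a simplified, self-contained sketch of that firing pattern; the class and method names here (AnswerSource, fireAnswer) are my own, and the dispatch logic is an assumption about how such a source typically works, with the je3.beans types inlined so the sketch compiles on its own:

```java
import java.util.ArrayList;
import java.util.EventObject;
import java.util.List;

// Simplified stand-ins for the book's je3.beans classes.
class AnswerSource {
    public static final int YES = 0, NO = 1, CANCEL = 2;

    public static class AnswerEvent extends EventObject {
        private final int id;
        public AnswerEvent(Object source, int id) { super(source); this.id = id; }
        public int getID() { return id; }
    }

    public interface AnswerListener extends java.util.EventListener {
        void yes(AnswerEvent e);
        void no(AnswerEvent e);
        void cancel(AnswerEvent e);
    }

    private final List<AnswerListener> listeners = new ArrayList<>();

    public void addAnswerListener(AnswerListener l) { listeners.add(l); }
    public void removeAnswerListener(AnswerListener l) { listeners.remove(l); }

    // Called when a button is clicked: build one event and dispatch it to
    // the listener method matching the button id.
    public void fireAnswer(int id) {
        AnswerEvent e = new AnswerEvent(this, id);
        for (AnswerListener l : new ArrayList<>(listeners)) { // copy: listeners may unregister
            switch (id) {
                case YES:    l.yes(e);    break;
                case NO:     l.no(e);     break;
                case CANCEL: l.cancel(e); break;
            }
        }
    }
}
```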
http://books.gigatux.nl/mirror/javaexamples/0596006209_jenut3-chp-15-sect-4.html
[The opening of this post is garbled in this copy. It described tagging Swing components with a CSS-like style class, e.g. a "productCode" style for a product-code text field, via the client property org.galbraiths.clarity.styleClass, which JFormDesigner's property palette then lets you set.]

I haven't used JFormDesigner a lot, but from what I have seen so far it's really a great tool, and I'm glad to see its author is bringing innovations to it. Anyway, I'm looking forward to attending your session. And I promise I'll heckle (I just need to find a reason now).

Posted by: gfx on April 13, 2006 at 11:01 AM

Nice! I've been thinking of some really cool stuff for Dolphin that will use client properties, so I'm glad to see there is IDE support. We need to have a big list of the commonly available client properties somewhere.

Posted by: joshy on April 13, 2006 at 03:12 PM

gfx: I'll be sure to leave some time for Q&A this year ;-)

joshy: Sounds like a great idea. Would just be a 30-minute project, too. Maybe someone with some free time reading the comments can do it? Post a blog entry? Please? ;-)

Posted by: javaben on April 13, 2006 at 03:27 PM

If I use JFormDesigner, how much am I locked into it? I looked at it, but was afraid of being tied to it.

Posted by: coxcu on April 14, 2006 at 05:55 AM

Very interesting... Is this external CSS mechanism you describe something that is available with Swing, or would it be custom code that I would need to write?

Posted by: robjkc on April 14, 2006 at 07:43 AM

>> (I actually apply the styles via a decoration mechanism that applies to all component hierarchies before they are displayed, but that's another story.)

Actually I'm more interested in how you do this. I also use client properties and set them in a way that's visitor-patternish. It works OK, but I can think of a few ways it could be done. I've never seen anyone else's code that implemented custom client properties. I came to it on my own when I hit a multiple-inheritance brick wall.
Ray

Posted by: raytucson on April 14, 2006 at 08:22 AM

coxcu: Unfortunately, there's no standard for a GUI builder file format that allows you to transfer your GUI-designed screen from builder to builder. That would be the ideal solution to prevent lock-in, but would also mean all kinds of little problems like builder A supports X, Y, and Z but builder B does not, etc.

Having said that, I'm not sure I understand the problem. I have an abstraction layer in my application that loads the product of any GUI builder and presents it to my application as a Swing component hierarchy. Other than four lines of code in one place that loads the screen definition, there is no other JFormDesigner-specific code in my system. As far as the artifacts that JFormDesigner generates, it's either Java source code or a JavaBeans XML serialization file, both of which are fairly standard. So, lock-in isn't a real issue for me using JFormDesigner or any other GUI builder -- at the end of the day, they all have to generate a Swing component hierarchy.

robjkc: Yeah, it's definitely not included in Swing at the moment. At my JavaOne session this year, I'll talk more about the mechanism I use and give attendees source code they can use to add CSS-like decoration to their own apps.

raytucson: My framework is a container-based form framework. So, each screen you display is a form, and a container manages the initialization of that form. As part of the initialization, the container will decorate the components in the form. This decoration covers a lot of extensions I add to general Swing mechanisms. For example, I wrap all of the TableCellRenderers in every JTable to provide support for changing column alignment in a generic way (among many other features). In this case, I apply styles to components.

One problem I haven't fixed with this approach is what you do for dynamically created components. At the moment, you have to manually ask the decorator to decorate such manually created components. But that's not a terribly common case for the particular class of application I designed the framework to handle, and there may be some ways to automate the discovery and decoration of dynamically created components in this case -- I haven't looked into it.

To put it all in more specific terms, here's some pseudo code that demonstrates how it works:

JPanel panel = ...
FormContainer container = ClarityManager.installFormContainer(panel);
container.displayForm(new MyForm());

The code above would cause a FormContainer to display (and initialize, and as part of the initialization, decorate) the MyForm instance. MyForm looks a bit like:

public class MyForm extends Form {
    public void createAndLayoutComponents(Container container) {
        useRuntimeForm("MyForm");
    }
    public void attachListenersToComponents() {
        bindAction("componentName", someAction);
    }
}

You can do whatever you like in createAndLayoutComponents to initialize the Form's component hierarchy. The method useRuntimeForm(String) will look for a GUI builder artifact named "MyForm" in a special place in the application and load it. "MyForm" can either be a .jfd file (JFormDesigner's format, which is just the JavaBeans XML serialization output format) or some other GUI builder's -- the abstraction layer takes care of all that.

So, those are a few more details on how things work in my world.

Posted by: javaben on April 14, 2006 at 09:11 AM

Very nice. Especially having the ability to see the effect of the style in the form editor. I wonder how you deal with Look and Feel differences? I guess you must be using a different stylesheet for each L&F. The fontSize and font-family values in your JTextField.productCode selector obviously aren't right for Mac OS X.

Posted by: wrandelshofer on April 15, 2006 at 01:12 AM

wrandelshofer: I typically use a cross-platform look-and-feel precisely so I don't have to worry about managing different styles for different platforms, etc. But if you did want to do such a thing, you could do some fancy conditional stuff, like:

AquaLookAndFeel.small {
    font-size: 10pt;
    font-family: Helvetica;
}

WindowsLookAndFeel.small {
    font-size: 9pt;
    font-family: Arial;
}

.productCode {
    font-size: small;
    font-family: small;
    border: 1px solid black;
    columns: 10;
}

Posted by: javaben on April 15, 2006 at 01:40 AM

Thanks for a very interesting article. By the way, I really enjoyed reading all of your posts. It's interesting to read ideas and observations from someone else's point of view... it makes you think more. So please keep up the great work. Greetings. Firefox download, Thai Boxing, piano moving, moving company

Posted by: winbill on December 19, 2007 at 08:26:57 PM
http://weblogs.java.net/blog/javaben/archive/2006/04/finally_client.html
Subject: Re: [boost] [type_traits] Rewrite and dependency free version
From: Niall Douglas (s_sourceforge_at_[hidden])
Date: 2015-02-03 19:19:25

On 3 Feb 2015 at 23:11, Stephen Kelly wrote:

> The point is: either Boost is prepared for declaring 'this group of
> headers depends on that group' (and then taking advantage of the things
> that follow that declaration), or it is not.

I really wish people would stop thinking in terms of headers. I know you don't, but for the others: headers == source code. Much, much better is to think in terms of namespaces. What _namespaces_ depend on which others ... if you map that, we're getting somewhere. (libclang makes that easy, BTW.)

> I tried to raise it as an issue anyway. Maybe in a year something will
> change. It took a long time for anyone in this community to take any
> notice of the concept of modularity at all, but now there seem to be a
> few people who get it. That took many many months though...

Why is that? There has already been very significant progress in persuading people to move psychologically on this. I'd even say 60% of the ground has been covered. I'd suggest keep going.

Niall

--
ned Productions Limited Consulting

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2015/02/219883.php
Wren/all,

Please remember SPJ's request on the Records wiki to stick to the namespace issue. We're trying to make something better than H98's name clash. We are not trying to build some ideal polymorphic record system.

To take the field labelled "name": in H98 you have to declare each record in a different module, import every module into your application, and always refer to "name" prefixed by the module. DORF doesn't stop you doing any of that. So if you think of each "name" as having a different meaning, carry on using multiple modules and module prefixes. That's as easy (or difficult) as under H98.

You can declare fieldLabel "name" in one module, import it unqualified into another, and declare more records with a "name" label -- contrary to what somebody was claiming. Or you can import fieldLabel "name" qualified, and use it as a selector function on all record types declared using it. It's just a function like any other imported/qualified function, for crying out loud!

So if there's 'your' "name" label and 'my' "name", then use the module/qualification system as you would for any other scoped name. Then trying to apply My.name to Your.record will get an instance failure, as usual.

(And by the way, there are no "DORFistas"; let's avoid personalising this. There are people who don't seem to understand DORF -- both those criticising and those supporting.)

AntC

----- Original Message Follows -----
> On 2/25/12 10:18 AM, Gábor Lehel wrote:
> >!
> >
> > _______________________________________________
> Glasgow-haskell-users mailing list
> Glasgow-haskell-users at haskell.org
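The H98-style disambiguation described above can be sketched as follows (module and record names are invented for illustration; each module would live in its own file, and this shows plain H98 qualified imports, not DORF's fieldLabel mechanism itself):

```haskell
-- File My.hs
module My where
data Person = Person { name :: String }

-- File Your.hs
module Your where
data Widget = Widget { name :: String }

-- File Main.hs
module Main where
import qualified My
import qualified Your

main :: IO ()
main = do
  putStrLn (My.name (My.Person "Alice"))    -- My.name selects only from My.Person
  putStrLn (Your.name (Your.Widget "knob")) -- Your.name selects only from Your.Widget
```

Applying My.name to a Your.Widget is rejected by the type checker; under DORF, a shared fieldLabel imported qualified would instead fail with a missing instance, as the post says.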
http://www.haskell.org/pipermail/glasgow-haskell-users/2012-February/021974.html
As you may know, we also release technical blogs about Oracle Solaris on the Observatory blog, and on this blog we recently wrote a series of blogs on the DevOps Hands on Lab we did at Oracle OpenWorld this year. One of the requests I got after these blogs was for a higher-level overview blog of this DevOps Hands on Lab. So here it is.

In general, this Hands on Lab was created to show how you can set up a typical DevOps toolchain on (a set of) Oracle Solaris systems. In practice the toolchain will be more complex and probably a combination of different types of systems and Operating Systems. In this example we've chosen to do everything on Oracle Solaris, in part to show that this is possible (as this is sometimes not realized), and to show how you can leverage Oracle Solaris features like Solaris Zones and SMF to your advantage when doing so. In this particular case we also used Oracle WebLogic Server, to show how this is installed on Oracle Solaris and how to install and use the Oracle Maven plugin for WebLogic Server, as this is also a very handy tool and works pretty much the same on all platforms.

The DevOps toolchain we want to create essentially looks like this:

[Toolchain diagram not included in this copy.]

Note: The whole setup here is inside a single VirtualBox instance with Solaris Zones inside it, because in the Hands on Lab every participant gets a separate laptop and this way we can simulate the full setup within a single system.

The general flow goes from left to right with the following main steps:

[The numbered steps from the original post are missing from this copy.]

At this point the new version of the application is up and running, which means it can for example be connected to other systems in a larger test of all the different application pieces, or just used for individual function testing.
The intent of this whole chain is to allow a developer to push their code into the repository and have everything after that happen automatically, so they don't have to do anything by hand or submit tickets to have someone else clear space on a system, set it up, and install all the bits necessary. This is all predefined and rolled out automatically, saving a lot of time and engineering cycles on the developer side as well as on the operations side. The other benefit is that the developers can very easily try out different versions in quick succession, iterating to a better solution much faster than if things need to be set up by hand each time.

The reality of course is slightly more complex than this simple case. This workflow would be used maybe for initial testing, and once it has successfully run, this new version can be promoted to a different repository where it will be built (possibly by a different Jenkins server) and then deployed to, for example, systems for stress testing, or pre-production systems for testing in a production-like setting. And then finally, once it's deemed ready, it can be promoted and deployed to the production systems. And all of this can be made easier with this type of automation.

So what about DevOps and Oracle Solaris? Well, there are many aspects to this. First, as you can see in this Hands on Lab example, all these tools run just like they would on other platforms. This is in part because they are designed to do so and also in part because the languages they're based on also run on Oracle Solaris. We actually ship most of them (Java, Python, Ruby, Perl, …) with Oracle Solaris, and adding them is a simple package install. In general we ship a lot of FOSS tools with Oracle Solaris, as well as work with FOSS communities to make and keep them working on Oracle Solaris. So besides the languages, we include source code management systems like Mercurial, Subversion, Git, and others. We ship JUnit, all the GNU tools, Puppet, and many other tools and packages.
Maven and Jenkins, mentioned in the example above, we don't ship, because these constantly update themselves anyway and folks tend to be picky about which version they want to use.

We actually use many of these tools in our own development of new Solaris versions. For example, we use Mercurial and Git, as you may expect, for all the different types of code we either write ourselves or bring in from the FOSS communities. But we also heavily use a tool like Jenkins to automate our build and test system. We rely in the same way on these tools to make development simpler, with fewer mistakes, and quicker.

Finally, a short word on using the tools in Oracle Solaris for a DevOps environment. In the Lab we only use Solaris Zones and SMF: the first to create isolated OS containers that can function as secure, isolated environments with their own resources and namespaces, and the second to turn Jenkins into a Solaris service that you can enable and disable, and that will automatically restart if it for some reason fails or is accidentally stopped. But there are many others you can use too. For example, you can use Boot Environments to easily move between different configuration or patch versions. You can use the built-in Network Virtualization to create separate networks, each isolated from the others, that can even span different systems. You can use Unified Archives to create an archive of a system, domain, or zone in a certain state, which you can easily roll out multiple times across one or more systems/domains/zones to fit growing demand. And there are many others.

As stated above, the in-depth technical blogs can be found on the Observatory blog, and they are spread over Part 1, Part 2, Part 3, and Part 4.

Note that even though the workflow goes from left to right, the blogs build the toolchain from right to left, as it's easier to verify each step is working that way.
https://blogs.oracle.com/solaris/devops-on-oracle-solaris-like-a-pro
$ cnpm install nightmare

Nightmare is a high-level browser automation library from Segment. The goal is to expose a few simple methods that mimic user actions (like goto, type and click), with an API that feels synchronous for each block of scripting, rather than deeply nested callbacks. Under the covers it uses Electron, which is similar to PhantomJS but roughly twice as fast and more modern.

⚠️ Security Warning: We've implemented many of the security recommendations outlined by Electron to try and keep you safe, but undiscovered vulnerabilities may exist in Electron that could allow a malicious website to execute code on your computer. Avoid visiting untrusted websites.

Migrating to 3.x: You'll want to check out this issue before upgrading. We've worked hard to make improvements to nightmare while limiting the breaking changes, and there's a good chance you won't need to do anything.

Many thanks to @matthewmueller and @rosshinkley for their help on Nightmare.

Let's search on DuckDuckGo:

const Nightmare = require('nightmare')
const nightmare = Nightmare({ show: true })

nightmare
  .goto('https://duckduckgo.com')
  .type('#search_form_input_homepage', 'github nightmare')
  .click('#search_button_homepage')
  .wait('#links .result__a')
  .evaluate(() => document.querySelector('#links .result__a').href)
  .end()
  .then(console.log)
  .catch(error => {
    console.error('Search failed:', error)
  })

You can run this with:

npm install --save nightmare
node example.js

Or, let's run some mocha tests:

const Nightmare = require('nightmare')
const chai = require('chai')
const expect = chai.expect

describe('test duckduckgo search results', () => {
  it('should find the nightmare github link first', function(done) {
    this.timeout('10s')
    const nightmare = Nightmare()
    nightmare
      .goto('https://duckduckgo.com')
      .type('#search_form_input_homepage', 'github nightmare')
      .click('#search_button_homepage')
      .wait('#links .result__a')
      .evaluate(() => document.querySelector('#links .result__a').href)
      .end()
      .then(link => {
        expect(link).to.equal('https://github.com/segmentio/nightmare')
        done()
      })
  })
})

You can see examples of every function in the tests here. To get started with UI Testing, check out this quick start guide.

To run nightmare's own test suite:

npm install
npm test

Nightmare is intended to be run on NodeJS 4.x or higher.

Creates a new instance that can navigate around the web. The available options are documented here, along with the following nightmare-specific options.

Throws an exception if the .wait() didn't return true within the set timeframe.
const nightmare = Nightmare({
  waitTimeout: 1000 // in ms
})

Throws an exception if the .goto() didn't finish loading within the set timeframe. Note that, even though goto normally waits for all the resources on a page to load, a timeout exception is only raised if the DOM itself has not yet loaded.

const nightmare = Nightmare({
  gotoTimeout: 1000 // in ms
})

Forces Nightmare to move on if a page transition caused by an action (eg, .click()) didn't finish within the set timeframe. If loadTimeout is shorter than gotoTimeout, the exceptions thrown by gotoTimeout will be suppressed.

const nightmare = Nightmare({
  loadTimeout: 1000 // in ms
})

The maximum amount of time to wait for an .evaluate() statement to complete.

const nightmare = Nightmare({
  executionTimeout: 1000 // in ms
})

The default system paths that Electron knows about. Here's a list of available paths: You can overwrite them in Nightmare by doing the following:

const nightmare = Nightmare({
  paths: {
    userData: '/user/data'
  }
})

The command line switches used by the Chrome browser that are also supported by Electron. Here's a list of supported Chrome command line switches:

const nightmare = Nightmare({
  switches: {
    'proxy-server': '1.2.3.4:5678',
    'ignore-certificate-errors': true
  }
})

The path to the prebuilt Electron binary. This is useful for testing on different versions of Electron. Note that Nightmare only supports the version on which this package depends. Use this option at your own risk.

const nightmare = Nightmare({
  electronPath: require('electron')
})

A boolean to optionally show the Electron icon in the dock (defaults to false). This is useful for testing purposes.

const nightmare = Nightmare({
  dock: true
})

Optionally shows the DevTools in the Electron window using true, or use an object hash containing mode: 'detach' to show in a separate window. The hash gets passed to contents.openDevTools() to be handled. This is also useful for testing purposes.
Note that this option is honored only if show is set to true.

const nightmare = Nightmare({
  openDevTools: {
    mode: 'detach'
  },
  show: true
})

How long to wait between keystrokes when using .type().

const nightmare = Nightmare({
  typeInterval: 20
})

How long to wait between checks for the .wait() condition to be successful.

const nightmare = Nightmare({
  pollInterval: 50 // in ms
})

Defines the number of times to retry an authentication when set up with .authenticate().

const nightmare = Nightmare({
  maxAuthRetries: 3
})

A string to determine the client certificate selected by Electron. If this option is set, the select-client-certificate event will be set to loop through the certificateList and find the first certificate that matches subjectName on the Electron Certificate object.

const nightmare = Nightmare({
  certificateSubjectName: 'tester'
})

Gets the versions for Electron and Chromium.

Sets the useragent used by Electron.

Sets the user and password for accessing a web page using basic authentication. Be sure to set it before calling .goto(url).

Completes any queued operations, then disconnects and closes the Electron process. Note that if you're using promises, .then() must be called after .end() to run the .end() task. Also note that if using an .end() callback, the .end() call is equivalent to calling .end() followed by .then(fn). Consider:

nightmare
  .goto(someUrl)
  .end(() => 'some value')
  // prints "some value"
  .then(console.log)

Clears all queued operations, kills the Electron process, and passes an error message or 'Nightmare Halted' to an unresolved promise. done will be called after the process has exited.

Loads the page at url. Optionally, a headers hash can be supplied to set headers on the goto request.

When a page load is successful, goto returns an object with metadata about the page load, including:

- url: The URL that was loaded
- code: The HTTP status code (e.g. 200, 404, 500)
- method: The HTTP method used (e.g. "GET", "POST")
- referrer: The page that the window was displaying prior to this load, or an empty string if this is the first page load.
- headers: An object representing the response headers for the request, as in {header1-name: header1-value, header2-name: header2-value}

If the page load fails, the error will be an object with the following properties:

- message: A string describing the type of error
- code: The underlying error code describing what went wrong. Note this is NOT the HTTP status code. For possible values, see
- details: A string with additional details about the error. This may be null or an empty string.
- url: The URL that failed to load

Note that any valid response from a server is considered "successful." That means things like 404 "not found" errors are successful results for goto. Only things that would cause no page to appear in the browser window, such as no server responding at the given address, the server hanging up in the middle of a response, or invalid URLs, are errors.

You can also adjust how long goto will wait before timing out by setting the gotoTimeout option on the Nightmare constructor.

Goes back to the previous page.

Goes forward to the next page.

Refreshes the current page.

Clicks the selector element once.

Mousedowns the selector element once.

Mouseups the selector element once.

Mouseovers the selector element once.

Mouseouts the selector element once.

Enters the text provided into the selector element. Empty or falsey values provided for text will clear the selector's value.

.type() mimics a user typing in a textbox and will emit the proper keyboard events. Key presses can also be fired using Unicode values with .type(). For example, if you wanted to fire an enter key press, you would write .type('body', '\u000d').

If you don't need the keyboard events, consider using .insert() instead, as it will be faster and more robust.

Similar to .type(), .insert() enters the text provided into the selector element.
Empty or falsey values provided for text will clear the selector's value. .insert() is faster than .type() but does not trigger the keyboard events.

Checks the selector checkbox element.

Unchecks the selector checkbox element.

Changes the selector dropdown element to the option with attribute [value=option].

Scrolls the page to the desired position. top and left are always relative to the top left corner of the document.

Sets the viewport size.

Injects a local file onto the current page. The file type must be either js or css.

Invokes fn on the page with arg1, arg2, .... All the args are optional. On completion it returns the return value of fn. Useful for extracting information from the page. Here's an example:

const selector = 'h1'
nightmare
  .evaluate(selector => {
    // now we're executing inside the browser scope.
    return document.querySelector(selector).innerText
  }, selector) // <-- that's how you pass parameters from Node scope to browser scope
  .then(text => {
    // ...
  })

Error-first callbacks are supported as a part of evaluate(). If the arguments passed are one fewer than the arguments expected for the evaluated function, the evaluation will be passed a callback as the last parameter to the function. For example:

const selector = 'h1'
nightmare
  .evaluate((selector, done) => {
    // now we're executing inside the browser scope.
    setTimeout(
      () => done(null, document.querySelector(selector).innerText),
      2000
    )
  }, selector)
  .then(text => {
    // ...
  })

Note that callbacks support only one value argument (eg function(err, value)). Ultimately, the callback will get wrapped in a native Promise and only be able to resolve a single value.

Promises are also supported as a part of evaluate(). If the return value of the function has a then member, .evaluate() assumes it is waiting for a promise.
For example:

const selector = 'h1'
nightmare
  .evaluate(
    selector =>
      new Promise((resolve, reject) => {
        setTimeout(() => resolve(document.querySelector(selector).innerText), 2000)
      }),
    selector
  )
  .then(text => {
    // ...
  })

.wait(ms)
Waits for ms milliseconds, e.g. .wait(5000).

.wait(selector)
Waits until the element selector is present, e.g. .wait('#pay-button').

.wait(fn[, arg1, arg2, ...])
Waits until the fn evaluated on the page with arg1, arg2, ... returns true. All the args are optional. See .evaluate() for usage.

.header(header, value)
Adds a header override for all HTTP requests. If header is undefined, the header overrides will be reset.

.exists(selector)
Returns whether the selector exists or not on the page.

.visible(selector)
Returns whether the selector is visible or not.

.on(event, callback)
Captures page events with the callback. You have to call .on() before calling .goto(). Supported events are documented here.

'page' events with type 'error'
This event is triggered if any javascript exception is thrown on the page. But this event is not triggered if the injected javascript code (e.g. via .evaluate()) is throwing an exception. Listens for window.addEventListener('error'), alert(...), prompt(...) & confirm(...). Listens for top-level page errors. This will get triggered when an error is thrown on the page.

'page' events with type 'alert'
Nightmare disables window.alert from popping up by default, but you can still listen for the contents of the alert dialog.

'page' events with type 'prompt'
Nightmare disables window.prompt from popping up by default, but you can still listen for the message to come up. If you need to handle the prompt differently, you'll need to use your own preload script.

'page' events with type 'confirm'
Nightmare disables window.confirm from popping up by default, but you can still listen for the message to come up. If you need to handle the confirmation differently, you'll need to use your own preload script.

'console' events
type will be either log, warn or error, and arguments are what gets passed from the console. This event is not triggered if the injected javascript code (e.g. via .evaluate()) is using console.log.

.once(event, callback)
Similar to .on(), but captures page events with the callback one time.
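The error-first callback behavior described above for .evaluate() can be pictured with a small plain-JavaScript sketch. This is an illustrative assumption about the wrapping, not Nightmare's actual source, and wrapCallback is an invented name:

```javascript
// Illustrative sketch: how an error-first callback with one value argument
// can be wrapped in a native Promise, which is what happens to the callback
// passed to .evaluate(). The extra trailing argument is the `done` callback.
function wrapCallback(fn, ...args) {
  return new Promise((resolve, reject) => {
    fn(...args, (err, value) => (err ? reject(err) : resolve(value)));
  });
}

// The single `value` argument becomes the resolved value of the promise.
wrapCallback((x, done) => done(null, x * 2), 21).then(v => console.log(v)); // 42
```

This also makes it clear why only one value argument is supported: a promise can only resolve to a single value.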
.removeListener(event, callback)
Removes a given listener callback for an event.

.screenshot([path][, clip])
Takes a screenshot of the current page. Useful for debugging. The output is always a png. Both arguments are optional. If path is provided, it saves the image to the disk. Otherwise it returns a Buffer of the image data. If clip is provided (as documented here), the image will be clipped to the rectangle.

.html(path, saveType)
Saves the current page as html to disk at the given path. Save type options are here.

.pdf(path, options)
Saves a PDF to the specified path. Options are here.

.title()
Returns the title of the current page.

.url()
Returns the url of the current page.

.path()
Returns the path name of the current page.

.cookies.get(name)
Gets a cookie by its name. The url will be the current url.

.cookies.get(query)
Queries multiple cookies with the query object. If a query.name is set, it will return the first cookie it finds with that name, otherwise it will query for an array of cookies. If no query.url is set, it will use the current url. Here's an example:

// get all google cookies that are secure
// and have the path `/query`
nightmare
  .goto('')
  .cookies.get({
    path: '/query',
    secure: true
  })
  .then(cookies => {
    // do something with the cookies
  })

Available properties are documented here:

.cookies.get()
Gets all the cookies for the current url. If you'd like to get all cookies for all urls, use: .get({ url: null }).

.cookies.set(name, value)
Sets a cookie's name and value. This is the most basic form, and the url will be the current url.

.cookies.set(cookie)
Sets a cookie. If cookie.url is not set, it will set the cookie on the current url. Here's an example:

nightmare
  .goto('')
  .cookies.set({
    name: 'token',
    value: 'some token',
    path: '/query',
    secure: true
  })
  // ... other actions ...
  .then(() => {
    // ...
  })

Available properties are documented here:

.cookies.set(cookies)
Sets multiple cookies at once. cookies is an array of cookie objects. Take a look at the .cookies.set(cookie) documentation above for a better idea of what cookie should look like.

.cookies.clear([name])
Clears a cookie for the current domain. If name is not specified, all cookies for the current domain will be cleared.
nightmare
  .goto('')
  .cookies.clear('SomeCookieName')
  // ... other actions ...
  .then(() => {
    // ...
  })

.cookies.clearAll()
Clears all cookies for all domains.

nightmare
  .goto('')
  .cookies.clearAll()
  // ... other actions ...
  .then(() => {
    //...
  })

Proxies are supported in Nightmare through switches. If your proxy requires authentication you also need the authentication call. The following example not only demonstrates how to use proxies, but you can run it to test if your proxy connection is working:

import Nightmare from 'nightmare';

const proxyNightmare = Nightmare({
  switches: {
    'proxy-server': 'my_proxy_server.example.com:8080' // set the proxy server here ...
  },
  show: true
});

proxyNightmare
  .authentication('proxyUsername', 'proxyPassword') // ... and authenticate here before `goto`
  .goto('')
  .evaluate(() => {
    return document.querySelector('b').innerText.replace(/[^\d\.]/g, '');
  })
  .end()
  .then((ip) => {
    // This will log the Proxy's IP
    console.log('proxy IP:', ip);
  });

// The rest is just normal Nightmare to get your local IP
const regularNightmare = Nightmare({ show: true });

regularNightmare
  .goto('')
  .evaluate(() =>
    document.querySelector('b').innerText.replace(/[^\d\.]/g, '')
  )
  .end()
  .then((ip) => {
    // This will log your local IP
    console.log('local IP:', ip);
  });

By default, Nightmare uses native ES6 promises. You can plug in your favorite ES6-style promises library like bluebird or q for convenience!
Here's an example:

var Nightmare = require('nightmare')
Nightmare.Promise = require('bluebird')
// OR:
Nightmare.Promise = require('q').Promise

You can also specify a custom Promise library per-instance with the Promise constructor option like so:

var Nightmare = require('nightmare')
var es6Nightmare = Nightmare()
var bluebirdNightmare = Nightmare({
  Promise: require('bluebird')
})

var es6Promise = es6Nightmare.goto('').then()
var bluebirdPromise = bluebirdNightmare.goto('').then()

es6Promise.isFulfilled()      // throws: `TypeError: es6Promise.isFulfilled is not a function`
bluebirdPromise.isFulfilled() // returns: `true | false`

You can add your own custom actions to the Nightmare prototype. Here's an example:

Nightmare.action('size', function(done) {
  this.evaluate_now(() => {
    const w = Math.max(
      document.documentElement.clientWidth,
      window.innerWidth || 0
    )
    const h = Math.max(
      document.documentElement.clientHeight,
      window.innerHeight || 0
    )
    return { height: h, width: w }
  }, done)
})

Nightmare()
  .goto('')
  .size()
  .then(size => {
    //... do something with the size information
  })

Remember, this is attached to the static class Nightmare, not the instance. You'll notice we used an internal function evaluate_now. This function is different from nightmare.evaluate because it runs immediately, whereas nightmare.evaluate is queued. An easy way to remember: when in doubt, use evaluate. If you're creating custom actions, use evaluate_now. The technical reason is that since our action has already been queued and we're running it now, we shouldn't re-queue the evaluate function. We can also create custom namespaces. We do this internally for nightmare.cookies.get and nightmare.cookies.set. These are useful if you have a bundle of actions you want to expose that would otherwise clutter up the main nightmare object.
Here's an example of that:

Nightmare.action('style', {
  background(done) {
    this.evaluate_now(
      () => window.getComputedStyle(document.body, null).backgroundColor,
      done
    )
  }
})

Nightmare()
  .goto('')
  .style.background()
  .then(background => {
    // ... do something interesting with background
  })

You can also add custom Electron actions. The additional Electron action or namespace actions take name, options, parent, win, renderer, and done. Note the Electron action comes first, mirroring how .evaluate() works. For example:

Nightmare.action(
  'clearCache',
  (name, options, parent, win, renderer, done) => {
    parent.respondTo('clearCache', done => {
      win.webContents.session.clearCache(done)
    })
    done()
  },
  function(done) {
    this.child.call('clearCache', done)
  }
)

Nightmare()
  .clearCache()
  .goto('')
  //... more actions ...
  .then(() => {
    // ...
  })

...would clear the browser's cache before navigating to example.org. See this document for more details on creating custom actions.

nightmare.use is useful for reusing a set of tasks on an instance. Check out nightmare-swiftly for some examples.

If you need to do something custom when you first load the window environment, you can specify a custom preload script. Here's how you do that:

import path from 'path'

const nightmare = Nightmare({
  webPreferences: {
    preload: path.resolve('custom-script.js')
    //alternative: preload: "absolute/path/to/custom-script.js"
  }
})

The only requirement for that script is that you'll need the following prelude:

window.__nightmare = {}
__nightmare.ipc = require('electron').ipcRenderer

To benefit from all of nightmare's feedback from the browser, you can instead copy the contents of nightmare's preload script.

By default nightmare will create an in-memory partition for each instance. This means that any localStorage or cookies or any other form of persistent state will be destroyed when nightmare is ended. If you would like to persist state between instances you can use the webPreferences.partition api in electron.
import Nightmare from 'nightmare';

nightmare = Nightmare(); // non-persistent partition by default
yield nightmare
  .evaluate(() => {
    window.localStorage.setItem('testing', 'This will not be persisted');
  })
  .end();

nightmare = Nightmare({
  webPreferences: {
    partition: 'persist: testing'
  }
});
yield nightmare
  .evaluate(() => {
    window.localStorage.setItem('testing', 'This is persisted for other instances with the same partition name');
  })
  .end();

If you specify a null partition then electron's default behavior (persistent) is used; any string that starts with 'persist:' will persist under that partition name; any other string results in in-memory-only storage.

Nightmare is a Node.js module, so you'll need to have Node.js installed. Then you just need to npm install the module:

$ npm install --save nightmare

Nightmare is a node module that can be used in a Node.js script or module. Here's a simple script to open a web page:

import Nightmare from 'nightmare';

const nightmare = Nightmare();

nightmare.goto('')
  .evaluate(() => {
    return document.title;
  })
  .end()
  .then((title) => {
    console.log(title);
  })

If you save this as cnn.js, you can run it on the command line like this:

npm install --save nightmare
node cnn.js

Nightmare heavily relies on Electron for the heavy lifting. And Electron in turn relies on several UI-focused dependencies (e.g. libgtk+) which are often missing from server distros. For help running nightmare on your server distro, check out the How to run nightmare on Amazon Linux and CentOS guide.

There are a few good ways to get more information about what's happening inside the headless browser: use the DEBUG=* flag described below, or pass { show: true } to the nightmare constructor to have it create a visible, rendered window where you can watch what is happening.

To run the same file with debugging output, run it like this: DEBUG=nightmare node cnn.js (on Windows use set DEBUG=nightmare & node cnn.js).
This will print out some additional information about what's going on:

nightmare queueing action "goto" +0ms
nightmare queueing action "evaluate" +4ms
Breaking News, U.S., World, Weather, Entertainment & Video News - CNN.com

All nightmare messages: DEBUG=nightmare*
Only actions: DEBUG=nightmare:actions*
Only logs: DEBUG=nightmare:log*

Ross Hinkley's Nightmare Examples is a great resource for setting up nightmare, learning about custom actions, and avoiding common pitfalls.

Nightmare Issues has a bunch of standalone runnable examples. The script numbers correspond to nightmare issue numbers.

Nightmarishly good scraping is a great tutorial by Ændrew Rininsland on getting up & running with Nightmare using real-life data.

Automated tests for nightmare itself are run using Mocha and Chai, both of which will be installed via npm install. To run nightmare's tests, just run make test. When the tests are done, you'll see something like this:

make test
․․․․․․․․․․․․․․․․․․
18 passing (1m)

Note that if you are using xvfb, make test will automatically run the tests under an xvfb-run wrapper. If you are planning to run the tests headlessly without running xvfb first, set the HEADLESS environment variable to 0.
https://npm.taobao.org/package/nightmare
Hi, they do have an API ( ) but I couldn't find an updated python library for it (py-stackexchange supports the 1.1 API version whereas the latest is 2.1). Nonetheless, their API returns JSON data, hence you can very easily parse it with python (or any other language supporting JSON). To see the kind of JSON returned (or tune the query) you can visit This is a very simple example of how to put the pieces together in python (far from being usable in production, but already useful to avoid manual searches): [elisiano@pc-elisiano ~/Projects ]$ cat couchdb_stack_overflow.py #!/usr/bin/env python2 import urllib2 from StringIO import StringIO import gzip import json url="""""" req=urllib2.Request(url) req.add_header('Accept-Encoding', 'gzip') # should be the default res=urllib2.urlopen(req) buf=StringIO(res.read()) f=gzip.GzipFile(fileobj=buf) data=json.loads(f.read()) ### print only unanswered questions for question in data['items']: if not question['is_answered']: print "%s => %s" % (question['title'], question['link']) [elisiano@pc-elisiano ~/Projects ]$ ./couchdb_stack_overflow.py couchdb map/reduce view: counting only the most recent items => couchDB sorting complex key => How to specify individual database location in couchdb? => Query Ad-Hoc (Temporary) Views with ektorp => Google closure on CouchDB => couchdb conflict identical document => What database(s) for storing user data, and also support targeting queries? => CouchDB Security in a Lightweight Stack? => CouchDB: synchronize between slave databases => CouchDB "virtual" database, that combines 2 databases into 1 => CouchDB didn't start in Windows XP? Anybody has same experince? => How to retrieve the couchDB data by given limit(start_limit,end_limit) using cradle in node.js? => On Sat, 2013-03-09 at 11:08 -0800, Mark Hahn wrote: > Is this automated? If not then it should be. I assume stackoverflow has > an api. 
> > > On Sat, Mar 9, 2013 at 6:17 AM, Noah Slater <nslater@apache.org> wrote: > > > Dear community, > > > > Here are the latest StackOverflow questions about CouchDB. These might be a > > good opportunity to earn some StackOverflow reputation and help out the > > wider CouchDB community at the same time! > > > > CouchDB Security in a Lightweight Stack? > > > > > > > > CouchDB didn't start in Windows XP? Anybody has same experince? [closed] > > > > > > > > Thanks, > > > > -- > > NS > >
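For anyone on Python 3, the same filtering logic can be sketched like this (hedged: the sample payload below is invented for illustration; the real items come from the StackExchange API response, which urllib2/StringIO-era code above fetched over the network):

```python
import json

# Parse a response shaped like the StackExchange API's JSON. The sample
# data here is made up; a real script would fetch it with urllib.request.
sample = json.loads("""{"items": [
  {"title": "Q1", "link": "http://example.com/q/1", "is_answered": false},
  {"title": "Q2", "link": "http://example.com/q/2", "is_answered": true}
]}""")

# Print only the unanswered questions, as in the Python 2 script above.
unanswered = [q for q in sample["items"] if not q["is_answered"]]
for q in unanswered:
    print("%s => %s" % (q["title"], q["link"]))
```

Note that the API gzip-compresses responses; in Python 3, urllib.request plus the gzip module (or a library that handles it transparently) replaces the urllib2/StringIO pair used above.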
http://mail-archives.apache.org/mod_mbox/couchdb-user/201303.mbox/%3C1362996146.10903.29.camel@localhost.localdomain%3E
Scanner has a method .hasNext...() for every .next...(), so you can not only check for EOF but also for changes in the input (from numbers to strings, for example). But as far as I know, the online judge doesn't like Scanner, so you can't use it.

Search found 23 matches
- Sat Oct 13, 2007 7:01 am - Forum: Java - Topic: Some things you have to know about Java Support. - Replies: 2 - Views: 3619
- Sat Oct 13, 2007 6:57 am - Forum: C++ - Topic: Always WA, even with correct output... (what am I missing?) - Replies: 6 - Views: 3663
More importantly, no, there is no "presentation error", so every problem outputting the correct solutions will get a WA, which may be confusing. Always check that there is exactly 1 newline character after the end of output. Another easily-overlooked mistake is to print an empty character after printi...
- Sat Oct 13, 2007 6:22 am - Forum: Volume 103 (10300-10399) - Topic: 10364 - Square - Replies: 47 - Views: 15100
I'm also trying to use backtracking but I got TLE. This is my reasoning: Check if you have already found a solution. If not, check if you have already used all elements of the vector. If not, cycle over the elements that haven't been used; check if the element plus an accumulated total is exactly the le...
- Sun Aug 26, 2007 2:39 am - Forum: Other words - Topic: Outside Problem - Ball Bearings - Replies: 0 - Views: 1663
Outside Problem - Ball Bearings
I was looking at Ball Bearings () I tried the obvious: calculate the inner circumference and divide by the number of balls plus the space required: int howMany(double D, double d, double s) { return (int)( ( 3.141592653589793*(D-d) )/(d+s) ); } Which ...
- Mon Jul 23, 2007 11:11 pm - Forum: Volume 112 (11200-11299) - Topic: 11215 - How Many Numbers? - Replies: 11 - Views: 4989
rujialiu wrote: I'm not sure I understand your algorithm correctly. Will it try something like (1+1)*(1+1)?
Yeah it will. And so it will test:
1+(1*1)+1 and (1+1)+1 and 1+(1+1) and all possible combinations; it's just a backtracking of all the possible cases, but it seems horribly slow!
- Sat Jul 21, 2007 5:26 pm - Forum: Java - Topic: Problem submitting Java - Replies: 1 - Views: 2905
Problem submitting Java
I Ja...
- Sat Jul 21, 2007 5:06 pm - Forum: Volume 112 (11200-11299) - Topic: 11215 - How Many Numbers? - Replies: 11 - Views: 4989
I've been thinking about this problem and the algorithm I'm thinking of is straightforward but it seems too slow: Create an array "ops" with the four operations: +, -, *, /. Read an array "numbers" and store the n numbers of input. Create a function to create all possible sets of size n-1 where each element b...
- Mon Oct 16, 2006 2:30 am - Forum: Volume 111 (11100-11199) - Topic: 11110 - Equidivisions - Replies: 33 - Views: 19211
Could someone provide tricky test cases? I checked all the possible flaws discussed here but my code doesn't fall on those. However I still get WA! Thanks!
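Coming back to the Scanner tip at the top of these search results, here is a hedged sketch (the class and method names are invented) of the hasNext.../next... pairing used to read input until EOF without any sentinel value:

```java
import java.util.Scanner;

// Every next...() has a matching hasNext...(), so a read-until-EOF loop
// needs no sentinel: hasNextInt() returns false at EOF (and also when the
// next token is not an integer, which is how you can detect format changes).
public class SumUntilEof {
    static long sumInts(Scanner sc) {
        long sum = 0;
        while (sc.hasNextInt()) {
            sum += sc.nextInt();
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumInts(new Scanner(System.in)));
    }
}
```

A Scanner can also wrap a plain String, which makes this pattern easy to test without touching System.in.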
https://onlinejudge.org/board/search.php?st=0&sk=t&sd=d&sr=posts&author_id=15036&start=15
import "ronoaldo.gopkg.net/aetools/vmproxy" Package vmproxy provides tools to proxy App Engine requests to on-demand Compute Engine instances. Google App Engine is a PaaS cloud infrastructure that scales automatically, and is very cost-effective. One nice features of App Engine is the ability to scale apps to 0 instances. This is a perfect fit for low-traffic websites, or to run sporadic background tasks, so you only pay for the time you are serving requests. However, App Engine runs your apps on a sandboxed environment. This limits what you can do with your application instances, to a confined subset of supported languages and features. To remove this limitation, you have to either move to Compute Engine virtual machines (IaaS) or use a Docker Container cluster (Google Container Engine) to host your applications. Both are ideal to improve your DevOps experiences and you can pick the best fit for you use case. There is a new option available, that boils down to running Docker containers and VMs, but leaveraging most other App Engine features, called Managed VMs. The problem with the previous alternatives is that you can't scale to zero. You need at least one VM aways on. For some use cases, this is a deal breaker. This package attempts to solve this by allowing you to easily launch VMs on-demand, and proxy requests from App Engine to yor VM. The requests handled by a vmproxy.VM, are routed to a configured Compute Engine instance. If the instance is not up, a new instance is created. You must specify the instance name, so we don't create multiple instances. The thadeoff you do by using this package is that the very first request will launch a new virtual machine, and this may take several seconds depending on your VM initialization. It is not the scope of this tool to provide any scalability features, such as load-balacing multiple VMs. This is a simple proxy, that routes requests to VMs, bringing them up on demmand. 
It is intended to serve very small, backend, and non-user-facing traffic, as loading requests here take several tens of seconds. ATTENTION! The default behavior of the vmproxy.VM is to launch *PREEMPTIBLE* VMs, and you must explicitly disable this with the NotPreemptible flag set to `true`. Compute Engine instances are terminated by the App Engine instance /_ah/stop handler (must be mapped by the user), or by Compute Engine when it preempts your instance. This package is designed to handle requests as a backend module, configured with Basic Scaling [1]. Here is a basic usage of this package: startupScript = ` apt-get update && apt-get upgrade --yes; apt-get install nginx --yes` nginx = &vmproxy.VM{ Path: "/", Instance: vmproxy.Instance{ Name: "backend", Zone: "us-central1-a", MachineType: "f1-micro", StartupScript: startupScript, // NotPreemptible: true // Uncomment to use non-preemptible VMs. }, } http.Handle("/", nginx) References [1] [2] compute.go doc.go vmproxy.go const ( // DefaultImageName, currently points to Debian Jessie. // TODO(ronoaldo): discover latest debian-8 VM name when launching. DefaultImageName = "debian-8-jessie-v20150818" // DefaultMachineType used to launch an instance. DefaultMachineType = "n1-standard-1" // ResourcePrefix is the prefix URL to build resource URIs, // such as image, disks and instance URIs. ResourcePrefix = "" ) type Instance struct { // Name is the VM unique Name. // Mandatory, and must be unique to the project. Name string // Compute Engine Zone, where the VM will launch. // Mandatory. Zone string // Image to use to boot the instance. // Defaults to debian-8-backports if empty. Image string // Machine type to use. Defaults to n1-standard-1. MachineType string // Optional instance tags. Defaults to http-server. // Use this to setup firewall rules. Tags []string // Metadata to add to the instance description. Metadata map[string]string // Optional startup script URL to be added to the VM. 
StartupScript string StartupScriptURL string // BootDiskSize in base-2 GB BootDiskSize int64 // Marks the instance as a preemptible VM. NotPreemptible bool // Scopes to be used when creating the instance. // No scopes by default. Scopes []string } Instance represents basic information about a single Compute Engine VM. type VM struct { // VM instance configuration. Instance Instance // Path to forward requests to. Mandatory. Path string // Path used to check if the VM is ready to serve traffic. // Defaults to Path. HealthPath string // Port to forward requests to. Defaults to 80 if 0. Port int // contains filtered or unexported fields } VM manages and proxies requests from App Engine to the configured Compute Engine VM. Delete puts the instance in the TERMINATED state and removes it. All attached disks marked for deletion are also removed. IsRunning returns true if the instance is running. PublicIP returns the current instance IP. The value is cached in-memory, so it may return stale results. ServeHTTP handles the HTTP request by forwarding it to the target VM. If the VM is not up, it will be launched. Start launches a new Compute Engine VM and waits until the health path is ready. References: Stop puts the instance in the TERMINATED state, but does not delete it. Package vmproxy imports 18 packages. Updated 2017-09-26.
https://godoc.org/ronoaldo.gopkg.net/aetools/vmproxy
CodePlex - Project Hosting for Open Source Software

Hi, I see references to forms and UI elements in the SharpMap namespace, e.g. MapImage (a Windows user control?). But when I look in the SharpMap.dll I don't see any of this. How do I get access to the Windows controls?

Hi, have a look in the SharpMap.UI project; the control is there.

I don't have a SharpMap.UI project - I have done the binary downloads of SharpMap-0.9-Trunk-2010.10.21 and SharpMap.Extensions-0.9-Trunk-2010.10.21. What do I need to download to have access to the UI objects?

Please download the latest code and compile the SharpMap solution yourself. Hth, FObermaier
http://sharpmap.codeplex.com/discussions/270341
xml ?????

I am testing XML in Windows and Mozilla but the results come out differently. Why? Which one should I use for XML? I just started using XML. If you have any simple code or a tutorial, could you tell me about it? Bye for now.

If you have unstyled XML, Internet Explorer applies its own XSLT transformation to it to generate a friendly tree interface. This has several distinct advantages and disadvantages that I'd rather not get into at the moment. Mozilla on the other hand simply applies no stylesheet if you don't specify one, i.e. everything is inline and unstyled. With later 1.2 builds you can tell it to apply the tree transformation however, as it started coming with one that someone created. I typically use Mozilla for all of my XML work, because it actually supports stuff. IE does not support namespaces to the extent it knows how to render content in an XHTML namespace within a document, which makes it pretty much useless. Mozilla on the other hand has no problem rendering a XUL document with inline SVG and MathML for example. After that basic requirement, Moz has simple XLink support, RDF support, superior CSS1/2 (and parts of CSS3) support, and a good XSLT transformation engine (among others). (IE also has an excellent XSLT engine, but this is the only thing it really does have). The list continues on and on - Mozilla is the only platform offering such extensive support for rendering XML markup.

Do you mind explaining XML with simple code for me? Because I would like to test it in Mozilla's platform. Also I would like to know when I should use XML. Thanks.

Well, what do you intend to use XML for? 
XML is primarily a format for defining languages -- XHTML is one XML language which is also HTML 4.01. RDF is an XML language for describing data (metadata, basically). You can create your own generic XML language to contain data specific to your application, whatever that is. The most important question is what do you plan to do? Answer that question, and we can tell you whether XML is right for you or not.

"The first step to confirming there is a bug in someone else's work is confirming there are no bugs in your own." June 30, 2001
author, Verbosio prototype XML Editor
author, JavaScript Developer's Dictionary
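To make the first reply concrete, here is a minimal, hedged example (element names and file name invented): an XML document with an explicit stylesheet declaration, so both browsers render it styled instead of falling back to their different defaults for unstyled XML -- IE's built-in tree view versus Mozilla's inline, unstyled rendering:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="style.css"?>
<note>
  <to>Reader</to>
  <body>Rendered with the attached stylesheet in both IE and Mozilla.</body>
</note>
```

Without the xml-stylesheet processing instruction, the two browsers show this document very differently, which is exactly the mismatch the original poster observed.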
http://www.codingforums.com/xml/10306-xml.html?s=6581e88ba719d3ae26542c6ecb778c97
Introduction

We will see that C# allows suspending the verification of code by the CLR to allow developers to directly access memory using pointers. Hence with C#, you can perform, in a standard way, certain optimizations which were only possible within unmanaged development environments such as C++. These optimizations concern, for example, the processing of large amounts of data in memory such as bitmaps.

Pointers and unsafe code

C++ has no notion of code management. This is one of the advantages of C++ as it allows the use of pointers and thus allows developers to write optimized code which is closer to the target machine. This is also a disadvantage of C++ since the use of pointers is cumbersome and potentially dangerous, significantly increasing the development and maintenance effort required. Before the .NET platform, 100% of the code executed on the Windows operating system was unmanaged. This means the executable contains the code directly in machine instructions which are compatible with the type of processor (i.e. machine language code). The introduction of the managed execution mode with the .NET platform is revolutionary. The main sources of hard-to-track bugs are detected and resolved by the CLR. Amongst these: The CLR knows how to manipulate three kinds of pointers: Since it allows directly manipulating the memory of a process through the use of an unmanaged pointer, unsafe code is particularly useful to optimize certain processing of large amounts of data stored in structures. A region of code using pointers must be marked with the unsafe keyword: unsafe{...} Let us mention that if a method accepts at least one pointer as an argument or as a return value, the method (or its class) must be marked as unsafe, and all regions of code calling this method must also be marked as unsafe.

.NET types that support pointers

For certain types, there is a dual type, the unmanaged pointer type which corresponds to the managed type. A pointer variable is in fact the address of an instance of the concerned type. 
The set of types which authorize the use of pointers is limited to value types, with the exception of structures with at least one reference type field. Consequently, only instances of the following types can be used through pointers: primitive types; enumerations; structures with no reference type fields; pointers.

Declaring pointers

A pointer might point to nothing. In this case, it is extremely important that its value be set to null (0). In fact, the majority of bugs due to pointers come from pointers which are not null but which point to invalid data. The declaration of a pointer on the type FooType is done as follows:

FooType * pointer;

For example:

long * pAnInteger = 0;

Note that the declaration...

int * p1,p2;

...makes both p1 and p2 pointers to an integer, since in C# the * belongs to the type (contrary to C++, where p2 would be a plain int).

Indirection and dereferencing operators

In C#, we can obtain a pointer on a variable by using the address-of operator &. For example:

long anInteger = 98;
long * pAnInteger = &anInteger;

We can access the object through the indirection operator *. For example:

long anInteger = 98;
long * pAnInteger = &anInteger;
long anAnotherInteger = *pAnInteger;
// Here, the value of 'anAnotherInteger' is 98.

The sizeof operator

The sizeof operator allows obtaining the size in bytes of instances of a value type. This operator can only be used in unsafe mode. For example:

int i = sizeof(int);    // i is equal to 4
int j = sizeof(double); // j is equal to 8

Pointer arithmetic

A pointer on a type T can be modified through the use of the '++' and '--' unary operators. The '-' operator can also be used with pointers. Two pointers of the same or different types can also be compared. The supported comparison operators are: == != < > <= >=

Pointer casting

Pointers in C# do not derive from the Object class and thus boxing and unboxing do not exist for pointers. However, pointers support both implicit and explicit casting. 
Implicit casts are done from any type of pointer to a pointer of type void*. Explicit casts are done from: any pointer type to any other pointer type; any pointer type to the integral types sbyte, byte, short, ushort, int, uint, long and ulong; these integral types to any pointer type.

Double pointers

Let us mention the possibility of using a pointer on a pointer (although somewhat useless in C#). Here, we talk of a double pointer. For example:

long aLong = 98;
long * pALong = &aLong;
long ** ppALong = &pALong;

It is important to have a naming convention for pointers and double pointers. In general the name of a pointer is prefixed with 'p' while the name of a double pointer is prefixed with 'pp'.

Pinned object

The garbage collector has the possibility of physically moving the objects for which it is responsible. Objects managed by the garbage collector are generally reference type instances while pointed objects are value type instances. If a pointer points to a value type field of an instance of a reference type, there will be a potential problem as the instance of the reference type can be moved at any time by the garbage collector. The compiler forces the developer to use the fixed keyword in order to tell the garbage collector not to move reference type instances which contain a value type field pointed to by a pointer. The syntax of the fixed keyword is the following:

class Article {
   public long Price = 0;
}
unsafe class Program {
   unsafe public static void Main() {
      Article article = new Article();
      fixed ( long* pPrice = &article.Price ) {
         // Here, you can use the pointer 'pPrice' and the object
         // referenced by 'article' cannot be moved by the GC.
      }
      // Here, 'pPrice' is not available anymore and the object
      // referenced by 'article' is not pinned anymore.
   }
}

If we had not used the fixed keyword in this example, the compiler would have produced an error as it can detect that the object referenced by article may be moved during execution. We can pin several objects of the same type in the same fixed block. If we need to pin objects of several types, you will need to use nested fixed blocks. 
You should pin objects as rarely as possible, and for the shortest possible duration, because while objects are pinned the work of the garbage collector is impaired and less efficient. Variables of a value type declared as local variables in a method do not need to be pinned, since they are not managed by the garbage collector.

Pointers and arrays

In C#, the elements of an array whose element type can be pointed to can be accessed by using pointers. Note that an array is an instance of the System.Array class and is stored on the managed heap, under the control of the garbage collector. Here is an example which shows the syntax but also an overflow of the array (which is detected neither at compilation nor at execution!) due to the use of pointers:

   using System;
   public class Program {
      unsafe public static void Main() {
         // Create an array of 4 integers.
         int [] array = new int[4];
         for( int i=0; i < 4; i++ )
            array[i] = i*i;
         Console.WriteLine( "Display 6 items (oops!):" );
         fixed( int *ptr = array )
            for( int j = 0; j < 6; j++ )
               Console.WriteLine( *(ptr+j) );
         Console.WriteLine( "Display all items:" );
         foreach( int k in array )
            Console.WriteLine(k);
      }
   }

Here is the display (the last two values of the first list are whatever happened to lie in memory just after the array):

   Display 6 items (oops!):
   0
   1
   4
   9
   0
   2042318948
   Display all items:
   0
   1
   4
   9

Note that it is necessary to pin only the array itself, and not each element of the array. This confirms the fact that during execution, the value type elements of an array are stored in contiguous memory.

Fixed arrays

C# 2 allows the declaration of an array field composed of a fixed number of primitive elements within a structure. For this, you simply need to declare the array using the fixed keyword and the structure using the unsafe keyword. In this case, the field is not of type System.Array but is a pointer to the primitive type (i.e.
the FixedArray field is of type int* in the following example):

Example:

   unsafe struct Foo {
      public fixed int FixedArray[10];
      public int Overflow;
   }
   unsafe class Program {
      unsafe public static void Main() {
         Foo foo = new Foo();
         foo.Overflow = -1;
         System.Console.WriteLine( foo.Overflow );
         foo.FixedArray[10] = 99999;
         System.Console.WriteLine( foo.Overflow );
      }
   }

This example displays:

   -1
   99999

Understand that FixedArray[10] is a reference to the eleventh element of the array, since the indexes are zero based. Hence, writing to it actually assigns the 99999 value to the Overflow integer.

Allocating memory on the stack with the stackalloc keyword

C# allows you to allocate on the stack an array of elements of a type which can be pointed to. The stackalloc keyword is used for this, with the following syntax:

   public class Program {
      unsafe public static void Main() {
         int * array = stackalloc int[100];
         for( int i = 0; i < 100; i++ )
            array[i] = i*i;
      }
   }

None of the elements of the array are initialized, which means that it is the responsibility of the developer to initialize them. If there is insufficient memory on the stack, a System.StackOverflowException exception is raised. The stack is relatively small, so only arrays of at most a few thousand elements can be allocated this way. The array is freed implicitly when the method returns.

Strings and pointers

The C# compiler allows you to obtain a pointer of type char* from an instance of the System.String class. You can use this feature to circumvent managed string immutability. Recall that managed string immutability considerably eases their use; however, it can have a negative impact on performance. The System.StringBuilder class is not always the proper solution, and it can also be useful to directly modify the characters of a string.
The following example shows how to use this feature to write a method which converts a string to uppercase in place:

   public class Program {
      static unsafe void ToUpper( string str ) {
         fixed ( char* pfixed = str )
            for ( char* p = pfixed; *p != 0; p++ )
               *p = char.ToUpper(*p);
      }
      static void Main() {
         string str = "Hello";
         System.Console.WriteLine(str);
         ToUpper(str);
         System.Console.WriteLine(str);
      }
   }

Delegates and unmanaged function pointers

You can invoke a function defined in a native DLL through a delegate created from an unmanaged function pointer. In fact, thanks to the GetDelegateForFunctionPointer() and GetFunctionPointerForDelegate() static methods of the Marshal class, the notions of delegate and function pointer become interchangeable:

   using System;
   using System.Runtime.InteropServices;
   class Program {
      internal delegate bool DelegBeep(uint iFreq, uint iDuration);
      [DllImport("kernel32.dll")]
      internal static extern IntPtr LoadLibrary(String dllname);
      [DllImport("kernel32.dll")]
      internal static extern IntPtr GetProcAddress(IntPtr hModule, String procName);
      static void Main() {
         IntPtr kernel32 = LoadLibrary( "Kernel32.dll" );
         IntPtr procBeep = GetProcAddress( kernel32, "Beep" );
         DelegBeep delegBeep = Marshal.GetDelegateForFunctionPointer(
            procBeep, typeof( DelegBeep ) ) as DelegBeep;
         delegBeep(100,100);
      }
   }

This article is extracted from Practical .NET2 and C#2 by Patrick Smacchia. Patrick Smacchia is a .NET MVP involved in software development for over 15 years. He is the author of Practical .NET2 and C#2, a .NET book conceived from real world experience with 647 compilable code listings. ©2016 C# Corner. All contents are copyright of their authors.
http://www.c-sharpcorner.com/UploadFile/PatrickSmacchia/CSharp2UnsafeCode02162006063859AM/CSharp2UnsafeCode.aspx?ArticleId=1d6d828d-4b8b-45dc-86f2-e3c6718bacc9
A way of encoding 8-bit characters using only ASCII (American Standard Code for Information Interchange) printable characters, similar to UUENCODE. UUENCODE embeds a filename where BASE64 does not. You will see BASE64 used in encoding digital certificates, and in encoding the user:password string in an Authorization: header for HTTP (Hypertext Transfer Protocol). The spec is described in RFC (Request For Comment) 2045. There are actually three kinds of base64: BASE64 is a scheme where 3 bytes are concatenated, then split to form 4 groups of 6-bits each; each 6-bit group gets translated to an encoded printable ASCII character via a table lookup. An encoded string is therefore longer than the original by about 1/3. The = character is used to pad the end out to an even multiple of four. Base 64 armouring uses only the characters A-Z, a-z, 0-9 and +/=. This makes it suitable for encoding binary data as SQL (Structured Query Language) strings that will work no matter what the encoding. Unfortunately + / and = all have special meaning in URLs (Uniform Resource Locators). I have written source code for encoding/decoding BASE64 that you can download. Oracle has an undocumented method called sun.misc.BASE64Encoder.encode. There is a non-public class in Java 1.4+ called java.util.prefs.Base64. JavaMail MimeUtility.decode can encode and decode a number of encodings. Starting with JDK (Java Development Kit) 1.8, Java now has official Base64 support built in.
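Since the entry notes that JDK 1.8 ships official Base64 support, the 3-bytes-to-4-characters expansion and the = padding described above can be demonstrated with the standard java.util.Base64 class (a minimal illustration; the input strings are arbitrary):

```java
import java.util.Base64;

public class Base64Demo {
    public static void main(String[] args) {
        // 3 input bytes ("Man") become exactly 4 output characters:
        // the string grows by about 1/3, as described above.
        String encoded = Base64.getEncoder().encodeToString("Man".getBytes());
        System.out.println(encoded); // TWFu

        // A single input byte is padded with '=' out to a multiple of four.
        System.out.println(Base64.getEncoder().encodeToString("M".getBytes())); // TQ==

        // Decoding reverses the table lookup and regrouping.
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(new String(decoded)); // Man
    }
}
```

Note that getBytes() without a charset uses the platform default encoding; for pure-ASCII input such as this it makes no difference.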
https://www.mindprod.com/jgloss/base64.html
Hi guys! I have a page (AddSomething); this page inserts some data in a database and stores the data in a session too. This works like a shopping cart: a view of the products plus a way to add products. I have 2 action methods, update_view and insert_data; the first updates the data in the session and shows the same page, the second inserts the data into the database and shows a proper message. I want to subclass the AddSomething class as an EditSomething class; the data shown is the same but the actions should differ. I may reimplement the insert_data method to do an update of the data rather than an insertion... I may reimplement the products menu (where I list which products I can insert) to redirect to the proper page. The problem is that when I subclass AddSomething and try to access the insert_data or the update_view methods, it gives back the AddSomething page even if I reimplement the methods:

...
def update_view(self):
    for k, v in self.request().fields().items():
        self.session().setValue(k, v)
    self.writeHTML()
...

What can I do to fix it? Thanks for help...
=====
-- Michel Thadeu Sabchuk
Curitiba/PR

Hugh! It was my fault :) Sorry, I forgot to change the form action. In these cases (when subclassing the page), I always use self.__class__.__name__ as the action; this time I forgot to. Sorry for the noise, see ya.
=====
-- Michel Thadeu Sabchuk
Curitiba/PR
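The fix described in the follow-up, deriving the form action from self.__class__.__name__ so that a subclassed page posts back to itself, can be sketched outside of Webware like this (the class and method names here are illustrative, not Webware's actual API):

```python
class AddSomething:
    """Base page: renders a form that posts back to the current page class."""

    def form_action(self):
        # Using the concrete class name means a subclass automatically
        # posts back to its own URL instead of to AddSomething's.
        return self.__class__.__name__

    def write_form(self):
        return '<form action="%s" method="post">...</form>' % self.form_action()


class EditSomething(AddSomething):
    """Subclass: same form, but its action now targets EditSomething."""


print(AddSomething().write_form())   # action="AddSomething"
print(EditSomething().write_form())  # action="EditSomething"
```

With a hard-coded action="AddSomething" in the base class, both pages would post to AddSomething, which is exactly the symptom described in the first message.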
http://sourceforge.net/p/webware/mailman/webware-discuss/thread/20040709134453.96124.qmail@web40405.mail.yahoo.com/
GNU IDN Library - Libidn

Introduction

Table of Contents
- Introduction
- News
- Try it
- Documentation
- Downloading
- Support
- Development
- Bugs
- Related implementations
- How to use it?
- Libidn2

News

Note that new releases are only mentioned here if they introduce a major feature or are significant in some other way. Read the help-libidn mailing list if you seek more frequent announcements.
- 2012-01-10: An infloop bug was fixed for the pr29 functions. The library has been relicensed to dual-GPLv2+|LGPLv3+. See the Libidn 1.24 announcement.
- 2011-05-04: Quality Assurance improvements: we publish clang-analyzer reports for the library.
- 2011-04-20: An IDNA2008 implementation called libidn2 is announced.
- 2008-10-07: Quality Assurance improvements: we publish cyclomatic code complexity charts and self-test code coverage charts.
- 2007-07-31: Version 1.0 is released, to indicate that Libidn is now considered stable. It has been used in production for several years with only minor issues found.
- 2007-05-31: Libidn is now developed in git instead of cvs; there is a public savannah git repository.
- 2006-06-07: Translation of error messages is working, and the library has been ported to Windows using MinGW.
- 2005-12-03: Version 0.6.0 includes a native C# port, contributed by Alexander Gnauck.
- 2004-11-08: GNU/Linux distribution Fedora Core 3 includes Libidn version 0.5.6.
- 2004-10-02: Version 0.5.6 includes functions (e.g., idna_strerror) to translate from return codes to human readable text strings.
- 2004-06-26: Version 0.5.0 includes a module to detect "problem sequences" for normalization as discussed in PR-29.
- 2004-06-01: Version 0.4.8 includes a native Java port, thanks to Oliver Hitz.
- 2004-04-30: People interested in the specifications behind libidn may be interested in a proposed change to NFKC by the Unicode Consortium. I have posted a message to the IDN WG mailing list asking for opinions on this, but apparently the list moderator is ignoring it.
- 2004-03-27: Recently a patch to GNU Libc has been incorporated, extending the getaddrinfo API based on my writeup. The API is being standardized.
- 2004-02-28: A NetBSD package exists.
- 2004-02-28: Version 0.4.0 includes an experimental API for (parts of) the TLD functionality described in draft-hoffman-idn-reg.
- 2004-01-30: A Perl module Net::LibIDN that provides Perl bindings for Libidn is available, thanks to Thomas Jacob. The page also includes a patch that adds TLD-specific awareness to Libidn.
- 2004-01-06: A FreeBSD ports package is available, thanks to Kirill Ponomarew.
- 2004-01-01: Savannah had problems last month, and still isn't operating fully. CVS has been moved to a private machine; a read-only mirror of it will hopefully be available via Savannah in the future.
- 2003-10-29: A project with the goal of providing PHP bindings for the Libidn API has been started by Turbo Fredriksson.
- 2003-10-11: Precompiled binaries for Mandrake 9.2 are available, built as part of glibc and as an RPM package, thanks to Oden Eriksson.
- 2003-10-02: Version 0.3.1 fixes all problems discovered during IDNConnect.
- 2003-06-26: Precompiled binaries for Cygwin are available, thanks to Gerrit P. Haase.
- 2003-02-26: Version 0.1.11 includes a command line tool and an Emacs Lisp interface.
- 2003-02-21: Debian includes libidn, thanks to Ryan M. Golbeck.
- 2003-02-12: Version 0.1.7 uses the official IDNA ACE prefix 'xn--'.
- 2003-01-28: Version 0.1.5 can be built as an add-on to GNU Libc; available are detailed instructions and example code demonstrating the new getaddrinfo() API.
- 2003-01-08: Added a simple patch demonstrating support for IDN in the GNU InetUtils ping utility.
- 2003-01-05: Version 0.1.0 released with Punycode and IDNA.
- 2003-01-03: Libidn is an official GNU project.
- 2002-12-26: Moved project to savannah. Initiated renaming of the library from "libstringprep" to "libidn", as the next release will implement Punycode and IDNA too.
- 2002-12-13: Version 0.0.8 is ported to 20+ platforms, including Microsoft Windows.
- 2002-11-07: Version 0.0.2 is now used by GNU SASL.
- 2002-11-05: Initial release of version 0.0.0.

Information on what is new in the library itself is found in the NEWS file (live version).

Try it

A web interface to libidn is available online. Try libidn before you buy it. A simple IDN web server is also available.

Documentation

Refer to the Libidn Manual web page for links to the manual in all formats, together with quick links to the most popular formats. You may also be interested in a preliminary document with Nameprep and IDNA test vectors. See also the various standard texts:
- IDNA specification
- Punycode specification
- Stringprep specification
- Standard profiles
- Expired profiles
- TLD specification
- IANA Registry for Stringprep Profiles

Downloading

Libidn can be found on [via HTTP] and [via FTP]. It can also be found on one of our FTP mirrors; please use a mirror if possible. All official releases are signed with an OpenPGP key with fingerprint 0xB565716F.

Support

If you are interested in paid support for Libidn, or in sponsoring the development, please contact me. If you provide paid services for Libidn and would like to be mentioned here, also contact me. If you find Libidn useful, please consider making a donation. No amount is too small!

Development

There is a Savannah Libidn project page. You can check out the sources by using git as follows:

$ git clone git://git.savannah.gnu.org/libidn.git

The online git interface is available. Notifications of each commit are sent to a commit mailing list. There is a Libidn autobuild page. For every release, we publish cyclomatic code complexity charts for the package. There are also self-test code coverage charts available. Finally, clang-analyzer output is also available.

Bugs

Report all problems to bug-libidn@gnu.org, but please read the manual on how to report bugs first.
Related implementations

The following is a list of links to other free IDN, or otherwise related, implementations. The list is not conclusive; suggestions appreciated. Projects using GNU Libidn include:
- GNU Emacs, in the Gnus news reader.
- GNU Libc
- GNU Shishi
- GNU SASL
- jabberd
- Mutt mail reader.
- Elinks web browser
- Gloox, a Jabber/XMPP library
- KDE, for all domain name lookups
- Net::LibIDN, perl bindings
- LibIDN Ruby bindings
- cURL
- PHP IDNA Extension

Projects using libidn2 include:

Let us know about more projects that use GNU Libidn!

How to use it?

Read data from the user, convert it to UTF-8 and then pass it to stringprep(). Example code below (it is included in the distribution as example.c). To simplify compiling, use libtool and pkg-config. More information and more examples are included in the manual. See also the other example*.c files in the source distribution on how to use other features of the library (punycode, IDNA).

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stringprep.h>

/*
 * Compiling using libtool and pkg-config is recommended:
 *
 * $ libtool cc -o example example.c `pkg-config --cflags --libs libidn`
 * $ ./example
 * Input string encoded as `ISO-8859-1': ª
 * Before locale2utf8 (length 2): aa 0a
 * Before stringprep (length 3): c2 aa 0a
 * After stringprep (length 2): 61 0a
 * $
 */

int main(int argc, char *argv[])
{
  char buf[BUFSIZ];
  char *p;
  int rc, i;

  printf("Input string encoded as `%s': ", stringprep_locale_charset());
  fflush(stdout);
  fgets(buf, BUFSIZ, stdin);

  printf("Before locale2utf8 (length %d): ", strlen(buf));
  for (i = 0; i < strlen(buf); i++)
    printf("%02x ", buf[i] & 0xFF);
  printf("\n");

  p = stringprep_locale_to_utf8(buf);
  if (p)
    {
      strcpy(buf, p);
      free(p);
    }
  else
    printf("Could not convert string to UTF-8, continuing anyway...\n");

  printf("Before stringprep (length %d): ", strlen(buf));
  for (i = 0; i < strlen(buf); i++)
    printf("%02x ", buf[i] & 0xFF);
  printf("\n");

  rc = stringprep(buf, BUFSIZ, 0, stringprep_nameprep);
  if (rc != STRINGPREP_OK)
    printf("Stringprep failed with rc %d...\n", rc);
  else
    {
      printf("After stringprep (length %d): ", strlen(buf));
      for (i = 0; i < strlen(buf); i++)
        printf("%02x ", buf[i] & 0xFF);
      printf("\n");
    }

  return 0;
}

Libidn2

Libidn2 is an implementation of the IDNA2008 specifications (RFC 5890, RFC 5891, RFC 5892, RFC 5893). Libidn2 is a standalone library, without any dependency on Libidn. Libidn2 is believed to be a complete IDNA2008 implementation, but has yet to be as extensively used as the original Libidn library. Libidn2 uses GNU libunistring for Unicode processing and GNU libiconv for character set conversion. Libidn2 can be downloaded from [via HTTP] and [via FTP]. It can also be found on one of our FTP mirrors; please use a mirror if possible. The following documentation of libidn2 exists:
- Libidn2 HTML Manual, generated by Texinfo
- Libidn2 PDF Manual, generated by Texinfo
- API Manual, generated by GTK-DOC

You may browse the source code git repository. For Quality Assurance, we publish code coverage reports and clang static analyzer output. Initial development of Libidn2 has been sponsored by DENIC.
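The ACE prefix 'xn--' mentioned in the news above is easy to observe with any IDNA implementation; here is a quick illustration using Python's built-in 'idna' codec (an IDNA2003 implementation chosen only because it is widely available; it is unrelated to libidn itself):

```python
# ToASCII: each non-ASCII label is nameprepped, punycode-encoded,
# and given the 'xn--' ACE prefix.
ace = "bücher.example".encode("idna")
print(ace)  # b'xn--bcher-kva.example'

# ToUnicode reverses the transformation.
print(ace.decode("idna"))  # bücher.example

# Pure-ASCII labels pass through unchanged, with no prefix added.
print("example.org".encode("idna"))  # b'example.org'
```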
http://www.gnu.org/software/libidn/index.html
cue points - irtony, Sep 7, 2010 4:50 AM

Hello. I have not worked with cue points before and I'm sure this has a simple solution. I have a very short video I have converted to flv with 2 cue points. I'm trying to get a movie clip to run at the first cue point, but my code isn't working right. I'm working in ActionScript 2 and Flash CS4 and here is the script:

cueListener.cuePoint = function(eventObject:Object):Void {
    if (eventObject.info.name == "recycle") {
        mc_recycle.gotoandPlay(1);
    }
    if (eventObject.info.name == "ASCuePt2") {
        stop();
    }
}

I want the video to only run once and not rerun, so I put the stop in. Can someone tell me what I'm doing wrong? Thanks!

1. Re: cue points - kglad, Sep 7, 2010 7:34 AM (in response to irtony)
are you using as2? did you instantiate cueListener? are you using an flvplayback component? if yes to all, have you assigned your component to the listener? that should look something like:

flv.addEventListener("cuePoint", cueListener);

2. Re: cue points - irtony, Sep 7, 2010 7:54 AM (in response to kglad)
I'm using ActionScript 3 on this. I have an FLVPlayback component and it does show the cue points, but whenever I try to get the movie clip to play it throws an error. Should it be an event cue point or a navigation cue point?

3. Re: cue points - kglad, Sep 7, 2010 8:02 AM (in response to irtony)

4. Re: cue points - irtony, Sep 7, 2010 8:16 AM (in response to kglad)
ok. Can you talk me through this? I've never worked with Flash AS3 and video before. I have a movie with a cue point. I add it to the stage with an FLVPlayback component. I add an instance name of myVideo to the playback. I want a movie clip to run when the cue point is hit. The movie clip has an instance name of trashcan. The cue point name is recycle2. I have looked at all the tutorials and explanations on adobe.com and elsewhere and I am thoroughly confused. Frankly, the creators of AS3 should be publicly flogged for making this so difficult. Thanks.

5. Re: cue points - kglad, Sep 7, 2010 8:24 AM (in response to irtony)
use:

import fl.video.MetadataEvent;

myVideo.addEventListener(MetadataEvent.CUE_POINT, f);

function f(e:MetadataEvent):void {
    if (e.info.name == "recycle2") {
        trashcan.play();
    }
}
https://forums.adobe.com/thread/716191
Since I have been using Code Project as a reference for several years now, I finally decided to make a few contributions of my own. I have been writing some web part components for SharePoint for about two years and while it is not something I do on a regular basis I have made a few accidental discoveries that I will share with the readers of Code Project. Some of the Microsoft SharePoint Documentation is not absolutely clear, and clarifying this is one of my goals for this article. I also decided that one of the best, most stable, and visible resources on the web for Microsoft Programmers is Code Project and thus far Code Project has lacked many articles on SharePoint Technologies Web Parts. That being one of my main interests, I decided I would help fill that void as best I can manage. This particular article will focus on deployment of Web Part Components and cover the usage of the STSADM.exe utility that is an essential element of SharePoint Management. I will also cover the basics of creating a cab file in Visual Studio. There is an article here on Code Project that lays out the methods of building cab file manually. In my previous article (Part I) I covered elements of creating a basic Web Part. There is no code included with this article as it is about adding details to the requirements of effectively deploying SharePoint Web Parts. The main and most important utility for working with SharePoint is the stsadm.exe utility. STSADM fulfills several roles, some of which can be accomplished in the SharePoint web interface. One thing that the web interface can't do for you is install Web Parts so you are left with two choices in that regard. You can deploy Web Parts via an MSI (Microsoft Installer) file or you can do it via specially constructed cab files. The cab file install is the simplest and most easily created installation method. 
MSI would require an MSI editor such as the infamous and very basic Orca (from the MS SDK), the Wise for MSI authoring environment, which is pretty easy, or the equally easy InstallShield environment. There are other MSI authoring environments, but I am mostly not acquainted with them and won't bother to list them.

STSADM

So let's settle in on using the cab builder that comes with Visual Studio 2003. If you haven't used it yet, you will if you are going to develop Web Parts. Assuming you have begun a Web Part Component project and this is included in a VS Solution, go to the root of the solution and add a new project, then go to Setup and Deployment Projects and click on the Cab template. You should choose an appropriate folder location in the file system. I usually add it to the root of my project so I don't have to search around to find the cab when I actually go to deploy it.

The Cab project is now an empty project. You will eventually need to add the Primary Output and Content files from the main Web Part Project. First though, you have to fulfill the other requirements of a properly defined Web Part deployment cab; you will need at least two more items. You need to add two items to your Web Part Project: the Manifest (Manifest.xml), which is required for a functional cab installation, and the filename.dwp, which supplies information to SharePoint itself.

<!--Sample Manifest.xml file-->
<?xml version="1.0"?>
<!-- You need only one manifest per CAB project for Web Part Deployment.-->
<!-- This manifest file can have multiple assembly nodes.-->
<WebPartManifest xmlns="">
  <Assemblies>
    <Assembly FileName="AKWebPart.dll">
      <!-- Use the <ClassResource> tag to specify resources like image files
           or JScript files that your Web Parts use. -->
      <!-- Note that you must use relative paths when specifying resource files. -->
      <!--
      <ClassResources>
        <ClassResource FileName="Resource.jpg"/>
      </ClassResources>
      -->
      <SafeControls>
        <SafeControl Namespace="AKWebPart" TypeName="*" />
      </SafeControls>
    </Assembly>
  </Assemblies>
  <DwpFiles>
    <DwpFile FileName="AKWebPart.dwp"/>
  </DwpFiles>
</WebPartManifest>

As you can see in the above example Manifest, you have an entry for the dll (in this case AKWebPart.dll) and another for the .dwp file (AKWebPart.dwp). Additionally you have an entry for the main Assemblies Namespace. The cab, when generated, will add an additional .osd file which actually directs the setup API and the stsadm.exe utility as to how to handle the installation of your cab file.

<!--Example Generated OSD File -->
<?XML version="1.0" ENCODING='UTF-8'?>
<!DOCTYPE SOFTPKG SYSTEM "">
<?XML::namespace href= as="MSICD"?>
<SOFTPKG NAME="setupakwebpart" VERSION="1,0,0,0">
  <TITLE> setupakwebpart </TITLE>
  <MSICD::NATIVECODE>
    <CODE NAME="AKWebPart">
      <IMPLEMENTATION>
        <CODEBASE FILENAME="AKWebPart.dll">
        </CODEBASE>
      </IMPLEMENTATION>
    </CODE>
  </MSICD::NATIVECODE>
</SOFTPKG>

The DWP is the next critical element in a properly formed cab deployment file; an example:

<!--Sample Web Part dwp file -->
<?xml version="1.0" encoding="utf-8"?>
<WebPart xmlns="" >
  <Title>Sample Web Part</Title>
  <Description>A demonstration web part</Description>
  <Assembly>AKWebPart</Assembly>
  <TypeName>AKWebPart.AKWebPart</TypeName>
  <!-- Specify initial values for any additional base class or custom properties here. -->
</WebPart>

In the DWP file you can see that there is a reference to the XML Namespace for Web Parts V2. You will see a place for Title and Description; both of these are useful in helping you locate your Web Part in a Web Part Gallery that can get filled very quickly. You would do well to consider giving good names to the web parts you might create. The Assembly is the Namespace of the component and the TypeName references the class name of your Web Part prefixed by the namespace.
When you get to actually deploying a web part, you will see why this is important, so don't get bored yet.

So let's assume you have successfully compiled your Web Part and have generated the cab file; you will need to perform the next step in the process. You will need to copy your completed cab to the intended development server for some testing. You can accomplish this through several methods, for example a UNC copy such as //servername/C$/Program Files/Microsoft Shared/web server extensions/60/bin. This is the location of the stsadm.exe utility, and this is what will help get your new cab file containing your new web part installed on the test server. STSADM is critical to deploying your new part. It has a number of parameters required in order to fully install a Web Part. Some examples:

stsadm -o addwppack -filename yourcab.cab -url

Simply put: -o is the operation to be performed, -filename is your new cab file, and -url is the Web server's name in URL format. stsadm has a few other parameters that are useful to know when installing and removing Web Parts.

To remove a Web Part:

stsadm -o deletewppack -name yourcab.cab -url

To get a list of Web Parts installed after the initial deployment of SharePoint (the built-in default web parts don't show in stsadm):

stsadm -o enumwppacks -url

Make note that the parameters change subtly from one operation to the other: adding parts uses addwppack -filename, deleting parts uses deletewppack -name, and listing parts uses enumwppacks. They all contain subtle differences, so typing errors can occur.

After a successful deployment using addwppack, you will need to perform two more operations before you can test your new Web Part.

First you have to reset IIS in order to make your new parts visible in SharePoint. Here you get to use the venerable iisreset command; simple enough.
Run iisreset and you will see the message "Attempting Stop..."; after successfully stopping you will see "Attempting Start...", and with only a little luck you will succeed in restarting the server.

You will need to go into SharePoint's web interface to make your new Web Part accessible to you and your users, and do the following: click on the appropriate checkbox and then click on the Populate Gallery button.

There is the possibility that you could author the installation with a product like Wise, InstallShield, a Visual Studio Setup Project or other MSI authoring utilities, but I am not prepared to tackle that one at the moment. Perhaps I can add another article in the future regarding Web Part installs via MSI.

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here.

jabailo wrote: Is the information in this article relevant for WSS 3.0?
jabailo wrote: Can I use VS.NET 2003 to program a Web Part for WSS 3.0 or do I have to upgrade to 2005?
http://www.codeproject.com/Articles/13685/Fundamentals-of-SharePoint-Web-Parts-Part-II-Deplo