text
stringlengths 454
608k
| url
stringlengths 17
896
| dump
stringclasses 91
values | source
stringclasses 1
value | word_count
int64 101
114k
| flesch_reading_ease
float64 50
104
|
|---|---|---|---|---|---|
written by Eric J. Ma on 2020-08-21 | tags: data science software engineering software skills
At this year's SciPy 2020, a talk and proceedings paper caught my eye: "Software Engineering as Research Method". (Here are links to the paper and the talk.) In it, the authors, Mridul Seth and Sebastian Benthall, detailed the benefits that software skills bring to the academic and industrial research communities, both from the perspective of making scientific progress and from the perspective of pedagogy.
I've been doing some reflection of my own on how software skills have helped me tremendously, as a data scientist. Here's my thoughts on them, using the example of random number generators and probability distributions.
In building scientific software,
we are recognizing pragmatically useful categories around us,
and formalizing them in a language.
Let's look at a really simple example of drawing
1s and
0s
from a Bernoulli distribution.
If we were to write a Bernoulli draws generator in Python code, we might implement it as follows:
from random import random p = 0.5 draws = [] for i in range(num_draws): draws.append(int(random() > p))
This is purely imperative style programming. Without going too deep (pun intended) into definitions, by using mostly built-in primitives of the Python language, we are operating at a fairly low-level paradigm.
Now, let's imagine a world where the Bernoulli distribution object does not exist. I have a collaborator who wants to build on top of Bernoulli distributions to make other things. They would have to copy/paste this imperative block of code into their programs.
What can we do to alleviate this, then?
One thing we can try is to encapsulate it in a function! The implementation might look like this:
from random import random def bernoulli(num_draws, p): draws = [] for i in range(num_draws): draws.append(int(random() > p)) return p
But wait, what are the semantics of
p?
Is it the probability of obtaining class 1, or class 0?
The behaviour isn't documented in the class,
so let's add some of it in.
from random import random def bernoulli(num_draws: int, p: float) -> List[int]: """ Return a sequence of Bernoulli draws. :param p: The probability of obtaining class 0. :num_draws: The number of Bernoulli draws to make. Should be greater than zero. :returns: A list of 1s and 0s. """ draws = [] for i in range(num_draws): draws.append(int(random() > p)) return draws
Now, we can do a few things with this implementation. Firstly, we can write a test for it. For example, we can assert that numbers drawn from the Bernoulli are within the correct support:
def test_bernoulli(): draws = bernoulli(10, 0.3) assert set(draws).issubset([1, 0])
Secondly, we can compose it in bigger programs, such as a Binomial generator:
def binomial(num_draws: int, n: int, p: float) -> List[int]: draws = [] for i in range(num_draws): num_ones = n - sum(bernoulli(n, p)) draws.append(num_ones) return draws
Please excuse my lack of docstrings for a moment!.
The probability density function is unique property of a probability distribution; it's also an invariant property. As such, it is something we can instantiate once and forget about later. Perhaps using an object-oriented paradigm might be good here. Let's see this in action:
class Bernoulli: def __init__(self, p): self.p = p def pdf(self, data: int): if data not in [0, 1]: raise ValueErrror("`data` must be either 0 or 1!") if data == 0: return p return 1 - p def rvs(self, n: int): draws = [] for i in range(num_draws): draws.append(int(random() > p)) return draws
Now, with this object, we've made using the category of Bernoulli distributions much easier!
b = Bernoulli(0.5) draws = b.rvs(10) # draw 10 numbers pdfs = [b.pdf(d) for d in draws] # calculate pdf of data
An object-oriented paradigm here also works well,
because the way
.rvs() and
.pdf() have to be implemented
for each probability distribution is different,
but the inputs to those functions are similar
across the entire category of things
called "probability distributions". useful and verifiable models of the world is a core activity of the quantitative sciences.
By exploring and structuring probability distributions in a logical fashion, my pedagogical exploration of probability distributions as a modelling tool is reinforced with the following main ideas:
And by structuring a probability distribution as an object with class methods, our mental model of the world of probability distributions is made much clearer.
Yes, research discipline, and in both ways. Here’s what I mean.
Discipline (a la "rigour") in thought means thinking clearly about how the thing we are working with fits most naturally in relation to other things. Nudging ourselves to make better software makes us better thinkers about a problem class. Bringing this level of discipline gives us an opportunity to organize the little subject matter discipline that we inhabit.
As a discipline (i.e. "way of thought"), well-written software also enables us to build on top of others' thoughtfully laid out tooling. As such we are
Contrary to a generation of us that might think that "choice in everything is the ultimate", a stable base is what a field really!
As such, shared stability in the base gives us confidence to build on top of building software.
The act of organizing the world around us into categories with properties and relationships is made concrete when we build software. By organizing the conceptual world into layers of abstractions and sequences of actions, we bring a clarity of thought to the flow of work. I would argue that this can only help with the construction of a consistent and productive view of a field..
Some of the work I engage in at NIBR
involves fairly non-standard models.
These are the models for which
a
scikit-learn
scikit-learn's API for model construction,
and leveraging
jax for speed and expressiveness. more generally correct.
We invest time up-front working on documentation. By documentation, I mean at least the following:
Investing time up-front for documentation is actually a great thing to do. By putting in workflow examples, we actually leave for ourselves copy/paste-able workflows that can greatly speed up the ramp-up phase of a new project that leverages what we have built.!
Whenever I go to my tribe of data scientists at work and advocate for better software skills, I often get a myriad of responses. A small sliver agree with me and actively put it into practice. A much larger slice agree, other factors stop them from getting started; these might be lack of knowledge, or too much external pressure to deliver results, and as such no action is taken..
|
https://ericmjl.github.io/blog/2020/8/21/software-engineering-as-a-research-practice/
|
CC-MAIN-2022-33
|
refinedweb
| 1,126
| 55.54
|
Resampling from daily to monthly - lagging issue
I am facing a lagging issue when resampling my daily data feed to a monthly time frame.
I added a regular daily feed to my cerebro instance. I then used
cerebro.resampledatato add a monthly datafeed to it. I would like to use the monthly data feed to compute a SMA indicator. Trades will be executed on the daily data feed.
I realized that the resampled monthly data feed is only available at the first trading day of a new month. That looks a bit weird to me. I would expect it to be available at the close of any given month.
Take a look at the screenshot. I created a writer to analyze the pricing. My daily data starts at 2019-12-20. The first monthly closing data point can be seen in line 5. However, this is already 2020-01-02 for the daily data feed.
Am I missing something?
@run-out I am totally with you. What bothers me is the fact, that the very first value is not available at line 4. At the close of 2019-12-30, I am able to resample the daily data into the monthly time frame.
Let me elaborate a little bit on this issue. I need to compute trading signals as per month end (based on the monthly data feed), execution will take place on the opening of the first trading day of the next month (based on the daily data feed). Now, when computing an SMA on monthly basis, I do know the new value as soon as I do have a closing price for the month (in contrast to get that value one day later),
Let's say, I want to generate a long signal, if my monthly closing price is above the monthly SMA. If I were to use monthly data exclusively, backtrader would generate this long signal as per close of the last trading day in the month (i.e. closing price of the monthly candle). And, equally important, this closing price will be part of the latest SMA value.
However, this is not the case when resampling the data, as can be seen in the screenshot. As a result, on the last trading day, I am comparing the correct closing value with an outdated SMA value.
As a general rule, the monthly OHLC values on a daily basis should be the same for the whole month with the exception of the last trading day of the month. Here, we get the next/new OHLC value.
Here's backtrader's result with more lines:
All the columns that I marked yellow should be moved upward by one row. Here's what I expect:
Are you with me?
@andi I get what you are saying and this is not an uncommon misunderstanding when starting to use backtrader. If you come for a world of pandas or spreadsheets, what you are saying is right. All of the ohlcv data gets used to calculate anything in that day, and shows up in the same line as the close.
However, when using backtrader, once the close of the bar is past, then that period is essentially over, and any indicators will be available for the next day, otherwise the indicator would be available in the same bar of the close from which it is calculated, which of course is impossible.
Both ways are correct, the data is accurate, and there is no error. It's simply a difference of when you use the date. I like to say:
When using pandas or spreadsheets, you are in the morning, when using backtrader, you are at night.
Hope that makes some sense.
@run-out Let me try to get hold of this topic from a different perspective.
Let's assume I am operating on monthly data exclusively, i.e. only one data feed/no resampling. Let's say, I want to issue a buy order right after the monthly close if the closing value is greater than 100. It is my understanding that backtrader will issue this order right after the close and it will get executed with the opening price of the next bar, i.e. the very first price of the following trading day (which is then actually the opening price of the next monthly candle). Is this understanding correct? That would be a real-life scenario.
Now let's compare this to a situation where I use a daily feed as well as a resampled monthly feed. I again want to get my long order executed with the opening price of the first trading day of the new month. However, if I base my trading decision on monthly data (closing > 100), I can't take the decision after the close on the last trading day, because the resampled monthly data is not updated yet. It simply doesn't reflect the actual monthly closing price.
So my question is, how can there be a difference between outright monthly data and resampled monthly data. Shouldn't it be equal??
I tried getting this to work using boundoff and rightedge but to no avail. This should have solved the problem, but my data was unchanged. I'm sure I'm missing something simple?
cerebro = bt.Cerebro() start = time.time(), rightedge=False)
You could also pursue cheat on open to access the next days data lines.
Let us know how you do.
@run-out I have got no access to my machine right now. As you know, I am just starting with backtrader, i. e. I am not familiar with rightegde and boundaries. However, if
rightedge=Falsedidn't do the trick, you could try
boundoff=1(just guessing on my side, trial & error) .
Anyway, I think it's a bit weird, that we have to scratch our heads about upsampling. In my view, all the standard settings should lead to the result that I laid out previously. I am wondering if it is possible that the resampling method has a bug? On the other hand, backtrader seems to be a very mature framework and I would be surprised that I should be the first one who stumbles upon this issue.
Using cheat-on-open doesn't sound right either. I don't know if this would solve the issue at hand. However, it may lead to other "issues" down the road. I don't want to cheat, but I would like to have a realistic setup.
What would you think would happen if I don't resample the daily feed but regularly add the monthly data as a second feed?
@andi This is interesting. I am testing boundoff with minute data and it's working fine. However, with daily data it is not doing what is expected. @dasch , I noticed you had some dealings with this, can you shed some light on why boundoff doesn't work with daily data?
import datetime import backtrader as bt class Strategy(bt.Strategy): def __init__(self): self.mn_ind = bt.If(self.datas[1] > self.datas[1](-1), 1, 0) self.traded = False def next(self): if self.mn_ind[0] == 1 and not self.traded: self.buy(self.datas[0]) self.traded = True print( f"{self.datas[0].datetime.datetime()}, " f"daily: {self.datas[0].close[0]}, " f"month: {self.datas[1].close[0]}, " f"ind: {self.mn_ind[0]}" ) if __name__ == "__main__": cerebro = bt.Cerebro() data = bt.feeds.GenericCSVData( dataname="data/dev.csv", dtformat=("%Y-%m-%d %H:%M:%S"), timeframe=bt.TimeFrame.Minutes, compression=1, date=0, high=1, low=2, open=3, close=4, volume=6, ) cerebro.adddata(data) cerebro.resampledata( data, name="minutely", timeframe=bt.TimeFrame.Minutes, compression=10, boundoff=3, rightedge=True, ) #, # compression=1, # boundoff=2, # rightedge=True, # ) cerebro.addstrategy(Strategy) # Execute cerebro.run()
@run-out so for the initial question, the feed will forward as soon as a date appears that is over the boundry of the current period. So only when 2020 starts, the data forwards. Possibilities to overcome this: add some kind of filter for datas or use replay for data.
for the boundoff and rightedge i will try to ellaborate in a further message.
@andi said in Resampling from daily to monthly - lagging issue:
What would you think would happen if I don't resample the daily feed but regularly add the monthly data as a second feed?
If I add another regular monthly data feed (instead of resampling my daily feed), all problems are gone. All prices behave as I previously laid out.
One is basically free to download the data with monthly periodicity or write your own resample function. I came up with something like this:
def datafeed_to_monthly( df: pd.DataFrame, ): """ Resamples a daily data feed to a monthly time frame. The monthly datetime index will excactly reflect the datetime index of the original datetime index. For example, if on the daily datetime index, the last trading day is 26th of May, this will be reflected in the resampled data. Parameters ---------- df Daily `pandas.DataFrame` comprising OHLC/OHLCV data. The dates must be the index of type `datetime`. Returns ------- pandas.DataFrame Resampled monthly OHLC/OHLCV data. """ df["date"] = df.index if len(df.columns) == 5: mapping = dict( date="last", open="first", high="max", low="min", close="last", ) else: mapping = dict( date="last", open="first", high="max", low="min", close="last", volume="sum", ) return df.resample("BM").agg(mapping).set_index("date")
As a result, I come to the conclusion that the implementation in the
resamplemethod is not what I would expect. I would probably consider it to be incorrect, at least with regards to resample daily to monthly. However, I am happy to discuss this interpretation.
@andi said in Resampling from daily to monthly - lagging issue:
As a result, I come to the conclusion that the implementation in the resample method is not what I would expect. I would probably consider it to be incorrect, at least with regards to resample daily to monthly. However, I am happy to discuss this interpretation.
@andi said in Resampling from daily to monthly - lagging issue:
I do know the new value as soon as I do have a closing price for the month (in contrast to get that value one day later),.
So backtrader will know about the date being the last one as soon as a date from new period appears. in your image with expected values you have all values shifted by -1. so id 4 will actually be your id 5.
You may fill some data with a datafiller filter.
See docu:
@dasch said in Resampling from daily to monthly - lagging issue:.
I am probably not fully aware of how the resampling works in backtrader. Your description implies (to me), that the resampling takes place at runtime, maybe in the
nextmethod? If this is the case, I can follow your description.
However, I thought the resampling takes place before I added this feed to
cerebro. If backtrader resamples before running the strategy and adds the resampled feed, the program knows in advance, when it is the last day of the month, because it knows the complete daily feed. Am I correct with that assumption?
Anyway, in real life we all know if the 30th of a month will be the last trading day or not. And if this is the case, we know the monthly closing price as soon as the closing bell rings. At this stage I can use that price for any computation and issue an order, which will then be executed at the opening of the next bar.
So again, in my view, the implementation of the
resamplemethod is ecomically incorrect.
|
https://community.backtrader.com/topic/5114/resampling-from-daily-to-monthly-lagging-issue/1
|
CC-MAIN-2022-40
|
refinedweb
| 1,944
| 66.03
|
A Video Sharing App in 48 Hours
Last week, I was invited to an exclusive hackathon to build apps for musicians. The app team I was assigned to was tasked with building a video upload site for Bounce videos. Bounce is a style of music that originated in New Orleans. The app would be called BounceDotCom.com and there were plans to have Big Freedia, the Queen of Bounce, promote it. I knew the organizer could make things happen, so I jumped at the chance.
On the team was me, Brad Huber, and Doron Sherman, from Cloudinary. We had about 48 hours to make something happen. I showed up Monday evening, after the team had begun work, to figure out the plan and how I could help. There was a basic backend in Rails. I was going to come in early the next day and get to work on the frontend in JavaScript and React.
Now, people may know that I prefer ClojureScript myself over JavaScript. But I'm also a pragmatist. Although I think I could have done the job in ClojureScript in probably less time and code, I know that finding another ClojureScripter would be a difficult. It would tie the app to me. Any updates would depend on my schedule. Doing it in pure JavaScript would give much more flexibility, particularly for something where resources are tight and the future is unknown.
The next morning, I got to work setting up the React frontend so I could test it. I used create-react-app to get started. It comes with a dev setup so you can automatically reload your code as you save it. I'm a big fan of fast feedback. The save-and-reload workflow is not as good as you get in ClojureScript, but good enough for a small project like this. In ClojureScript, you don’t lose your current state when new code is reloaded, so there’s much less clicking.
My main focus at first was to get video uploads to work. I knew this would be the biggest challenge. Uploading files from multiple devices and posting to an API I was not familiar with was not something I wanted to mess around with on a short timeline. Plus, the app would be worthless without it. If people couldn't upload a video, the main concept of the site would not exist. Doron was a big help, providing the documentation when I needed it. Cloudinary offers many different solutions, including posting the video yourself or going through one of their widgets. For a 48-hour project, I chose a widget. There was no way I was going to trust that I could do it better in 48 hours.
When you’re working under sane conditions, spending a day to research your best option is well worth the investment. However, hackathon's are not sane. You want to quickly find something acceptable and move on. I found three different widgets that looked like they might work for our use case and our stack. In the end, the one that worked first was super easy. Just include this in the HTML:
<script src="//widget.cloudinary.com/global/all.js" type="text/javascript">
Here’s what it looks like:
And on mobile:
That will fetch a script with everything you need. Here is how I show the widget.
window.cloudinary.openUploadWidget({ cloud_name: CLOUD_NAME, upload_preset: 'default', }, (err, result) => { });
You get the Cloud Name from your Cloudinary account. I had to create a preset in the dashboard that allowed for unsigned uploads so anyone could upload a video from their phone using only the frontend.
To display the videos, I tried cloudinary-react and it worked very easily.
import {Image, Video, Transformation, CloudinaryContext} from 'cloudinary-react'; <Video cloudName={CLOUD_NAME} publicId={VIDEO_ID} poster={`{CLOUD_NAME}/video/upload/${VIDEO_ID}.jpg`} ref={(v)=>this.video = ReactDOM.findDOMNode(v)} onPlay={()=>this.setState({playing:true})} onEnded={()=>this.setState({playing:false})} onPause={()=>this.setState({playing:false})} />
The component worked right out of the box, but we did have to fix some issues. It worked fine on desktop, but on my iPhone, the poster wasn't showing up. That's why I added the poster attribute manually in that code snippet. Problem solved. Luckily, Cloudinary is smart. If you ask for the .jpg file, it will give it to you and generate it if it needs to. If you ask for the .png, it does the same. It works better than you expect, because most services don't do this kind of transformation on the fly. But Cloudinary does, and it works the way you want it to work.
Notice that I set up a React ref for the video. I wanted to be able to stop and start the video in my scripts, so I needed a direct reference to the video element. The react-cloudinary components render out to regular HTML video elements.
How do I know?
I read the code. Yep, it's readable code. And when you're on the super tight deadline of a hackathon, you don't have time to read inefficient English text documentation. You go straight to the render() method. Code doesn't lie.
Another thing I learned from the code was that if the react-cloudinary components don’t understand a prop, they just pass them right down into the video element. So I could put onPlay, onEnded, and onPause callbacks right in there. A really nice touch.
Then I hit a snag.
The Cloudinary upload widget lets you upload videos and images. But there's no way to limit it to only videos. Well, not that I could find. If you said "only .mp4 files", it still lets you take a picture and try to upload it. Then it fails and you lose your picture. For our use case, that is a terrible user experience. People are having fun at a party, they take a really awesome picture they want to share with their friends, but instead the app drops the photo on the floor. The uploader works fine, but our app was never meant to host images. I could write a custom uploader, but I didn’t want to spend the time.
So what did I do?
I made an executive decision: We would support photos. This required a small backend change that I could not make, since I am clueless when it comes to Rails and Ruby. We needed to record whether it was an image or a video along with the media id. I made the changes to the frontend that I could, and allowed images to be displayed, and recorded an issue for the necessary backend changes in the backlog.
Here’s how you display an image from Cloudinary:
<Image cloudName={CLOUD_NAME} publicId={IMAGE_ID} />
Super easy. The only thing I wish for here is that you could tell whether it was an image or a video from the id. I could query the API, but we wanted to minimize the number of requests to keep latency low. Plus, on mobile, each HTTP request is just another chance to fail.
So, by lunchtime, what did we have?
We could list videos (with inline playing and nice thumbnails) and upload new videos. We had a login system so we could identify users. Was it a nice app yet? No. Was it completely easy and straightforward? No, I can't say it was. But it was mostly forward progress. I mean, that was basically four hours of work to get video uploading with transcoding to multiple formats. Oh, and remember, it worked on desktop, iPhone, and Android with the same code. Not bad. And by “not bad”, I mean wow! A video app in a morning! I did not imagine this was possible before I met Cloudinary.
After lunch we started to put it on the open internet so we could have some kind of deployment pipeline. Until then, it was just me serving it from my laptop. We had some snags with that, too. For example, Heroku decided to upgrade its DNS to support SSL which did not allow us to add custom domains for a few hours. But in the end, we had everything hosted on Heroku.
At this point, it was 6 p.m. I had been adding a bunch of stuff to the backend backlog, since as I said, I don't know Rails. Lucky for us, Brad Huber, my teammate, knew plenty. I had to run but I would be back. I was hopeful to have all of my backend requests finished when I returned.
When I came back, it was on again. It was after 10 p.m. Some of my changes had been implemented, but not all. One of the things I requested was to be able to store arbitrary JSON data with users and with videos. In a hackathon, you just don't have time to mess around with designing a data model, and you certainly want to remove any reason to coordinate with the backend team. They have better things to do than add this field and that field to the model. It's much better to just let the frontend store whatever they want.
The break from coding had given me a new perspective on the app. I had been thinking about it mostly as a desktop web app. And our backend reflected that. It required users to register and login to upload a video. But after taking a break and seeing some issues logging in on some phones, I decided we needed to focus 100 percent on the main use case: the app would be demoed at a party the next night. People would want to pull out their phones, film some badass dancing, and upload it to share. They don't care about logging in. If they had to do that, they wouldn't have as much fun.
So how did we solve it?
We got rid of the login. You go to BounceDotCom.com, you click a button, record some video, and upload it. It shows up. You rock. That night, we recruited a couple of designers to draw some designs and implement it.
And then we passed the point of no return.
I hadn't eaten dinner. There was some food left I could scavenge from. And then Doron offered me a bubble tea. Great, I thought. It looked milky and those tapioca balls could sustain me. I started drinking it. And then I realized, too late, that it was coffee. I'm super sensitive to caffeine, especially that late at night. I doubted I would sleep that night.
And I didn't.
I stayed up all night coding on this app. There were several things I needed to do. We wanted a strong viral component, so I added Facebook sharing. To do that you need some Open Graph metadata in your HTML and some JavaScript for Like buttons. I hacked on that through the night. But Cloudinary made this really easy. Here's a snippet from the HTML template:
<meta property="og:image" content="{{&image}}" /> {{#video}} <meta property="og:video" content="{{&video}}" /> {{/video}} That {{&image}} and {{&video}} get replaced on the backend by this: if(pid && type === 'image') { image = `{CLOUD_NAME}/image/upload/${pid}.jpg`; } if(pid && type === 'video') { image = `{CLOUD_NAME}/video/upload/${pid}.jpg`; video = `{CLOUD_NAME}/video/upload/${pid}.mp4`; }
That is, we can generate image URLs and video URLs pretty easily for Facebook to use. Liking and sharing work pretty well. And it was thanks to Cloudinary's ease of use.
And then there was a surprise:
Travis Laurendine, the organizer, showed me that if you send a link over iMessage, it embedded the video right in there. Hello!! That was totally unexpected.
I crashed pretty hard around 10 a.m. I took a four-hour nap. When I woke up, I loaded the app to find it purple and beautiful, thanks to those designers. I fixed some CSS and added a play button. Everything was coming together. I worked on it a little that afternoon, but nothing so intense as before.
So what became of our demo?
In the end, the demo party never happened. It rained pretty hard and Jazz Fest kind of took over everything. But the app is there, still running and waiting.
With the main functionality of the app working, what’s next?
We have plans to migrate away from Heroku and onto a serverless cloud service. We don't really do much on the backend that couldn't be done cheaper and better on Google Cloud Platform or AWS. Using Lambda and Cloudinary, we basically have no overhead.
Low overhead is important for an app like this: if it doesn't take off, it costs next to nothing. But if it does, it will scale effortlessly. The other thing we might do is rewrite the uploading code. We're using the Cloudinary widget and we might want more control of the user experience. We'll want something customized where you click a button and it opens the camera, ready to record. However, I think that it will be complicated to get something working so well on all devices. It will have to wait. The Cloudinary widget works very well. It just does more than we need and those extra features could get confusing at a party.
I have to emphasize again that no one on the team had used Cloudinary before, except Doron, our contact at Cloudinary. Any app has engineering decisions that need to be made. Cloudinary’s employees, documentation, and code helped us stay on track. I am still surprised by how much we figured out and built in less than a day. The tools they give you, including the libraries, dashboard, and APIs, are where it really shines.
I look forward to hacking on this app in the future. And I’ll be dreaming up new ways to put Cloudinary to use.
|
https://cloudinary.com/blog/bounce_hacking_jazzfest_with_social_videos
|
CC-MAIN-2017-39
|
refinedweb
| 2,333
| 76.11
|
I had gourmet-0.14.3 (I love the app, BTW) running on a previous debian (sid) install. After a hardware failure, I had to reinstall, and went with etch 4.0r6.
I have a backup of my old $HOME/.gourmet, but have not restored it yet to keep it from being a source of errors.
I know absolutely nothing about sqlalchemy (if it is even the source of the error) so it may be something simple, but a cursory google search hasn't enlightened me. I have the required packages as listed on the linux install page.
Starting gourmet gives me the following without so much as the splash screen:
Traceback (most recent call last):
File "/usr/bin/gourmet", line 34, in ?
import gourmet.GourmetRecipeManager
File "/usr/share/gourmet/gourmet/GourmetRecipeManager.py", line 6, in ?
import recipeManager
File "/usr/share/gourmet/gourmet/recipeManager.py", line 17, in ?
from backends.db import *
File "/usr/share/gourmet/gourmet/backends/db.py", line 21, in ?
from sqlalchemy import Integer, Binary, String, Float, Boolean, Numeric, Table, Column, ForeignKey, Text
ImportError: cannot import name Text
Any suggestions?
Thanks.
|
https://sourceforge.net/p/grecipe-manager/discussion/371768/thread/81bbd9a7/
|
CC-MAIN-2018-05
|
refinedweb
| 184
| 60.82
|
This document describes the interface to the command line driver.
Additional options for recent updates of Comeau C/C++ can be found at.
The command line driver is invoked by a command of the form
como [options] ifileto compile the single input file ifile. Various file name suffixes are allowed on input files including .c, .C, .cc, .cpp, .CPP, .cxx, and .CXX. On a limited numer of platforms, some other suffixes might be allowed. And of course, object file or library extensions such as .o, .obj, .OBJ, .a, .lib or .LIB are ok, and usually follow the naming conventions for the respective platform being used. On a limited number of platforms, if - (hyphen) is specified for ifile, stdin will be used as the input file.
The options follow below.
Additional options for recent updates of Comeau C/C++ can be found at.
Note that whether strict mode (see below) is the default or not depends upon how you installed Comeau C++, at least for UNIX platforms. Under Windows, the default mode is not strict mode, and furthermore is --microsoft, which allows compatibility with the MS header files. For most platforms, when you invoke Comeau C++, it will tell you what mode it is in.
--preprocess
-E Do preprocessing only. Write preprocessed text to the preprocessing output file, with comments removed and with line control information.
-c Compile only. Do not link. Instiantiate only those templates directly "owned" by this file. Do not confuse this with the --c (note two dashes not one) option which specificies C90 mode. So if you want to compile and link a C90 source you'd write: como --c c.c and to compile but not link a C90 source you'd write: como -c --c c.c or como --c -c c.c and so on.
--no_line_commands
-P Do preprocessing only. Write preprocessed text to the preprocessing output file, with comments removed and without line control information.
-C Keep comments in the preprocessed output. This should be specified after either --preprocess or --no_line_commands; it does not of itself request preprocessing output.
--old_line_commands When generating source output (e.g., with the C-generating back end), put out #line directives in the form used by the Reiser cpp, i.e., "# nnn" instead of "#line nnn".
--dependencies
-M Do preprocessing only. Instead of the normal preprocessing output, generate on the preprocessing output file a list of dependency lines suitable for input to the UNIX make program. Note that when implicit inclusion of templates is enabled, the output may indicate false (but safe) dependencies unless --no_preproc_only is also used.
--trace_includes
-H Do preprocessing only. Instead of the normal preprocessing output, generate on the preprocessing output file a list of the names of files #included.
--define_macro name [ = def ]
-D name [ = def ] Define macro name as def. If "= def " is omitted, define name as 1.
--undefine_macro name
-Uname Remove any initial definition of the macro name. --undefine_macro options are processed after all --define_macro options in the command line have been processed.
--include_directory dir
-Idir Add dir to the list of directories searched for #includes. Files whose names are not absolute pathnames and that are enclosed in "..." will be searched for in the following directories, in the order listed:
symbol-id name ref-code file-name line-number column-number
is written, where ref-code is D for definition, d for declaration (that is, a declaration that is not a definition), M for modification, A for address taken, U for used, C for changed (but actually meaning "used and modified in a single operation," such as an increment), R for any other kind of reference, or E for an error in which the kind of reference is indeterminate. symbol-id is a unique decimal number for the symbol. The fields of the above line are separated by tab characters.
--list lfile
-Llfile Generate raw listing information in the file lfile. This information is likely to be used to generate a formatted listing. The raw listing file contains raw source lines, information on transitions into and out of include files, and diagnostics generated by the front end. Each line of the listing file begins with a key character that identifies the type of line, as follows:
N: a normal line of source; the rest of the line is the text of the line.
X: the expanded form of a normal line of source; the rest of the line is the text of the line. This line appears following the N line, and only if the line contains non-trivial modifications (comments are considered trivial modifications; macro expansions, line splices, and trigraphs are considered non-trivial modifications).
S: a line of source skipped by an #if or the like; the rest of the line is text. Note that the #else, #elif, or #endif that ends a skip is marked with an N.
L: an indication of a change in source position. The line has a format similar to the # line-identifying directive output by cpp, that is to say
L line-number "file-name"
key
where key is 1 for entry into an include file, 2 for exit from an include file, and omitted otherwise. The first line in the raw listing file is always an L line identifying the primary input file. L lines are also output for #line directives (key is omitted). L lines indicate the source position of the following source line in the raw listing file.
R, W, E, or C:
an indication of a diagnostic (R for remark, W for warning, E for error, and C for catastrophic error). The line has the form
S "file-name" line-number column-number message-text
where S is R, W, E, or C, as explained above. Errors at the end of file indicate the last line of the primary source file and a column number of zero. Command-line errors are catastrophes with an empty file name ("") and a line and column number of zero. Internal errors are catastrophes with position information as usual, and message-text beginning with (internal error). When a diagnostic displays a list (e.g., all the contending routines when there is ambiguity on an overloaded call), the initial diagnostic line is followed by one or more lines with the same overall format (code letter, file name, line number, column number, and message text), but in which the code letter is the lower case version of the code letter in the initial line. The source position in such lines is the same as that in the corresponding initial line.
--timing
-# Generate compilation timing information. This option causes the compiler to display the amount of CPU time and elapsed time used by each phase of the compilation and a total for the entire compilation.
--pch Automatically use and/or create a precompiled header file - for details, see the "Precompiled Headers" section in this chapter. If --use_pch or --create_pch (manual PCH mode) appears on the command line following this option, its effect is erased.
--create_pch file-name If other conditions are satisfied (see the "Precompiled Headers" section), create a precompiled header file with the specified name. If --pch (automatic PCH mode) or --use_pch appears on the command line following this option, its effect is erased.
--use_pch file-name Use a precompiled header file of the specified name as part of the current compilation. If --pch (automatic PCH mode) or --create_pch appears on the command line following this option, its effect is erased.
--pch_dir directory-name The directory in which to search for and/or create a precompiled header file. This option may be used with automatic PCH mode (--pch) or with manual PCH mode (--create_pch or --use_pch).
--pch_messages
--no_pch_messages Enable or disable the display of a message indicating that a precompiled header file was created or used in the current compilation.
--pch_mem size The number of 1024-byte units of preallocated memory in which to save precompiled-header state (for platforms where we have chosen not to use memory mapping or where it was not reasonably available). Contact us if you feel that you need to use this option.
--restrict
--no_restrict Enable or disable recognition of the restrict keyword.
--long_lifetime_temps
--short_lifetime_temps
Select the lifetime for temporaries: "short" means to end of full expression; "long" means to the earliest of end of scope, end of switch clause, or the next label. "short" is standard C++, and "long" is what cfront uses (the cfront compatibility modes select "long" by default).
--microsoft
--microsoft_16
--no_microsoft Enable or disable recognition of Microsoft extensions, as appropriate and applicable. --microsoft enables 32-bit mode. --microsoft_16 enables 16-bit mode (though only a few Microsoft 16-bit extensions are supported). When Microsoft extensions are recognized, language features that are not recognized by the Microsoft compiler are disabled by default. In most cases these features can then be enabled through use of other command line options (for example, --bool). Note that these options are only recognized under MS-Windows, or if you have a custom port of Comeau C/C++.
--far_data_pointers
--near_data_pointers
--far_code_pointers
--near_code_pointers Set the default size for pointers in 16-bit Microsoft mode, i.e., the memory model. Ignored in other modes.
--wchar_t_keyword
--no_wchar_t_keyword Enable or disable recognition of wchar_t as a keyword. This option is valid only in C++ mode. The front end can be configured to define a preprocessing variable when wchar_t is recognized as a keyword. This preprocessing variable may then be used by the standard header files to determine whether a typedef should be supplied to define wchar_t.
--bool
--no_bool Enable or disable recognition of bool. This option is valid only in C++ mode. The front end can be configured to define a preprocessing variable when bool is recognized. This preprocessing variable may then be used by header files to determine whether a typedef should be supplied to define bool.
--typename
--no_typename Enable or disable recognition of typename. This option is valid only in C++ mode.
--implicit_typename
--no_implicit_typename
Enable or disable implicit determination, from context, whether a template parameter dependent name is a type or nontype. The default value is implicit inclusion and can be changed via a custom porting arrangement. This option is valid only in C++ mode.
--special_subscript_cost
--no_special_subscript_cost
Enable or disable a special nonstandard weighting of the conversion to the integral operand of the [] operator in overload resolution. This is a compatibility feature that may be useful with some existing code. The special cost is enabled by default in cfront 3.0 mode. With this feature enabled, the following code compiles without error:
struct A { A(); operator int *(); int operator[](unsigned); }; int main() { A a; a[0];//Ambiguous,but allowed with this //option, operator[] is chosen return 0; }
|
http://www.comeaucomputing.com/4.0/docs/userman/options.html
|
CC-MAIN-2015-18
|
refinedweb
| 1,779
| 55.03
|
open faced cube / Box
i am trying to create an open fced box (has all sides except the front)
is there a simple way to do this
seems like it hsould be trivial but i am new to Java3d and trying my best
thanks
benkammy
thats excellent thank you.
is there a way using this to pick the sides that are missing. as in change the way the box appears to lie.
thanks again
benkammy
The easiest way is to use a quad array, adding a quad for each face. If you want the inside to be visible, you need to add quads for the internal face too. The winding order gives the direction of the quad - I think of them as 'rings' of vertices, 00, 01, 11, 10. If you wish to use appearance, you will need to add normals too.
[code]
import javax.media.j3d.GeometryArray;
import javax.media.j3d.QuadArray;
import javax.media.j3d.Shape3D;
import javax.vecmath.Color3f;
import javax.vecmath.Point3f;
public class OpenBox extends Shape3D {
public OpenBox () {
this(1f, 1f, 1f, true);
}
public OpenBox (final float xSize, final float ySize, final float zSize, boolean internal) {
final Point3f[] vertices = new Point3f[8];
// set up vertices at +/- xSize, +/-ySize, +/-zSize
for (int i = 0; i < 8; i++) {
vertices[i] = new Point3f(
(((i & 0x01) << 1) - 1) * xSize,
( (i & 0x02) - 1) * ySize,
(((i & 0x04) >> 1) - 1) * zSize);
}
// quad array for the faces
final int faces = internal ? 10 : 5;
final int vertexCount = faces * 4;
final QuadArray quads = new QuadArray(vertexCount,
GeometryArray.COORDINATES |
GeometryArray.COLOR_3);
// build each face by indexing into the vertex array
for (int face = 0; face < faces; face++) {
final int a = ring0[face];
final int b = ring1[face];
final int c = ring2[face];
quads.setCoordinate(face * 4 + 0, vertices[a]);
quads.setCoordinate(face * 4 + 1, vertices[a + b]);
quads.setCoordinate(face * 4 + 2, vertices[a + b + c]);
quads.setCoordinate(face * 4 + 3, vertices[a + c]);
quads.setColor(face * 4 + 0, colors[face]);
quads.setColor(face * 4 + 1, colors[face]);
quads.setColor(face * 4 + 2, colors[face]);
quads.setColor(face * 4 + 3, colors[face]);
}
setGeometry(quads);
}
// define each 'ring' to walk through the vertex array on a Gray code
private static final int[] ring0 = { 0, 4, 0, 1, 0, 0, 4, 0, 1, 0, };
private static final int[] ring1 = { 2, 1, 4, 2, 1, 1, 2, 2, 4, 4, };
private static final int[] ring2 = { 1, 2, 2, 4, 4, 2, 1, 4, 2, 1, };
// some pretty colors
private static final Color3f[] colors = {
new Color3f(1.0f, 0.0f, 0.0f),
new Color3f(0.0f, 1.0f, 0.0f),
new Color3f(1.0f, 1.0f, 0.0f),
new Color3f(0.0f, 0.0f, 1.0f),
new Color3f(1.0f, 0.0f, 1.0f),
new Color3f(1.0f, 0.5f, 0.5f),
new Color3f(0.5f, 1.0f, 0.5f),
new Color3f(1.0f, 1.0f, 0.5f),
new Color3f(0.5f, 0.5f, 1.0f),
new Color3f(1.0f, 0.5f, 1.0f),
};
}
[/code]
how would i go about changing the appearance of these sides because looking at the code i only see 4 but know there are 5 sides.
i want to add appearances i have made so would i use
quads.setAppearance(appearanceName);
hope you can help - you came up massive trumps before thanks
benkammy
|
https://www.java.net/node/647789
|
CC-MAIN-2015-18
|
refinedweb
| 548
| 66.64
|
The.
thanks for the review! great to see those surveys every once in a while...
i wonder if build a user guide could help with the documentation. the todo and forum pages are useful, but they are more "search" based than a handbook would be. there is the walkthrough, but it could cover a lot more ground and i am not sure the wiki format is appropriate for this...
Okay, I'll need to go through it first and remove all the snide comments to myself though :P
I'll throw up a HTML version up in a temporary place, if Joey approves, I'll ask if there is a namespace on CPAN that I can use as a more permanent home (or Joey can do it if he'd prefer control over it, I don't mind).
|
https://git-annex.branchable.com/devblog/day_360__results_of_2015_user_survey/#comment-6b9b1cc715f51f6b30ab71c07f82db79
|
CC-MAIN-2022-27
|
refinedweb
| 138
| 82.78
|
Hello everyone, welcome back to programminginpython.com. Here I will show you how to implement QuickSort Algorithm in Python. In previous posts, I have covered Insertion Sort, Merge Sort, Selection Sort, and Bubble Sort. Now let?s learn to implement one more sorting algorithm that is QuickSort Algorithm.
QuickSort Algorithm in Python? programminginpython.com
Quicksort is an in-place sorting algorithm, which means it does not require any extra/temporary list to perform sorting, everything will be done on the original list itself. Here in this sorting technique we will select a pivot element and arrange all the items to the right are greater than pivot and elements to the left are lesser than the pivot. Again we do the same step for the left and right elements of the pivot as sublists until all the elements are sorted.
Quicksort when implemented well it is one of the best sorting algorithms, In fact, the sort function provided in most of the language libraries is the implementation of Quicksort itself.
Time Complexity Of QuickSort
Best CaseO (n log n)
Average Case O(n log n)
Worst Case O(n2)
Algorithm:
-.
Program:
def partition(sort_list, low, high): i = (low -1) pivot = sort_list[high] for j in range(low, high): if sort_list[j] <= pivot: i += 1 sort_list[i], sort_list[j] = sort_list[j], sort_list[i] sort_list[i+1],sort_list[high] = sort_list[high], sort_list[i+1] return (i+1) def quick_sort(sort_list, low, high): if low < high: pi = partition(sort_list, low, high) quick_sort(sort_list, low, pi-1) quick_sort(sort_list, pi+1, high)lst = size = int(input(“Enter size of the list: “))for i in range(size): elements = int(input(“Enter an element”)) lst.append(elements)low = 0high = len(lst) – 1quick_sort(lst, low, high)print(lst)
That is it guys, we have now successfully implemented Quick Sort Algorithm.
Feel free to look at some other algorithms here or some programs on lists here or have a look at all the programs on python here.
QuickSort Algorithm in Python
YouTube:
GitHub:
|
https://911weknow.com/quicksort-algorithm-in-python
|
CC-MAIN-2021-04
|
refinedweb
| 334
| 58.72
|
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to create a method that will display values from database to a selection type field?
def _sel_warehouse(self, cr, uid, context=None): obj = self.pool.get('stock.warehouse') ids = obj.search(cr, uid, []) res = obj.read(cr, uid, ids, ['id','name'], context) res = [(r['id'],r['name']) for r in res] return res
This is the closest code that I found but it lacks the flexibility of a sql query. Is there any other approach or do I have to add some line of code to get the desired output? Ex. query that I would like to use "select name from stock_location where usage = 'internal'".
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
|
https://www.odoo.com/forum/help-1/question/how-to-create-a-method-that-will-display-values-from-database-to-a-selection-type-field-32608
|
CC-MAIN-2018-13
|
refinedweb
| 161
| 66.94
|
.
Fuentes ES 321 – Economic Evaluation of Industrial Projects Problem Set 2 No. 1 A certain fluidized‐bed combustion vessel has an investment cost of $100,000, a life of 10 years and negligible market (resale) value. Annual cost of materials, maintenance, and electric power for the vessel are expected to total $8,000. A major relining of the combustion vessel will occur during the fifth year at a cost of $20,000; during this year, the vessel will not be in service. If the interest is 15% per year, what is the lump sum equivalent cost of this project at the present time? Solution: Let Investment Cost, C0 = $100,000 Life, L = 10 years Annual cost of materials, etc AC = $8,000 Cost at year 5, C5 = $20,000 Interest rate, i = 0.15 AC 0 1 2 3 4 5 6 7 8 9 10 AC AC AC AC AC AC AC AC AC AC C0 C5 Let EC = lump sum equivalent cost of the project AC ⎡ 1 − (1 + i ) ⎤ + ( C5 − AC )(1 + i ) −10 −5 EC = C0 + i ⎣ ⎦ = $146,116.27
neonpoint@yahoo.com No. 2 The heat loss through the exterior walls of a certain poultry processing plant is estimated to cost the owner $3,000 next year. A salesman from Superfiber Insulation, has told you, the plant manager, that he can reduce the heat loss by 80% with the installation of %15,000 worth of Superfiber now. If the cost of heat loss rises by $200 per year (gradient) after the next year and the owner plans to keep the building for 15 more years, what would you recommend if the interest rate is 12%? Solution: A. For Present worth of expenses (losses) without insulation: Let First term of arithmetic gradient series A1 = $3000 Gradient G = $ 200 Interest rate i = 0.12 Number of periods n = 15 0 1 2 3 4 5 6 . . . . . 15 A1 A +G 1 A1+2G A1+3G A1+4G A1+5G A1+14G G ⎡1 − (1 + i ) ⎤ A −n n + 1 ⎡1 − (1 + i ) ⎤ −n PWE = ⎢ − n ⎥ i ⎢⎣ i (1 + i ) ⎥⎦ i ⎣ ⎦
= 27, 216.63 B. Present worth of savings, PWS: Only 80% will be saved, so PWS = 0.8 ( 27, 216.67 ) = 21, 773.30 C. Recommendation: Since the present worth of all savings is greater than the investment on the insulation, then Install the Superfiber.
neonpoint@yahoo.com No. 3 You are the manager of a large oil refinery. As part if the refining process, a certain heat exchanger (operated at high temperatures and with abrasive materials flowing through it) must be replaced every year. The replacement and downtime costing the first year is $175,000. Therefore, it is expected to increase due to inflation at a rate of 8% per year for five years, at which time this particular heat exchanger will no longer be needed. If the company’s cost of capital is 18% per year, how much could you afford to spend for a higher‐quality heat exchanger so that this annual replacement and downtime cost can be eliminated? Solution: Let First term in the geometric gradient series A1 = $175,000 Inflation rate f = 0.08 Interest rate i = 0.18 i − f 0.18 − 0.08 Convenience rate (for f ≠ i) icr = = 1+ f 1 + 0.08 = 0.09259 0 1 2 3 4 5 A1 A1(1+f) A1(1+f)2 A1(1+f)3 A1(1+f)4 The Present Worth of all Costs: A ⎡1 − (1 + icr )− n ⎤ PW = 1 ⎢ ⎥ 1+ f ⎣⎢ icr ⎦⎥ 175, 000 ⎡1 − (1.09259 ) ⎤ −5
= ⎢ ⎥ 1.08 ⎣⎢ 0.09259 ⎦⎥ = $626, 050.52 Recommendation : The company could afford a high‐quality heat exchanger worth up to $ 626,050.53
neonpoint@yahoo.com No. 4 A small company purchased now for $23,000 will lose $1,200 each year for the first four years. An additional $8,000 invested in the company during the fourth year will result in a profit of $5,500 each year from the fifth through the fifteenth year. After 15 years the company can be sold for $33,000. a) Determine the IRR and b) Calculate the FW if MARR = 12%. Solution: Cash Flow Diagram: let i = interest rate = internal rate of return (IRR) SV B B B B B B B B B B B 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 A A A A C0 C4 Where: C0 = $23,000 A = $1,200 B = $5,500 C4 = $8,000 SV = $33,000 Equation of values: using a focal date at (4) A⎡ B 0 = −C0 (1 + i ) − (1 + i ) − 1⎤⎦ + ⎡⎣1 − (1 + i ) ⎤⎦ + SV (1 + i ) − C4 4 4 −11 −11 i ⎣ i In solving for i = IRR, we get IRR = 0.10011112 or 10.011% b) for the future worth at (15) using i = 0.12: A⎡ B FW = −C0 (1 + i ) − (1 + i ) − 1⎤ (1 + i ) + ⎡(1 + i ) − 1⎤ + SV − C4 (1 + i ) 15 4 11 11 11
i ⎣ ⎦ i ⎣ ⎦ = $27, 070.36
neonpoint@yahoo.com No. 5 A food processing plant consumed 600,000 kwh of electric energy annually and pays an average of P2.00 per kwh. A study is being made to generate its own power to supply the plant energy required, and that the power plant installed would cost P2,000,000; annual operation and maintenance, P800,000; Other expenses, P100,000 per year. Life of the plant is 15 years; salvage value at the end of life is P200,000; annual taxes and insurance, 6% of the first cost; and rate of interest is 15%. Using the sinking fund method for depreciation, determine if the power plant is justifiable. Solution: By the Annual Worth Pattern (AWP) a) Annual Revenue (savings) = (P2/kwh)(600,000 kwh) = P 1,200,000 b) Annual Expenses = depreciation + annual operation, etc + other expenses + interest on money [ 2, 000, 000 − 200, 000] 0.15 + 800, 000 + 100, 000 + 0.06 2, 000, 000 + 0.15 2, 000, 000 AE = ( ) ( ) (1.15) − 1 15
= P 1,357,830.70 c) Since the annual expenses are greater than the annual savings, the purchase of the power plant is NOT JUSTIFIED.
neonpoint@yahoo.com No. 6 The Anirup Food Processing Company is presently using an outdated method of filling 25‐pound sacks of dry dog food. To compensate for weighing inaccuracies inherent to this packaging method, the process engineer at the plant has estimated that each sack is overfilled by 1/9 pound on the average. A better method of packaging is now available that would eliminate overfilling (and underfilling). The production quota for the plant is 300,000 sacks per year for the next six years and a pound of dog food costs this plant $0.15 to produce. The present system has no salvage value and will last another four years, and the new method has an estimated life of four years with a salvage value equal to 10% of its investment, I. The present packaging operation expense is $1,200 per year more to maintain than the new method. If the MARR is 12% per year for this company, what amount, I, could be justified for the purchase of the new packaging method? Solution: By the Annual Worth Pattern Let I = justified investment 1 a) Annual Savings = ( 300, 000 )( 0.15 ) + 1200 9 = 6200 b) Annual Expenses = capital recovery cost
= [ I − 0.1I ] 0.12 + 0.12 I (1.12 ) − 1 4 = 0.308311I c) Equating the savings to the expenses: 6200 = 0.308311I I = $20,109.56
neonpoint@yahoo.com No. 7 A manufacturing firm has considerable excess capacity in its plant and seeking ways to utilize it. The firm has been invited to submit a bid to become a subcontractor on a product that is not competitive with the one it produces but that, with the addition of $75,000 in new equipment, could readily be produced in its plant. The contract would be for five years at an annual output of 20,000 units. In analyzing probable costs, direct labor is estimated at $1.00 per unit and new materials at $0.75 per unit. In addition, it is discovered that in each new unit, one pound of scrap material can be used from the present operation, which is now selling for $0.30 per pound of the scrap. The firm has been charging overhead at 150% of prime cost, but it is believed that for this new operation the incremental overhead, above maintenance, taxes, and insurance on the new equipment, would not cost 60% of the direct labor cost. The firm estimates that the maintenance expenses on this equipment would not exceed $2,000 per year, and annual taxes and insurance would average 5% of the investment cost. (Note: Prime cost = direct labor + direct materials cost). While the firm can see no clear use for the equipment beyond the five years of the proposed contract, the owner believes it could be sold for $3,000 at that time. He estimates that the project will require $15,000 in working capital (which would be fully recovered at the end of the fifth year), and he wants to earn at least a 20% before‐tax annual rate of return on all capital utilized. a. What unit price should be bid? b. Suppose that the purchaser of the product wants to sell it at a price that will result in a profit of 20% of the selling price. What should be the selling price? Solution: Let CU = unit bid price of the product PART A: a) Annual revenue = 20,000(CU) b) Annual expenses = depreciation + labor cost + maintenance + taxes & insurance + material cost (new and scrap) + overhead + recovery cost of working capital+ interest on money
depreciation = [ 75, 000 − 3, 000] 0.20 = 9, 675.34 (1.20 ) − 1 5
neonpoint@yahoo.com No. 8 The prospective operation for oil in the outer continental shelf by a small, independent drilling company has produced a rather curious patter of cash flows as follows: End of Year Net Cash Flow 0 − $ 520,000 1 – 10 + 200,000 10 − 1,500,000 The $1,500,000 expense at the end of the tenth year will be incurred by the company in dismantling the drilling rig. a) Over the 10‐year period, plot PW versus the interest rate (i) in an attempt to discover whether multiple rate of return exist. b) Based on the projected net cash flows and results in part (a), what would you recommend regarding the pursuit of the project? Customarily, the company expects to earn at least 20% per year on invested capital before taxes. Use the ERR method. Solution: A A A A A A A A A A Cash Flow Diagram: 0 1 2 3 4 5 6 7 8 9 10 C0 C10 a. Equation to solve for the present worth with interest i: A⎡ 1 − (1 + i ) ⎤ − C10 (1 + i ) −10 −10 PW = −C0 + i ⎣ ⎦ 200, 000 ⎡ 1 − (1 + i ) ⎤ − 1,500, 000 (1 + i ) −10 −10 PW = −520, 000 + i ⎣ ⎦ Note: See separate paper for chart (graph) of PW vs. i . At PW = 0, i = 0.2877632 b. Solving for ERR (i’), using ∈ = 0.20 200, 000 ⎡ 520, 000 (1 + i ' ) + 1,500, 000 = (1.2 ) − 1⎤⎦ 10 10
0.2 ⎣ i = 0.21653 or 21.653% > 20% Recommendation: Project is justifiable.
neonpoint@yahoo.com No. 9 Alpha Company is planning to invest in a machine the use of which will result in the following: − Annual revenues of $10,000 in the first year and increases of $5,000 each year, up to year 9. From year 10, the revenues will remain constant ($52,000) for an indefinite period. − The machine is to be overhauled every 10 years. The expense for each overhaul is $40,000. If Alpha Company expects a present worth of at least $100,000 at a MARR of 10% for this project, what is the maximum investment that Alpha should be prepared to make? Solution: Let PR = Present Worth of all revenues 10, 000 ⎡ 5, 000 ⎡1 − (1.10 )−9 9 ⎤ 52, 000 1 − (1.10 ) ⎤ + (1.10 ) −9 −9 PR = ⎢ − ⎥ + 0.1 ⎣ ⎦ 0.1 ⎢⎣ 0.1 (1.10 ) ⎥⎦ 0.1 9
= $375, 228.26 Let PE = Present Worth of all expenses I = investment 40, 000 PE = I + (1.10 ) −1 10
= I + 25098.16 Since the net present worth should at least be P 100,000 PR − PE ≥ 100, 000 375228.26 − ( I + 25098.16 ) ≥ 100, 000 I ≤ $250,130.10 Maximum investment should only be $250,130.10.
neonpoint@yahoo.
|
https://ru.scribd.com/document/367818187/Economy-Prob-Set-2
|
CC-MAIN-2020-45
|
refinedweb
| 2,110
| 70.63
|
.
When you’re finished today your application will: have a link that triggers an action, which asks our server for a string, then uses the updated application state to display the message to the client. A very similar process to this one will be repeated numerous times going forward as we build features into our project. This procedure will often vary in its complexity so getting a good understanding of the basics is very important.
You can follow and view the complete source code for this tutorial here!
Installing new dependencies and creating directories
We’re going to be using quite a few new libraries today so first things first let’s get them installed and talk a bit about what they do.
npm install --save axios react-redux redux redux-thunk
axios axios allows us to make XMLHttp requests to our server. It also supports promises which are crucial to making the powerful and dynamic applications that we have in mind.
redux Redux is a predictable and simple way to manage your application state. It has a small API and allows for some incredible and powerful features like time-travel, hot reloading, and more.
react-redux Brings in the react bindings that we need to use Redux in our application. By default, Redux is not built for React, or any framework really.
redux-thunk Thunk is middleware that allows us to use actions to return a function, delaying the dispatch of those actions, or only dispatching them if certain conditions are met.
Next, before we can get started let’s lay the groundwork for some new files we’ll be creating. In your
src directory create both an
actions and a
reducers folder. We’ll leave them empty for now.
Pit stop: how Redux and React work together
I found this great diagram below that visualizes the way that Redux flows and interacts with our application. Wrapping your head around what we’ll be creating next can be tricky, so let’s start with understanding what it is we’re making.
As you can see the application’s front-end consists of 5 parts:
The UI This is what the client sees and interacts with. When they do something in the application (like update their username) it triggers an action.
Actions The action then does whatever it’s designed to do. Afterward, it sends the results to the reducers. In this example, it’d make an API request to update the username and then it would receive a confirmation message.
Reducers Once the action has completed, it dispatches that information to the reducer by declaring an “action type” and attaching a payload. The action type tells the reducers what has happened, for example
UPDATE_USERNAME, and the payload would be the confirmation message.
Store and State The store contains the application state, which now consists of the confirmation message. Since the state has changed, it then renders the message back to the UI letting the user know it was successful.
Now that we’ve got this out of the way let’s go ahead and start wiring up our actions and reducers.
Putting together the reducer
Let’s start by creating the reducer we’re going to use for this example. Using a bit of forward thinking, we’re going to call this reducer our
auth_reducer and in the next tutorial, we’ll use it to handle our login and authorization actions. For today it’s just going to work with a test action.
Create your first action-type by creating the file
types.js in your
actions directory. In this file we are going to export simply a constant that describes the action we’ll be performing:
export const TEST_ACTION = 'test_action';
After you’ve saved your
types.js file, create another one titled
auth_reducer.js in your
reducers directory and open it up. Now we need to import our action-type that we just made:
import { TEST_ACTION } from '../actions/types';
Next, define the initial state that will be stored. In this case, the message will be empty:
const INTIAL_STATE = { message: ''};
We’re only going to be receiving the “hello world” message from our API today, so our initial state object will only consist of a single item. Lastly, we export a function that contains a switch statement and returns the state.
export default function (state = INTIAL_STATE, action) { switch(action.type) { case TEST_ACTION: return { ...state, message: action.payload.message }; } return state; }
This function takes our initial state and the action and tests to see what action was performed. If it’s our currently uncreated
TEST_ACTION it updates our state with the payload from our API.
Combining reducers and exporting them
To finalize setting up our reducer we need to import it into a new file titled
index.jsinside the
reducers directory. This file collects all of our reducers into a singular location, combines them with the
combineReducers function and exports them as the ‘root reducer’.
import { combineReducers } from 'redux'; import AuthReducer from './auth_reducer'; const rootReducer = combineReducers({ auth: AuthReducer, }); export default rootReducer;
Create the action that will send information to the reducer
Now we need to make the action itself and tell it to dispatch to the reducer. Create a new file titled
index.js in the
actions directory and import axios (to make our HTTP requests) and the action-type we made. Also, define a constant that will contain the root API URL we set in our last tutorial. This will make our axios calls smaller and easier as our API routes get more complex.
import axios from 'axios'; import { TEST_ACTION } from './types'; const API_URL = '';
Axios is simple enough to set up, and we’ll be using promises to dispatch the action to the reducers after the get request has been completed. We’ll also include a quick catch to handle any errors we might get.
export function testAction() { return function(dispatch) { axios.get(`${API_URL}/helloworld`) .then(response => { dispatch({ type: TEST_ACTION, payload: response.data }); }) .catch((error) => { console.log(error); }) } }
As you can see, we use
axios.get to specify the request. This function takes, at the minimum, the URL parameter which should point to the route we want to call on our server.
.then takes the response and uses an arrow function to dispatch the
type and
payload to our reducers.
.catch handles any errors we may run into.
Creating a link to trigger the action
Open up the
Dashboard file you created previously so that you can call the action we just made. Make sure you import your
testAction at the top.
import { testAction } from '../actions/index.js';
Create a new method that will call the test action you just imported:
handleClickHello() { this.props.testAction(); }
Now add a link to the render method that will call
handleClickHello on click.
render() { return ( <div> <h4>This is the dashboard</h4> <a onClick={this.handleClickHello.bind(this)}>Knock Knock</a> </div> ); }
Connect your component to the application state
It’s important to give our Dashboard access to our store and make calling data from it easy. So let’s go ahead and do that now. Import the
connect function from
react-reduxinto your
dashboard file.
import { connect } from 'react-redux';
Now at the bottom, outside the scope our component, we’re going to create a function that will map our application state to the components props.
function mapStateToProps(state) { return { auth: state.auth }; }
We only need to map out the
auth state (from our src/reducers/index.js file) because, well, that’s all we have. Lastly, we need to connect our component to our store:
export default connect(mapStateToProps, { testAction })(Dashboard);
Be sure to remove the
export default from our component so that it reads:
class Dashboard extends Component { //component }
Lastly, let’s throw in a reference to the message itself inside the component so that we can see whatever message is returned to us from the server. You can now manipulate and call the message like any other variable in your component, referencing it through
this.props.auth.
render() { return ( <div> <h4>This is the dashboard</h4> <a onClick={this.handleClickHello.bind(this)}>Knock Knock</a> <h3>{this.props.auth.message}</h3> </div> ); }
Supplying the provider and store to our application
The final thing we need to wire up now is our application itself. We need to create the store which will hold our reducers and we also need to wrap our app in the
Providerfunction from
react-redux. We need to import all these new libraries, so let’s start there. In your
src/index.js file update your dependencies to reflect this:
import { createStore, applyMiddleware } from 'redux'; import { Provider } from 'react-redux'; import React from 'react'; import ReactDOM from 'react-dom'; import reduxThunk from 'redux-thunk'; import { Router, browserHistory } from 'react-router'; import reducers from './reducers/index';
Next we need to create the store using functions from
redux:
const createStoreWithMiddleware = applyMiddleware(reduxThunk)(createStore); const store = createStoreWithMiddleware(reducers);
And lastly, wrap the application in the Provider and set the store:
ReactDOM.render( <Provider store={store}> <Router history={browserHistory} routes={routes} /> </Provider>, document.querySelector('#app'));
Testing your project
We’ve done a lot of work here today. Let’s see if it has all come together. Start both your server and your client application, and navigate to (the dashboard). Now, as long as everything is set up correctly when you click the “knock knock” link, below it, you should see the “hello world” message that our server returned. Congratulations, your client is talking to your very own API!
State of the tutorial series
This segment is a major turning point in the series. We’ve covered most of the fundamentals to creating your own project, including the basics of working in React, making your own RESTful API, and managing application state, stores and redux. You could for all intents and purposes take the information you’ve learned in the last 6 articles and create an entire React based website or app. So, as we move forward, the topics of discussion will be more advanced, covering things such as JWT authentication and mongoDB management. We’ll move quickly through creating new components, reducers, routes, and actions. But if you have any questions you can always ask in the comments or refer back to these posts.
I am really excited to see the outcome of our app, and I expect there to be another 4 or 5 parts before we see the final product come together. I hope to see you there, and thanks for reading!
|
https://www.davidmeents.com/blog/manage-state-connect-to-api-redux-axios/
|
CC-MAIN-2017-26
|
refinedweb
| 1,754
| 54.83
|
import "gopkg.in/src-d/go-git.v4/utils/merkletrie/noder"
Package noder provide an interface for defining nodes in a merkletrie, their hashes and their paths (a noders and its ancestors).
The hasher interface is easy to implement naively by elements that already have a hash, like git blobs and trees. More sophisticated implementations can implement the Equal function in exotic ways though: for instance, comparing the modification time of directories in a filesystem.
NoChildren represents the children of a noder without children.
Equal functions take two hashers and return if they are equal.
These functions are expected to be faster than reflect.Equal or reflect.DeepEqual because they can compare just the hash of the objects, instead of their contents, so they are expected to be O(1).
Hasher interface is implemented by types that can tell you their hash.
type Noder interface { Hasher fmt.Stringer // for testing purposes // Name returns the name of an element (relative, not its full // path). Name() string // IsDir returns true if the element is a directory-like node or // false if it is a file-like node. IsDir() bool // Children returns the children of the element. Note that empty // directory-like noders and file-like noders will both return // NoChildren. Children() ([]Noder, error) // NumChildren returns the number of children this element has. // // This method is an optimization: the number of children is easily // calculated as the length of the value returned by the Children // method (above); yet, some implementations will be able to // implement NumChildren in O(1) while Children is usually more // complex. NumChildren() (int, error) }
The Noder interface is implemented by the elements of a Merkle Trie.
There are two types of elements in a Merkle Trie:
- file-like nodes: they cannot have children.
- directory-like nodes: they can have 0 or more children and their hash is calculated by combining their children hashes.
Path values represent a noder and its ancestors. The root goes first and the actual final noder the path is referring to will be the last.
A path implements the Noder interface, redirecting all the interface calls to its final noder.
Paths build from an empty Noder slice are not valid paths and should not be used.
Children returns the children of the final noder in the path.
Compare returns -1, 0 or 1 if the path p is smaller, equal or bigger than other, in "directory order"; for example:
"a" < "b" "a/b/c/d/z" < "b" "a/b/a" > "a/b"
Hash returns the hash of the final noder of the path.
IsDir returns if the final noder of the path is a directory-like noder.
Last returns the final noder in the path.
Name returns the name of the final noder of the path.
NumChildren returns the number of children the final noder of the path has.
String returns the full path of the final noder as a string, using "/" as the separator.
Package noder imports 3 packages (graph) and is imported by 9 packages. Updated 2019-08-04. Refresh now. Tools for package owners.
|
https://godoc.org/gopkg.in/src-d/go-git.v4/utils/merkletrie/noder
|
CC-MAIN-2019-47
|
refinedweb
| 512
| 64.61
|
21058/how-can-i-build-a-recursive-function-in-python
I'm wondering whether you meant "recursive". Here is a simple example of a recursive function to compute the factorial function:
def factorial(n):
if n == 0:
return 1
else:
return n * factorial(n - 1)
The two key elements of a recursive algorithm are:
If there is list =[1,2,4,6,5] then use ...READ MORE
You can use sleep as below.
import time
print(" ..
Use the shutil module.
copyfile(src, dst)
Copy the contents ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in.
|
https://www.edureka.co/community/21058/how-can-i-build-a-recursive-function-in-python
|
CC-MAIN-2022-40
|
refinedweb
| 112
| 57.98
|
Stash::Manip - routines for manipulating stashes
version 0.02
my $stash = Stash::Manip->new('Foo'); $stash->add_package_symbol('%foo', {bar => 1}); # $Foo::foo{bar} == 1 $stash->has_package_symbol('$foo') # false my $namespace = $stash->namespace; *{ $namespace->{foo} }{HASH} # {bar => 1}.
Creates a new
Stash::Manip
Stash::Manip->new('Foo')->add_package_symbol('%foo')
will create
%Foo::foo.
Removes all package variables with the given name, regardless of sigil.
Returns whether or not the given package variable (including sigil) exists.
Returns the value of the given package variable (including sigil).).
No known bugs.
Please report any bugs through RT: email
bug-stash-manip at rt.cpan.org, or browse to.
Class::MOP::Package - this module is a factoring out of code that used to live here
You can find this documentation for this module with the perldoc command.
perldoc Stash::Manip
You can also look for information at:
Jesse Luehrs <doy at tozt dot net>
Mostly copied from code from Class::MOP::Package, by Stevan Little and the Moose Cabal.
This software is copyright (c) 2010 by Jesse Luehrs.
This is free software; you can redistribute it and/or modify it under the same terms as perl itself.
|
http://search.cpan.org/dist/Stash-Manip/lib/Stash/Manip.pm
|
CC-MAIN-2016-44
|
refinedweb
| 192
| 50.33
|
Python Programming/Basic Syntax< Python Programming(Redirected from Python Programming/Basic syntax)
Case SensitivityEdit
All variables are case-sensitive. Python treats 'number' and 'Number' as separate, unrelated entities.
Spaces and tabs don't mixEditEdit()
ScopeEdit
In a large system, it is important that one piece of code does not affect another in difficult to predict ways. One of the simplest ways to further this goal is to prevent one programmer's choice of a name from blocking another's use of that name. The concept of scope was invented to do this. A scope is a "region" of code in which a name can be used and outside of which the name cannot be easily accessed. There are two ways of delimiting regions in Python: with functions or with modules. They each have different ways of accessing from outside the scope useful data that was produced within the scope. With functions, that way is to return the data. The way to access names from other modules leads us to another concept.
NamespacesEdit's namespace. To demonstrate this, first we can use the type() function to show what kind of object __builtins__ is:
>>> type(__builtins__) <type 'module'>
Since it is a module, it has a namespace. We can list the names within the __builtins__ namespace, again using the dir() function (note that the complete list of names has been abbreviated):
>>> dir(__builtins__) ['ArithmeticError', ... 'copyright', 'credits', ... 'help', ... 'license', ... 'zip'] >>>
Namespaces are a simple concept. A namespace is a particular place in which names specific to a module reside. Each name within a namespace is distinct from names outside of allows
|
https://en.m.wikibooks.org/wiki/Python_Programming/Basic_syntax
|
CC-MAIN-2016-30
|
refinedweb
| 268
| 62.48
|
Greg Wolff wrote:
>
> Stefano,
>
> Thanks for the reply. I'm more confident now that we can find some way to
> work together. It is taking longer than expected to understand what are the
> real technical differences in our approaches and what is just a difference
> of terminology.
>
> Stefano Mazzocchi <stefano@apache.org> writes:
> > [... ]
> > No, at this stage, a complete web-site using Cocoon is a management
> > nightmare. We have to define that damn sitemap... yes, Donald, I'll
> > trade the xml-ized SQLproc docs with a sitemap proposal :)
> >
>
> We keep configuration information local to a directory (in an XML file, of
> course) and allow properties and other configuration information to be
> inherited from parent directories.
>
> > [... ]
> > There was a sentence on the PIA web site (I could not find it anymore,
> > maybe you guys removed it) that said something like "we want to separate
> > code from pages, unlike Apache's proposed XSP which continue the process
> > of including code with content."
>
> This may have come from a draft of our white paper. We do think that
> embedding arbitrary program code in XML documents detracts from
> maintainability and makes security very much harder. With our approach, you
> can define a "safe" tagset (e.g. no active tags that side effect the
> file system) to use in processing documents of uncertain origin. My
> understanding of XSP is that, like JSP, arbitrary Java code could be
> embedded in XML documents.
Ummm, not to be rude, but have you actually read the XSP Layer 1
Specification? You really should, as I think you aren't understanding
how it works.
Again, not to be rude, but you should really read this before you try to
guess at what it does. Makes it lots easier ;-) The "arbitrary" Java
code is defined in XSL/XSP style sheets (called logicsheets), not XML
documents. There is nothing in an XSP document that is not 100% 1.0 XML
compliant. The example looks like:
<?xml version=1.0?>
<test>
<para>You have been here <count/> times.</para>
</test>
That's it. No code, nothing to let anyone know what else is going on.
>
> In a nutshell, here's the difference:
>
> PIA:
> <define element="foo" handler="org.risource.dps.handler.Foo">
> <doc> The action for foo the specified in a java class called Foo </doc>
> </define>
>
> XSP:
> <xsp:foo
> <xsp:logic>
> private static int counter = 0;
> private synchronized int currentCount() {
> return ++counter;
> }
> </xsp:logic>
> </xsp:foo>
Your XSP here is in a logicsheet, separate from content. I actually
think this is an even better approach, because if there was ever a need
for C/C++/Perl/whatever, an appropriate stylesheet could be created with
that code in it, and the XML document is absolutely untouched. If you
have problems with code in the stylesheet/logicsheet, then I think you
are in the wrong market - a stylesheet changes a _lot_, every time the
HTML spec changes, every time look/feel changes, etc. It is meant to be
malleable, because your nice clean XML document is still unchanged.
Maybe you thought the xsp namespace tags were in the XML itself? No.
>
> The Foo class referenced above might very well have exactly the same logic
> as the xsp:logic tag, but keeping it separate allows it to be treated like
> other (dangerous) code. It can be edited and compiled with normal Java
> development tools, can be maintained by the system administrator, and
> doesn't require that the developer has the ability to execute arbitrary
> code in the server process in order to operate properly.
Same for a logicsheet, except the logicsheet can be compiled and cached
completely, without having to check anything but itself to see if a
change has occurred. Over time, your PIA methodology has to continually
check a .class file, which is more (albeit not much, but still more)
overhead and less performance than the stylesheet. IMHO, XSP takes the
good from JSP and discards the bad (the language dependence in the
actual document).
>
> > Than you claimed that "XSP was not a
> > serious approach to ease of web-app maintenance".
> >
>
> I don't think we ever claimed that. In any case, it is a serious
> misunderstanding that I would like to apologize for.
>
> > It's no secret that I got pissed off big time about that sentence. Ask
> > Brian that was proposing me to peer-review the PIA native port for
> > SourceXchange.
> >
> > On the other hand, your kind question changed all this: I'll be more
> > than willing to collaborate with you guys, and explaining why the
> > proposed XSP design ideas are very similar to yours.
> >
>
> Happy to hear that.
>
> > XSP is separated into levels. [... ]
>
> I may need to see some basic examples before I understand how these levels
> are meant to work together.
Again, you should _really_ read the spec. If you have, maybe re-read
it, as it helps here a lot.
>
> >
> > Yes, I totally admit Cocoon's embedding of PIs to link one part to the
> > other sucks big time.
> >
> > Any help in this direction is _very_ appreciated.
> >
>
> In our case, actions are bound to individual tags in a tagset, and tagsets
> are bound to documents (usually based on the file extension) in a
> configuration file. I think the rough equivalent in cocoon would be to have
> a configuration file that lists the processors which should operate on a
> class of documents.
>
> Note that this ability to easily change the processing applied to particular
> documents (without modifying the documents) comes in very handy. For
> example, we use it to pretty-print the underlying XML when someone wants to
> view-source.
This is just a change in processor instruction in Cocoon, also I think
very easy, and getting easy with the improved 2.0 model.
>
> [... ]
>
> > Greg,
> >
> > I've been in close contact with Brian about this and I repeat to you
> > directly what I told him. I modestly think the native port of a XML
> > publishing framework written in Java is today a waste of time for the
> > following reasons:
>
> I appreciate the frankness and agree that porting the entire framework would
> not be productive. However, the barriers to deploying Java in embedded
> devices are more numerous than I care to list here. We're primarily
> interested in having a C based module that can process and serve
> the kinds of XML pages we use in our PIA based Web applications.
> Eventually this might expand into a full publishing framework, but the goal
> is to have something that works well enough to deploy today.
I agree with Stefano here, but good luck... :-)
>
> We don't plan to abandon the Java version of the technology.
>
> > [... ]
> > So, I picture you spending time in porting to native, while we spend
> > time improving our speed and flexibility and by the time you reach the
> > stability point you want, you are behind and you loose all the power you
> > have today to influence this new field.
> >
>
> Well, the reason for doing it through sourceXchange is so we can spend our
> time on the fun stuff :-)
>
> > [... ]
> > I think that it makes a lot of sense, at this point, to confront and
> > help each other, given that Cocoon itself is already enough flexible to
> > stand major architectural changes with small impact on his components.
> >
> > Am I making sense at all?
>
> More than most.
>
> -Greg
-Brett
|
http://mail-archives.apache.org/mod_mbox/cocoon-dev/199912.mbox/%3C3858292D.1E516A46@algx.net%3E
|
CC-MAIN-2017-17
|
refinedweb
| 1,218
| 61.67
|
Useful things missing from vcl_algorithm, etc. More...
#include <vcl_functional.h>
#include <vcl_vector.h>
#include <vcl_ostream.h>
Go to the source code of this file.
Useful things missing from vcl_algorithm, etc.
Added quite a few little functors mainly to do with iterating through maps for example a version of the non-standard select1st and select2nd 30 April 2004 Martin Roberts
Definition in file mbl_stl.h.
Produces a first order sequence from the supplied unary function.
The value produced at a given step is a function of the previous value. E.g. the following is equivalent to using mbl_stl_increments
mbl_stl_sequence(A.begin(), A.end(), vcl_bind1st(vcl_plus<unsigned>(), 1u), 0u);
Definition at line 52 of file mbl_stl.h.
|
http://public.kitware.com/vxl/doc/release/contrib/mul/mbl/html/mbl__stl_8h.html
|
crawl-003
|
refinedweb
| 115
| 61.02
|
Tutorial: Flashing LED using GPIO Output
In this example we'll cover how to build a very simple circuit consisting of an LED and resistor connected up to the GPIO port on your Raspberry Pi. This is a simple exercise will demonstrate visual confirmation that the GPIO port is doing what your python program tells it to do.
This exercise requires just a few simple components, available from ModMyPi:
- Medium Breadboard
- Male to Female Jumper Wires
- LED (Light Emitting Diode) any colour you wish
- 270Ω or higher resisitor, found in ModMyPi’s Ridiculous Resistor Kit
Step 1 – Assembling the Circuit
We will start by assembling the circuit on the breadboard. A Schematic of the circuit is shown below:
. Insert the LED so that the Anode(the longer leg) is in one row of the breaboard and the Cathode(shorter leg) is in another.
2. Insert one end of the resistor into the same row as the LED's Cathode and the other end into another row.
3. Connect a jumper wire from the same row as the LED's Anode (the long leg) to GPIO P17 [Pin 11] on your Raspberry’s pin GPIO port.
4. Connect another jumper wire from the row containing only one leg of the resistor to GPIO GND [Pin 6] on your GPIO port.
When finished your circuit should look similar to the one below:
By default the output pins on the GPIO ports are switched off so the LED will not light up. However you can check your circuit is working correctly by moving the wire in GPIO P17 [Pin 11] to GPIO 3.3V [Pin 1]. If the circuit is correctly wired the LED should light up! If not double check the LED is the right way round! Just remember to move the wire back to GPIO P17 [Pin 11] before you continue.
Step 2 – Making a program in Python
We want the LED to do something more interesting than just turn on! Start by opening a new project Python project. For this exercise we will need to use GPIO Python library which allows you to easily configure and read-write the input/output pins on the Pi’s GPIO port within a Python script. Instructions on how to install the GPIO Python library can be found here.
With the GPIO library is
Since we want the LED to flash on and off we will need to add a time module to allow Python to understand the concept of time. Add the following:
import time
Next we need to set our GPIO pin numbering, as either the BOARD numbering or the BCM numbering. BOARD numbering refers to the physical pin numbering of the headers. BCM numbering refers to the channel numbers on the Broadcom chip. Either will do, but for this exercise I will be using BOARD numbering. If you’re confused, use a GPIO cheat sheet to clarify which pin is which!
GPIO.setmode(GPIO.BOARD)
Now you need to define the GPIO pins as either Inputs or Outputs. In this exercise GPIO P17 [Pin 11] is an output. Tell the GPIO library to set GPIO P17 [Pin 11] to output by adding the following:
GPIO,setup(11, GPIO.OUT)
If you were going to control additional devices you could add more GPIO,setup lines now. In order to switch the pin on to supply 3.3V, known as 'high', use the command GPIO. Output(11, True). To turn the pin off (0V), known as 'low', give the instruction GPIO.output(11, False). Since we want the LED to flash we will need to use the 'while' function so we can loop the program to switch the LED on and off multiple times. Add the following:
while True:
Next we need to use the time module so that after the program turns the LED on it waits 1 second before turning it off, and vice versa. Ensure the lines are indented so that they're included in the while loop (the indentation should be automatic in Python):
GPIO.output(11, True)
time.sleep(1)
GPIO.output(11,False)
time.sleep(1)
The finished program should look like the following in Python:
Save the file as LED.py. You won't be able to run the program from Python since most Linux distributions restrict the use of the GPIO to the root user. In order to run the program open a new terminal window on the Pi and type the following command:
sudo python LED.py
If everything has been done correctly the LED should flash on and off!
If it hasn't worked and the LED isn't flashing don't worry. First check the circuit is connected correctly on the breadboard, then that the jumper wires are connected to the correct pins on the GPIO port. If it still fails to work double check each line on the program is correct remembering that python is case-sensitive and take care to check the indentations are right.
To exit a running Python script, simply hit CTRL+C on the keyboard to terminate.
If everything is working correctly you can now play around with the time variable to change the speed at which the LED flashes on and off. To do so simply change the number inside the brackets of the time.sleep() command . Try time.sleep(0.1) to make the LED flash on and off faster. Remember to save any changes you make to the python program before running it again so that those changes will take effect.
This example is basic but provides a good insight into the fundamental concepts of programming the GPIO ports on your Raspberry Pi.
|
https://www.modmypi.com/blog/tutorial-flashing-led-using-gpio-output
|
CC-MAIN-2017-47
|
refinedweb
| 949
| 69.31
|
Introduced to Java 1.5, Enum is a very useful and well known feature. There are a lot of tutorials that explain enums usage in details (e.g. the official Sun’s tutorial). Java enums by definition are immutable and must be defined in code. In this article I would like to explain use case when dynamic enums are needed and how to implement them.
Motivation
There are 3 ways to use enums:
- direct access using the enum value, e.g. Color.RED
- access using enum name, e.g. Color.valueOf("RED")
- get enumeration of all enum values using values() method, e.g. Color.values()
Sometimes it is very convenient to store some information about
enum in DB, file etc. In this case the enum defined in code must
contain appropriate values.
For example let’s discover the enum Color:
enum Color {RED, GREEN, BLUE;}
Let’s assume that we would like to allow user to create his custom colors. So, we have to handle the table “Color” in database. But in this case we cannot continue using enum Color: if user adds new color (e.g. YELLOW) to DB we have to modify code and add this color to enum too.
OK, we can refuse to use enum in this case. Just rename enum to class and initialize list of colors from DB. But what if we already have 100 thousand lines of code where methods values() and valueOf() of enum Color are used? No problem: we can implement valueOf() and values() manually for the new class “Color.”
Now we see the problem. What if we have to refactor 12 enums like Color? Copy/Paste the new methods to all these classes? I would like to suggest other, more generic approach.
Making enums dynamic
Enum is compile time feature. When we create enum Foo, class Foo that extends java.lang.Enum is generated for us automatically. This is the reason that enum cannot extend other class (multiple inheritance is not supported in Java). Moreover some compiler magic prevents any attempt to manually write a class that extends java.lang.Enum.
The only solution is to write yet another class similar to Enum. I wrote class DynaEnum. This class mimics functionality of Enum but it is regular class, so it can be inherited. A static hash table holds elements of all created dynamic enums.
My DynaEnum contains a generic implementation of valueOf() and values() done using reflection, so users of this class do not have to implement them.
This class allows relatively easy refactoring for use case described
above. Just change enum to class, inherit it from DynaEnum and write
code to initialize members.
Example
Source code of the examples can be found here.
I implemented two examples: DigitsDynaEnum that does not have any added value relative to enum. It is implemented mostly to check that the base class functionality works. DigitsDynaEnumTest contains several unit tests.
package com.alexr.dynaenum; public class DigitsDynaEnum extends DynaEnum<DigitsDynaEnum> { public final static DigitsDynaEnum ZERO = new DigitsDynaEnum("ZERO", 0); public final static DigitsDynaEnum ONE = new DigitsDynaEnum("ONE", 1); public final static DigitsDynaEnum TWO = new DigitsDynaEnum("TWO", 2); public final static DigitsDynaEnum THREE = new DigitsDynaEnum("THREE", 3); protected DigitsDynaEnum(String name, int ordinal) { super(name, ordinal); } public static <E> DynaEnum<? extends DynaEnum<?>>[] values() { return values(DigitsDynaEnum.class); } }
PropertiesDynaEnum is a dynamic enum that reads its members from properties file. Its subclass, WritersDynaEnum, reads a list of famous writers from a properties file WritersDynaEnum.properties.
package com.alexr.dynaenum; import java.io.BufferedReader; import java.io.InputStreamReader; import java.lang.reflect.Constructor; public class PropertiesDynaEnum extends DynaEnum<PropertiesDynaEnum> { protected PropertiesDynaEnum(String name, int ordinal) { super(name, ordinal); } public static <E> DynaEnum<? extends DynaEnum<?>>[] values() { return values(PropertiesDynaEnum.class); } protected static <E> void init(Class<E> clazz) { try { initProps(clazz); } catch (Exception e) { throw new IllegalStateException(e); } } private static <E> void initProps(Class<E> clazz) throws Exception { String rcName = clazz.getName().replace('.', '/') + ".properties"; BufferedReader reader = new BufferedReader(new InputStreamReader(Thread.currentThread().getContextClassLoader().getResourceAsStream(rcName))); Constructor<E> minimalConstructor = getConstructor(clazz, new Class[] {String.class, int.class}); Constructor<E> additionalConstructor = getConstructor(clazz, new Class[] {String.class, int.class, String.class}); int ordinal = 0; for (String line = reader.readLine(); line != null; line = reader.readLine()) { line = line.replaceFirst("#.*", "").trim(); if (line.equals("")) { continue; } String[] parts = line.split("\\s*=\\s*"); if (parts.length == 1 || additionalConstructor == null) { minimalConstructor.newInstance(parts[0], ordinal); } else { additionalConstructor.newInstance(parts[0], ordinal, parts[1]); } } } @SuppressWarnings("unchecked") private static <E> Constructor<E> getConstructor(Class<E> clazz, Class<?>[] argTypes) { for(Class<?> c = clazz; c != null; c = c.getSuperclass()) { try { return (Constructor<E>)c.getDeclaredConstructor(String.class, int.class, String.class); } catch(Exception e) { continue; } } return null; } }
The goal is implemented! We have class that mimics functionality of enum but is absolutely dynamic. We can change list of writers without re-compiling the code. Therefore we can actually store the enum values everywhere and change “enums” at runtime.
It is not a problem to implement something like “JdbcDynaEnum” that reads values from database but this implementation is out of scope of this article.
Limitations
The solution is not ideal. It uses static initialization that could cause problems in multi-class loaders environment. Members of dynamic enums obviously cannot be accessed directly (Color.RED) but only using valueOf() or values(). But still in some cases this technique may be very useful.
Conclusions
Java enum is a very powerful feature but it has serious limitations. Enum values are static and have to be defined in code. This article suggests trick that allows to mimic enum functionality and store “enum” values separately from code.
Lars Tore Hasle replied on Tue, 2010/10/19 - 3:59am
Greg Allen replied on Tue, 2010/10/19 - 5:41am
Hmm, isn't DynaEnum just
DynaEnum<X> extends Map<X, Integer>
without removal? You could implement it with LinkedHashMap
Josh Marotti replied on Tue, 2010/10/19 - 10:05am
Dynamic enum is.... A CLASS!
Maybe it is just me, because I came from using C structs to C++ to Java that I've seen the evolution of structures, enums, and classes, but you are making an enum a class, here. And that class is a map or a collection of some type.
Enums are meant for static values. If you don't have static values, you don't have an enum. Simple as that. What do you 'get' by making a dynamic enum?
Don't get me wrong, there is a time and a place for enums, and I've used them heavily. Hell, I've even had a visitor pattern implemented with enums... but a dynamic enum just seems silly to me. Perhaps you can give me an example on how this 'dynamic enum' gives you something that, say, a dynamically loaded or wrapped map cannot?
Andrew McVeigh replied on Tue, 2010/10/19 - 10:10am
in response to:
Josh Marotti
actually, the subject of extensible enums has a long, illustrious and fairly contentious history. Niklaus Wirth explicitly excluded enums from the Oberon language (the one after Modula2) because it is difficult to extend the behaviour of them and cater for all cases in existing code when extending code. this is one of the drawbacks of enums -- they tend to be used in if and switch statements.
as a use case though, consider you might want to create an extension to your program and add a new file format which is handled. you will then want to add to the enum to add a new value for this.
Chris Knoll replied on Tue, 2010/10/19 - 12:22pm
I agree with Josh. Calling this a 'DynamicEnum' is almost an oxymoron. The point of enumerations are that they are static and 'well known' almost like an interface. Defining things in enums means that eveyrone who uses it knows what the valid items in the enums are (example, having an enum for the different type of transaction modes on a DB connection). Making it dynamic means you don't know what's in the enum (it could change at any time) and therefore, isn't an enum anymore.
So, sorry, but while your code looks interesting, it completely misses the point of Enums.
Looking forward to seeing this added to the JCP in JDK 1.8.
-Chris
Josh Marotti replied on Tue, 2010/10/19 - 3:05pm
in response to:
Andrew McVeigh
Alexander Radzin replied on Sun, 2010/10/24 - 5:05pm
in response to:
Josh Marotti
Alexander Radzin replied on Sun, 2010/10/24 - 5:12pm
in response to:
Josh Marotti
Marcelo Magalhães replied on Tue, 2011/06/14 - 2:13pm
Marcus Bond replied on Tue, 2011/12/20 - 8:22am
Enum helps because rather than say enumerating a traffic light system Red, Amber, Green with integers (but nothing prevents you setting a light to a different arbitrary integer outside of the permitted range without extra validation code) you can create an Enum class (TrafficLightColourEnum) and the compiler will ensure that only TrafficLightColourEnum instances are used, furthermore these can (and probably would) be used in switch statements. Loading up TrafficLightColour values (say Pink, Blue, Purple, Grey) dynamically would be useless.
A custom unique colour code list along with description applied to say email categories would make sense to be dynamic but it is a set / list of values and your code elsewhere doesn't rely on or care about the values, and importantly neither does the compiler.
So, view Enum as something defined at compile time and fixed, if it is dynamic then you have a set, the two are similar but different things where an Enum is effectively a fixed set (not a java.util.Set) that the compiler is aware of.
As an aside, your DigitsDynaEnum having static references to the allowed instances is similar to a trick used to implement enum-esque behaviour (but without switch-case support) in Java prior to the introduction of Enum in 1.5, just remove the Generics and initialise them all via a private constructor in a static block.
|
http://java.dzone.com/articles/enum-tricks-dynamic-enums
|
CC-MAIN-2014-42
|
refinedweb
| 1,670
| 57.16
|
Overview
There are four components in this system:
- Breadboard with LEDs attached to GPIO on a Raspberry Pi
- Web application on Raspberry Pi
- Websockets server application on Raspberry Pi
- Internet browser
- HTTP is used to serve a website from your Pi which comprises some static HTML and JavaScript files.
- A WebSocket is then used to transmit LED ON/OFF commands to the server.
Once the static webpage is loaded into the browser, you will see some ON/OFF buttons to control the LEDs. After loading the page, some JavaScript code running in the browser will open a WebSocket connection back to the server application running on the Pi. When you click one of the buttons, an instruction to enable or disable an LED is sent via the WebSocket to the Pi, where it is processed and the LED state is adjusted.
Connect the LEDs to your Raspberry Pi GPIO
We start by assembling our LEDs on a breadboard as shown in schematic below:
Note that in this example we have chosen to connect our LEDs to pins 16 and 18 on the Raspberry Pi.
You can verify that LEDs are working by running the python script below:
#!/usr/bin/python
import RPi.GPIO as GPIO
import time
#Set GPIO pins as outputs
GPIO.setmode(GPIO.BOARD)
GPIO.setup(16, GPIO.OUT)
GPIO.setup(18, GPIO.OUT)
#Switch GPIO pins ON
GPIO.output(16, True)
GPIO.output(18, True)
#Wait 5 seconds
time.sleep(5)
#Switch GPIO pins OFF
GPIO.output(16, False)
GPIO.output(18, False)
#Reset GPIO pins to their default state
GPIO.cleanup()
Get a Remote Shell
To get remote access to your Raspberry Pi please visit(free sign up) and follow the installation instructions to get remote terminal access to your Pi. When done, locate your device in your dataplicity account and click "Enable wormhole". You'll then need the wormhole address which can be found just above the remote shell (see below).
Run the Project
Step 1 : Click the link below to download all the code necessary for this project.
Step 2 : Run the server.
If you don't have tornado framework installed for python please type in the command below.
sudo pip install tornado
Open the "ws-webio" directory in terminal and run the command below.
sudo python server.py
When done you should see a printout informing you that Tornado server has started.
In your browser type in:
https://<your_unique_number>.dataplicity.io
example:
If the webpage loads correctly you should see it in your browser and get additional printouts in your terminal.
Click the buttons on the page to switch ON/OFF the LEDs !
Taking a closer look
Now that you've got the project set up and running if you would like to know a bit more on how it works "behind the scenes" please read further.
> Inside the web browser
On page load:
1. HTML and JavaScript files are delivered through the HTTP connection.
2. The browser renders HTML, then JavaScript.
3. The running JavaScript creates a WebSocket connection.
After all the files have loaded, the browser component looks like this.
Note there are two communication channels with the Raspberry Pi - one for the static files and one for the long-running WebSocket command channel.
> Inside the application server (Tornado)
In this example, the web application server is running inside a Python web framework called "Tornado" (see). All of the files are prepared in the "ws-webio" folder, with "server.py" hosting the receiving end of the WebSocket.
The diagram below illustrates what is happening inside "ws-webio" folder when the "server.py" application is running, and how each piece of code interacts with each other.
"server.py" comprises:
- An HTTP Server (part of the Tornado framework). This server listens for client connections and forwards them to the core application.
- A Tornado application. This sits inside the HTTP Server and maps given routes to "handlers" for further processing.
- Application settings which tell the application where to find resources, such as the static files we intend to present to the web browser.
- Handlers. This is where the magic happens: when your browser sends a request or an event, handlers are where these requests or events will be processed. In this application, the handlers are responsible for both returning the static files to the web browser upon request, and also for operating the receiving end of the WebSocket and controlling the local GPIOs.
|
https://www.element14.com/community/community/raspberry-pi/blog/2016/08/16/control-raspberry-pi-gpios-with-websockets?ICID=piproject-intermediate-project
|
CC-MAIN-2018-22
|
refinedweb
| 739
| 64.51
|
Python String format()
String formatting is also known as String interpolation. It is the process of inserting a custom string or variable in predefined text.
custom_string = "String formatting" print(f"{custom_string} is a powerful technique")
String formatting is a powerful technique
As a data scientist, you would use it for inserting a title in a graph, show a message or an error, or pass a statement to a function.
Methods for formatting
- Positional formatting
- Formatted string literals
- Template method
String.Format Method
We put placeholders defined by a pair of curly braces in a text. We call the string dot format method. Then, we pass the desired value into the method. The method replaces the placeholders using the values in the order of appearance replace by value:
'text{}'.format(value)
Positional Formatting
We define a string and insert two placeholders. We pass two strings to the method, which will be passed to get the following output:
print("Machine learning provides {} the ability to learn {}".format("systems", "automatically"))
Machine learning provides systems the ability to learn automatically
We can use variables for both the string and the values passed to the method. In the below example code, we define a string with placeholders and two other variables. We apply the format method to the string using the two defined variables. The method reads the string and replaces the placeholders with the given values.
my_string = "{} rely on {} datasets" method = "Supervised algorithms" condition = "labeled"
print(my_string.format(method, condition))
Supervised algorithms rely on labeled datasets
Reordering Values
In the below example, you add index numbers into the placeholders to reorder values. This affects the order in which the method replaces the placeholders.
The method replaces them with the values in the given order.
print("{} has a friend called {} and a sister called {}". format("Betty", "Linda", "Daisy"))
Betty has a friend called Linda and a sister called Daisy
If we add the index numbers, the replacement order changes accordingly.
print("{2} has a friend called {0} and a sister called {1}". format("Betty", "Linda", "Daisy"))
Daisy has a friend called Betty and a sister called Linda
Name Placeholders
We can also introduce keyword arguments that are called by their keyword name.
In the example code below, we inserted keywords in the placeholders. Then, we call these keywords in the format method. We then assign which variable will be passed for each of them, resulting in the following output.
tool="Unsupervised algorithms" goal="patterns" print("{title} try to find {aim} in the dataset".format(title=tool, aim=goal))
Unsupervised algorithms try to find patterns in the dataset
Let's examine this code below. We have defined a dictionary with keys: tool and goal.
my_methods = {"tool": "Unsupervised algorithms", "goal": "patterns"}
We want to insert their values in a string. Inside the placeholders, we can specify the value associated with the key tool of the variable data using bracket notation. Data is the dictionary specified in the method, and tool is the key present in that dictionary.
print('{data[tool]} try to find {data[goal]} in the dataset'.format(data=my_methods))
So, we get the desired output shown below. Be careful! You need to specify the index without using quotes.
Unsupervised algorithms try to find patterns in the dataset
Format Specifier
We can also specify the format specifies inside curly braces. This defines how individual values are presented. Here, we’ll use the syntax index colon specifier. One of the most common format specifiers is float represented by
f. In the code, we specify that the value passed with the index 0 will be a float.
print("Only {0:f}% of the {1} produced worldwide is {2}!". format(0.5155675, "data", "analyzed"))
Only 0.515567% of the data produced worldwide is analyzed!
We could also add
.2f indicating that we want the float to have two decimals, as seen in the resulting output.
print("Only {0:.2f}% of the {1} produced worldwide is {2}!".format(0.5155675, "data", "analyzed"))
Only 0.52% of the data produced worldwide is analyzed!
Formatting
datetime
Python has a module called datetime that allows us to, for example, to get the time and date for today.
from datetime import datetime print(datetime.now())
2020-08-08 06:28:42.715243
But since the format returned is very particular, you could use the format specifier such as
%y-%m-%d-%h-%m to adjust the format to something more familiar to us!
print("Today's date is {:%Y-%m-%d %H:%M}".format(datetime.now()))
Today's date is 2020-08-08 06:29
Interactive Example
In the following example, you will assign the substrings going from the 4th to the 19th character, and from the 22nd to the 44th character of
wikipedia_article to the variables
first_pos and
second_pos, respectively. Adjust the strings so they are lowercase. Finally, print the variables
first_pos and
second_pos.
# Assign the substrings to the variables first_pos = wikipedia_article[3:19].lower() second_pos = wikipedia_article[21:44].lower()
When we run the above code, it produces the following result:
computer science
artificial intelligence
To learn more about positional formatting, please see this video from our course, Regular Expressions in Python.
This content is taken from DataCamp’s Regular Expressions in Python course by Maria Eugenia Inzaugarat.
|
https://www.datacamp.com/community/tutorials/python-string-format
|
CC-MAIN-2022-05
|
refinedweb
| 875
| 57.57
|
26 July 2011 20:14 [Source: ICIS news]
WASHINGTON (ICIS)--The US Environmental Protection Agency (EPA) said on Tuesday it has postponed plans for a new ozone standard, apparently backing away from a rule that was widely opposed by the broad ?xml:namespace>
EPA spokesman Brendan Gilfillan said the agency’s proposal to toughen the nationwide standard for ozone contamination in the air was undergoing further interagency review and would not be made final on Friday as originally planned.
Gilfillan said the agency remains fully committed to changing the standard for ground level ozone and would issue a final rule shortly.
The EPA’s decision to delay its new ozone standard came a week after a broad coalition of
In a press conference on 19 July, top officials of the American Chemistry Council (ACC), the American Petroleum Institute (API), the National Association of Manufacturers (NAM), the Business Roundtable and the US Chamber of Commerce warned that the new EPA ozone rule would put most counties in the nation in violation of the Clean Air Act (CAA) and force wide-scale production rollbacks.
The business groups noted that according to the EPA’s own estimates, US businesses and manufacturers would have to spend as much as $90bn/year (€64bn/year) to comply with the new standard.
They also cited a study by the Manufacturers Alliance contending that the new ozone requirement would create $1,000bn in new compliance costs and eliminate 7.3m jobs over ten years.
Gilfillan said “a new ozone standard will be based on the best science and meet the obligation established under the Clean Air Act to protect health".
But the EPA spokesman also noted that a new ozone rule would be implemented with consideration of “costs, jobs and the economy".
Ross Eisenberg, environmental and energy counsel at the US Chamber of Commerce, welcomed the EPA’s delay of its proposed new ozone standard.
“We hope this means that the administration will take another hard look for doing this rule reconsideration,” he said. “This is not the right time for a new ozone standard, and it doesn’t make sense given the state of the economy.”
The EPA’s planned ozone standard is known as a “reconsideration” rule because the agency decided to revise the 2008 ozone standard established during the administration of President George W Bush.
The Obama EPA said the Bush-era standard was not sufficiently stringent and decided to reconsider that 2008 rule well ahead of the five-year revision that would have been due in 2013, as specified by the Clean Air Act.
Eisenberg said the Chamber would prefer that the EPA simply abandon its plan to revise the ozone rule before the scheduled 2013 review as provided by law and wait until 2012-2013 as the statute stipulates.
He also said business would welcome the EPA’s pledge to “use the best science” in considering a new ozone standard.
“If they were using the ‘best science’ they would be using 2011 science, not the 2008 record,” Eisenberg added.
He said in revising the ozone standard, the EPA was relying on the scientific record used for the 2008 ozone rule and ignoring more contemporary data concerning business compliance with the 2008 standard and current ozone levels across the
|
http://www.icis.com/Articles/2011/07/26/9480093/us-regulator-postpones-plans-for-stricter-ozone-standard.html
|
CC-MAIN-2014-42
|
refinedweb
| 545
| 50.2
|
ceda-cc 1.3
CEDA Conformance CheckerUSAGE
-----
From the command line:
----------------------
Required arguments:
python ceda_cc/c4.py -p <project> -D <directory> ## check all files in directory tree, for project in SPECS, CORDEX, CCMI, CMIP5.
python ceda_cc/c4.py -p <project> -d <directory> ## check all files in directory
python ceda_cc/c4.py -p <project> -f <file> ## check a single file.
python ceda_cc/c4.py --copy-config <dest-dir> ## copy the default configuration directory to <dest-dir> to enable customisation.
Optional arguments:
--ld <log file="" directory=""> ## directory to take log files;
-R <record file="" name=""> ## file name for file to take one record per file checked;
--cae ## "catch all errors": will trap exceptions and record
in log files, and then continue. Default is to
stop after unrecognised exceptions.
--log <single|multi> ## Set log file management option -- see "Single log" and "Multi-log" below.
--blfmode <mode> # set mode for batch log file -- see log file modes
--flfmode <mode> # set mode for file-level log file -- see log file modes
--aMap # Read in some attribute mappings and run tests with virtual substitutions, see also map2nco.py
Environment variables:
CC_CONFIG_DIR ## Set to the location of a custom configuration directory. If unset the default configuration will be used.
After running:
The log file directory may contain hundreds of files with reports of errors. To get a summary, run:
python summary.py <log file="" directory="">
This will produce a listing of errors, the number of times they occur and up to two of the files which contain the error. It is hoped that inspection of one or 2 files will provide enough information to trace the problems which lead to the error reports.
python summary.py -html <log file="" directory="">
This will create a set of html files in the "html" directory, which can be viewed through a browser (enter file://<path to="" html="" directory="" into="" your="" browser).<
Installing as a package:
------------------------
You can also install the code into your Python environment and then use the "ceda-cc" command to invoke c4.py with the same arguments ans described above.
1. If you have "pip" installed simply execute:
$ pip install ceda-cc
or after downloading the tarball
$ pip install ceda-cc-$VERSION.tar.gz
2. If you have the setuptools package you can execute the following from the distribution directory:
$ python setup.py install
If you install ceda-cc in this way you can use the --copy-config command to export the default configuration into a directory where you can edit the configuration.
Called from python:
------------------
The code can also be called from a python script:
from ceda_cc import c4
m = c4.main( args=argList ) # argList is a python list of command line arguments
if not m.ok:
print 'check failed'
else:
print 'success'
print 'DRS dictionary:', m.cc.drs # print drs of last file checked -- not useful in multiple file mode.
e.g.
m = c4.main( args=[ '-p', 'CORDEX', '-f', dataFilePath, '--ld', logFileDirectory] )
## run checks on a single file located at dataFilePath, and write logs to logFileDirectory
DEPENDENCIES
------------
The library can uses the cdms2, python-netCDF4 or Scientific module to read NetCDF files.
By default, it will use the cdms2 module if available. Support for the netCDF4 and Scientific modules has been added recently.
To change the default, change the order in which modules are listed in the "supportedNetcdf" list in file_utils.py
Is available as part of the cdat-lite package ( ).
For python-netCDF4, see
For Scientific see . Note that search engines confuse "ScientificPython" with "SciPy". The SciPy package also contains a netcdf API, but when tested in April 2014 this could not read data from NetCDF 4 files, and so is not supported here.
OUTPUT
------
Single log (default for single file):
-- log of errors found and checks passed
-- "Rec.txt" -- single record summarising results. If no errors are found, the archive directory path for the file will be in this record.
Multi-log (default for multiple files):
-- separate log of errors for each file;
-- summary log, 3 records per file;
-- "Rec.txt" -- single record for each file, as above
Log file modes.
Valid modes are: 'a': append
'n', 'np': new file, 'np': protect after closing (mode = 444)
'w', 'wo': write (overwrite if present), 'wo': protect after closing (mode = 444)
Note that the log files generated in multi-log mode will re-use file names. If running with --flfmode set to 'n','np' or 'wo' it will be necessary to change or clear the target directory. The names of batch log files include the time, to the nearest second, when the process is started, so will not generally suffer from re-use.
Vocabulary lists GCMModelName.txt and RCMModelName.txt are held on the DMI CORDEX site:
To update the CMOR tables use:
"git clone"
VIRTUAL MODE
------------
The virtual mode can be used to validate substituions before adjusting systems which have been used to generate data, or as the first step of a procedure for repairing some classes of errors.
To use this mode, a mapping file is needed. This can be generated by an initial run of the checker with no virtual substitutions. A file named "amapDraft.txt" will be generated. This file should be inspected to ensure that suggested changes make sense.
A typical directive will be of the form:
@var=rlus;standard_name=surface_upward_longwave_flux_in_air|standard_name=surface_upwelling_longwave_flux_in_air
The meaning is: for variable "rlus", set the attribute "standard_name" to "surface_upwelling_longwave_flux_in_air" where the input file has "surface_upward_longwave_flux_in_air".
"amapDraft.txt" should be copied to a new location before running in virtual mode. This draft will only contain directives for errors if the corect value is unique. The suggested corrections to variable attributes will make these consistent with the variable name. If the inconsistency has arisen because a variable has been given the wrong name this will exaggerate the problem rather than solving it. All changes should be checked.
Additional directives can be added. e.g.
@;institute_id=mohc|institute_id=MOHC
will instruct the code to replace "mohc" with "MOHC" in the global attribute "institute_id".
If run with the --aMap flag, the checker will test attributes after making virtual substituions. I.e. there are no changes made to the files at this stage, but results of the tests apply as if changes have been made.
After running in virtual mode, c4.py will generate a file named "attributeMappingsLog.txt" which contains a record for every change to every file. If the results of running in virtual mode are positive, this file can be used to create a script to modify the files, by running "amap2nco.py":
python amap2nco.py attributeMappingsLog.txt /tmp/batch1 /tmp/batch1_corrected
## this will generate a list of NCO commands in "ncoscript.sh", which will apply the changes and create new files in "/tmp/batch1_corrected".
It is recommended that the data values in the corrected files should be checked after running this script.
By default, the amap2nco.py program will generate commands to modify the tracking_id and creation_date global attributes at the same time as making other changed. The "history" attribute is modified by the NCO library.
EXCEPTIONS
----------
The exception handling is designed to ensure that problems analysing one file do not prevent testing of other files.
Traceback information is written to log file.
BUGS
----
The cmds2 library generates a data array for dimensions if there is none present in the file. Tests applied to this library generated array will generate mis-leading error messages. Within cmds2 there is no clear method of distinguishing between library generates arrays and those which exist in the data file. The solution may be to move to using the NetCDF4 module instead.
----------
- Author: Martin Juckes
- License: BSD
- Categories
- Package Index Owner: spascoe, philipkershaw, AgStephens, dacosta1041
- DOAP record: ceda-cc-1.3.xml
|
https://pypi.python.org/pypi/ceda-cc
|
CC-MAIN-2016-50
|
refinedweb
| 1,281
| 64.1
|
Lesson 3. Manipulate and Plot Pandas Dataframes
In this lesson, you will write
Python code in
Jupyter Notebook to describe, manipulate and plot data in
pandas dataframes.
Learning Objectives
After completing this lesson, you will be able to:
- Run functions that are inherent to
pandas dataframes(i.e. methods)
- Query automatically generated characteristics about
pandas dataframes(i.e. attributes)
- Create a plot using data in
pandas dataframes
What You Need
Be sure you have completed the lesson on Importing CSV Files Into Pandas Dataframes.
The code below is available in the ea-bootcamp-day-5 repository that you cloned to
earth-analytics-bootcamp under your home directory.
Methods and Attributes
Methods
Previous lessons have introduced the concept of functions as commands that can take inputs that are used to produce output. For example, you have used many functions, including the
print() function to display the results of your code and to write messages about the results.
print("Message as text string goes here")
You have also used functions provided by
Python packages such as
numpy to run calculations on
numpy arrays.
For example, you used
np.mean() to calculate the average value of specified
numpy array. In these
numpy functions, you explicitly provided the name of the variable as an input parameter.
print("Mean Value: ", np.mean(arrayname))
In
Python, data structures, such as
pandas dataframes, can provide built-in functions that are referred to as methods. Each data structure has its own set of methods, based on how the data is organized and the types of operations supported by the data structure .
A method can be called by adding the
.function() after the name of the data structure (e.g.
structurename.function()), rather than providing the name as an input parameter (e.g.
function(structurename)).
In this lesson, you will explore some methods that are provided with the
pandas dataframe data structure.
Attributes
In addition to functions, you have also unknowingly worked with attributes, which are automatically created characteristics (i.e. metadata) about the data structure or object that you are working with.
For example, you used
.shape to get the dimensions of a specific
numpy array (e.g.
arrayname.shape), which is an attribute that is automatically generated about the
numpy array when it is created.
In this lesson, you will use attributes to get more information about
pandas dataframes and run functions (i.e. methods) inherent to the
pandas dataframes data structure to learn about the benefits of working with
pandas dataframes.
Begin Writing Your Code
From previous lessons, you know how to import the necessary
Python packages to set your working directory and download the needed datasets using the
os and
urllib packages.
To work with
pandas dataframes, you will also need to import the
pandas package with the alias
pd, and you will need to import the
matplotlib.pyplot module with the alias
plt to plot data. Begin by reviewing these tasks.
Import Packages
# import necessary Python packages import os import urllib.request import pandas as pd import matplotlib.pyplot as plt # print message after packages imported successfully print("import of packages successful")
import of packages successful
Set Working Directory
Remember that you can check the current working directory using
os.getcwd() and set the current working directory using
Recall that you can use the
urllib package to download data from the Earth Lab
Figshare.com repository.
For this lesson, you will download a .csv file containing the average monthly precipitation data for Boulder, CO, and another .csv file containing monthly precipitation for Boulder, CO in 2002 and 2013.
# use `urllib` download files from Earth Lab figshare repository # download .csv containing monthly average precipitation for Boulder, CO urllib.request.urlretrieve(url = "", filename = "data/avg-precip-months-seasons.csv") # download .csv containing monthly precipitation for Boulder, CO in 2002 and 2013 urllib.request.urlretrieve(url = "", filename = "data/precip-2002-2013-months-seasons.csv") # print message that data downloads were successful print("datasets downloaded successfully")
datasets downloaded successfully
Import Tabular Data Into Pandas Dataframes
You also learned how to import CSV files into
pandas dataframes.
# import the monthly average precipitation values as a pandas dataframe avg_precip = pd.read_csv("/home/jpalomino/earth-analytics-bootcamp/data/avg-precip-months-seasons.csv") # import the monthly precipitation values in 2002 and 2013 as a pandas dataframe precip_2002_2013 = pd.read_csv("/home/jpalomino/earth-analytics-bootcamp/data/precip-2002-2013-months-seasons.csv")
View Contents of Pandas Dataframes
Rather than seeing all of the data at once, you can choose to see the first few rows or the last few rows using the
pandas dataframe methods
.head() or
.tail() (e.g.
dataframe.tail()).
This capability can be very useful for large datasets which cannot easily be displayed within
Jupyter Notebook.
# check the first few rows in `avg_precip` avg_precip.head()
Describe Contents of Pandas Dataframes
You can use the method
.info() to get more details, or metadata, about a
pandas dataframe (e.g.
dataframe.info()) such as the number of rows and columns and the column names.
# check the metadata about `avg_precip` avg_precip.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 12 entries, 0 to 11 Data columns (total 3 columns): months 12 non-null object precip 12 non-null float64 seasons 12 non-null object dtypes: float64(1), object(2) memory usage: 368.0+ bytes
The output of the
.info() method shows you the number of rows (or entries) and the number of columns, as well as the columns names and the types of data they contain (e.g. float64 which is the default decimal type in
Python).
You can use other methods to produce summarized results about data values contained within the
pandas dataframes.
For example, you can use the method
.describe() to run summary statistics about the numeric columns in
pandas dataframe (e.g.
dataframe.describe()), such as the count, mean, minimum and maximum values.
# run summary statistics on `avg_precip` avg_precip.describe()
Recall that in the lessons on
numpy arrays, you ran multiple functions to get the mean, minimum and maximum values of
numpy arrays. This fast calculation of summary statistics is a clear benefit of using
pandas dataframes over
numpy arrays.
The
.describe() method also provides the standard deviation (i.e. a measure of the amount of variation across the data) as well as the quantiles of the
pandas dataframe, which tell us how the data are distributed between the minimum and maximum values (e.g. the 25% quantile indicates the cut-off for the lowest 25% values in the data).
Sort Data Values in Pandas Dataframes
Recall that in the lessons on
numpy arrays, you can only identify the value that is the minimum or maximum, but not the month in which the value occurred. This is because
precip and
months are not connected in an easy way that would allow you to determine the month that matches the values.
Using
pandas dataframes, you can sort the values with the method
.sort_values(), providing the column name and a parameter for
ascending (e.g.
dataframe.sort_values(by="columname", ascending = True)).
Sort by the values in the
precip column in descending order (
ascending = False) to find the maximum value and its corresponding month.
# sort values in descending order to identify the month with maximum value for `precip` within `precip_df` avg_precip.sort_values(by="precip", ascending = False)
Run Calculations on Columns Within Pandas Dataframes
You can easily recalculate the values of a column within a
pandas dataframe setting the column equal to the result of the desired calculation (e.g.
dataframe.column = dataframe.column + 4, which would add the number 4 to each value in the column).
You can use this capability to easily convert the values in the
precip column from inches to millimeters (where one inch is equal to 25.4 millimeters).
# multiply the values in `precip` column to convert from inches to millimeters avg_precip.precip = avg_precip.precip * 25.4 # print the values in `avg_precip` avg_precip
Plot Pandas Dataframes
In the previous lessons, you saw that it is easy to use multiple
numpy arrays within the same plot but you have to make sure that the dimensions of the
numpy arrays are compatible.
Pandas dataframes make it even easier to plot the data because the tabular structure is already built-in.
In fact, you do not have to create any new variables to plot data from
pandas dataframes.
You can simply reuse your
matplotlib.pyplot code from the
numpy arrays lesson, using the dataframe and column names to plot data (e.g.
dataframe.column) along each axis.
# set plot size for all plots that follow plt.rcParams["figure.figsize"] = (8, 8) # create the plot space upon which to plot the data fig, ax = plt.subplots() # add the x-axis and the y-axis to the plot ax.bar(avg_precip.months, avg_precip.precip, color="grey") # set plot title ax.set(title="Average Monthly Precipitation in Boulder, CO") # add labels to the axes ax.set(xlabel="Month", ylabel="Precipitation (mm)");
Congratulations! You have now learned how to run methods and query attributes of
pandas dataframes. You also recalculated values and created plots from
pandas dataframes.
Optional Challenge 1
Test your
Python skills to:
Convert the
precip_2002column in
precip_2002_2013to millimeters (one inch = 25.4 millimeters).
Create a blue line plot of monthly precipitation for Boulder, CO in 2002. Be sure to include a title and labels for the axes. If needed, refer to the lesson on Plot Data in Python with Matplotlib..
Optional Challenge 2
Test your
Python skills to:
Convert the
precip_2013column in
precip_2002_2013to millimeters (one inch = 25.4 millimeters).
Create a blue scatter plot of monthly precipitation for Boulder, CO in 2013. Be sure to include a title and labels for the axes. If needed, refer to the lesson on Plot Data in Python with Matplotlib..
Compare your plot for 2013 to the one for 2002.
- Does the maximum precipitation occur in the same month?
- What do you notice about the y-axis of the 2013, as compared to the 2002 plot?
Share onTwitter Facebook Google+ LinkedIn
|
https://www.earthdatascience.org/courses/earth-analytics-bootcamp/pandas-dataframes/manipulate-plot-pandas-dataframes/
|
CC-MAIN-2018-34
|
refinedweb
| 1,664
| 56.35
|
NTN 6211-2ZN Retailer|6211-2ZN bearing in India
2z Bearing - Find 2z Bearing.
Search for New & Used Autos, Parts & Accessories. Browse by Make, Model & Year.Contact us
Find Top Products on eBay - Seriously, We have EVERYTHING
Over 70% New & Buy It Now; THIS is the new eBay. Find Great Deals now!Contact us
NTN 6211-Z bearing in Germany
NTN SD3048G bearing Ball Bearing 6211 2RS1.Chrome steel material2.NTN bearings3.Original product home appliance Deep Groove Ball Bearing 6211N 6211 Z 6211 2Z
TWB | SKF 6211 2ZN bearing in India
SKF 6211 2ZN bearing in India are widely used in industrial drive, agriculture, compressors, motors and generators, NTN 6211-2ZN. SKF Bearings. Contact us.Contact us
import ntn 6211-2zn bearing | Product
import ntn 6211-2zn bearing High Quality And Low Price. import ntn 6211-2zn bearing are widely used in industrial drive, agriculture, compressors, motors andContact us
Deep groove ball bearings
In addition to the information provided on this page, consider what is provided under Deep groove ball bearings. For information on selecting the appropriate bearingContact us
Welcome to NTN Bearing
NTN is one of the world's leading manufacturers of bearing products to OEMs, distributors, and end users.Contact us
NTN 6011-2ZN bearing original in Botswana | GRAND
Bearing name:NTN 6011-2ZN bearing We guarantee to provide you with the best NTN 6211-2ZN Bearings,At the same time to provide you with the NTN 6211-2ZN typesContact us
【ntn 6211-2zn bearing plant】-Mongolia Bearing
ntn 6211 2zn bearing is one of the best products we sell, our company is also one of the best ntn 6211-2zn bearing plant. Expect us to cooperate. Our ntn 6211-2znContact us
6211 2Z SKF Deep Groove Bearing - 55x100x21mm
Product Description. This 6211 2Z SKF bearing is a Shielded Deep Groove Radial Ball Bearing with a standard radial internal clearance The bearing's dimensions areContact us
NTN 6211-2Z bearing
NTN 6211-2Z bearing using a wide range, NTN 6211-2Z bearings are mainly used in construction machinery, machine tools, automobiles, metallurgy, mining,Contact us
NTN 51220 Bearings /NTN-bearings/NTN_51220_39859.html
NTN 51220 bearings founded in Japan.The largest merits of our NTN 51220 bearing is best quality,competitive price and fast shipping.MeanwhileNTN 51220 bearing is veryContact us
bearing 6211-2Z,6211-2Z bearings,NTN 6211-2Z,Deep Groove Ball
Description of bearing 6211-2Z,6211-2Z bearings,NTN 6211-2Z, Deep Groove Ball Bearings 6211-2Z. Deep groove ball bearings are the most representative and widelyContact us
【ntn 6211-2z bearing assembly】-Mauritius Bearing
ntn 6211 2z bearing is one of the best products we sell, our company is also one of the best ntn 6211-2z bearing assembly. Expect us to cooperate. Our ntn 6211-2zContact us
SKF 6211-2Z bearings
We, ERIC BEARING Co.,Ltd, as one of the largest exporters and distributors of SKF 6211-2Z bearing in China, FAG bearings, NSK bearings, NTN bearings,Contact us
6211-2ZN SKF bearings
Eric bearing limited company mainly supply high precision, high speed, low friction bearing 6211-2ZN SKF. In the past 12 years, 6211-2ZN SKF is widely used inContact us
NTN 6211 2ZN bearing in South Africa | Product
NTN 6211 2ZN bearing in South Africa High Quality And Low Price. NTN 6211 2ZN bearing in South Africa are widely used in industrial drive, agriculture, compressorsContact us
SKF 6211-2Z bearings
SKF 6211-2Z bearings are rust free, corrosion resistant, highly durable bearings you can get easily at unbeatable prices. Deals in all kind of high tensile, premiumContact us
Shop Bearings & Drive Line. - DB Electrical® - Official Site
Buy High Quality Electrical Parts. Free Standard Shipping & 1 Year Warranty!Contact us
INA 6211.2Z Bearing – Ball Roller Bearings Supplier Bearinga.com
Product Description Brand: INA Bearing Category: Deep Groove Ball Bearings Model: 6211.2Z d: 55 mm D: 100 mm B: 21 mm Cr: 43000 N C0r: 29000 N Grease RPM: 8000 1/minContact us
- KOYO M-451 Factory|M-451 bearing in Ghana
- RHP 7238B/DB Standard Sizes|7238B/DB bearing in Rwanda
- NSK HJ2208 Seals|HJ2208 bearing in Ontario Ottawa
- INA (SCHAEFFLER) TC4052 Limiting Speed|(SCHAEFFLER) TC4052 bearing in Guatemala
- KOYO NRB TRA-3446 Manufacturers|NRB TRA-3446 bearing in Central Africa
- KOYO 7202C Cross Reference|7202C bearing in Hong Kong
|
http://welcomehomewesley.org/?id=5404&bearing-type=NTN-6211-2ZN-Bearing
|
CC-MAIN-2018-47
|
refinedweb
| 720
| 50.46
|
getpeereid()
Get the effective credentials of a UNIX-domain peer
Synopsis:
#include <sys/types.h> #include <unistd.h> int getpeereid( int s, uid_t *euid, gid_t *egid );
Since:
BlackBerry 10.0.0
Arguments:
- s
- A UNIX-domain socket (see the UNIX protocol) of type SOCK_STREAM on which either you've called connect(), or one returned from accept() after you've called bind() and listen().
- euid
- NULL, or a pointer to a location where the function can store the effective user ID.
- egid
- NULL, or a pointer to a location where the function can store the effective group ID.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:.
Errors:
- EBADF
- The argument s isn't a valid descriptor.
- ENOTSOCK
- The argument s is a file, not a socket.
- ENOTCONN
- The argument s doesn't refer to a socket on which you've called connect(), or isn't one returned by listen().
- EINVAL
- The argument s doesn't refer to a socket of type SOCK_STREAM, or the system returned invalid data.
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
|
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/g/getpeereid.html
|
CC-MAIN-2015-06
|
refinedweb
| 201
| 58.99
|
More updates…
In the original demo I showed getting your data from a database directly from the web tier. Many enterprise customers have found that it their systems are more maintainable and secure if they isolate the database access behind a set of web services possibly in a DMZ. This means that no apps talk directly to the database. Yet there is often a need to add application specific validation and application logic as well as data shaping and aggregation in the web-tier.
To show this, I have refactored the example application from the first part of this walk through to access its data from a WCF service rather than EF directly. Notice that all the other UI bits stayed the same,
Defining the Service
Let’s start by defining the WCF service. In a real word application this service is likely defined by another group and you are only allowed to access it, not modify it.
Right click on the solution and add a new project… WCF Service Application. I called it MyApp.Service, but you can choose anything you’d like.
First we create an Entity Framework model for our database.. this is done exactly the same way as part 2, but this time it is part of our Service rather than in the web-tier. The demo would work exactly the same no mater what source of data you use… EF was just easy for me to get going, it is not required for this scenario as we are encapsulating everything behind a the WCF services layer.
Next, we define the interface for our service
[ServiceContract]
public interface ISuperEmployeeService
{
[OperationContract]
IEnumerable<SuperEmployee> GetSuperEmployees(int page);
[OperationContract]
SuperEmployee GetSuperEmployee(int empId);
[OperationContract]
void UpdateEmployee(SuperEmployee emp);
}
Then we implement it…
public IEnumerable<SuperEmployee> GetSuperEmployees(int page)
{
using (var context = new NORTHWNDEntities()) {
var q = context.SuperEmployeeSet
.OrderBy(emp=>emp.EmployeeID)
.Skip(page * PageSize).Take(PageSize);
return q.ToList();
}
}
Notice here we are implementing paging by taking a page parameter… After a brief inventory of real world services on the net, i find this a very common pattern. It is very easy with EF to access just the page of data we want.
Consuming the Service
Now let’s consume this service from the web-tier. The reason we do this here is to get all the benefits of the RIA Services in terms of validation logic, etc and be able to customize and arrogate the data so the view is just right for the client.
First, we define a SuperEmployee type that is shaped just right for the client. In this simple example I left it pretty much the same as the service returned, but you can use any shape you’d like.
Notice we are using attributes to specify the different validation we want to have done on this data on the client AND the web-tier.;}
}
Now, let’s add a reference to the service we just created.
Right click on the project and select Add Service Reference..
Now we modify our DomainService…
1: public class SuperEmployeeDomainService : DomainService
2: {
3: SuperEmployeeServiceClient Context = new SuperEmployeeServiceClient();
4:
5: public IQueryable<SuperEmployee> GetSuperEmployees(int pageNumber)
6: {
7: return this.Context.GetSuperEmployees(pageNumber)
8: .Where(emp => emp.Issues > 100)
9: .OrderBy(emp => emp.EmployeeID)
10: .Select(emp =>
11: new MyApp.Web.SuperEmployee()
12: {
13: EmployeeID = emp.EmployeeID,
14: Gender = emp.Gender,
15: Issues = emp.Issues,
16: LastEdit = emp.LastEdit,
17: Name = emp.Name,
18: Origin = emp.Origin,
19: Publishers = emp.Publishers,
20: Sites = emp.Sites,
21: }).AsQueryable();
22: }
Notice in line 1, we no longer need to derive from EFDomainService as there is no DAL access code in this web-tier now… we have factored all that into the service.
In line 3, we create an instance of the WCF web Service Proxy..
In line 5, you can see we are taking a page number – we will need to pass that from the client.
In line 7 we are creating a simple LINQ query to change the shape of the data we get back from the service.
In the Silverlight Client
Now, to consume that in the silverlight client is very easy.. We just make a few tweaks to the Home.xaml we created in part 2..
1: <riaControls:DomainDataSource x:Name=”dds”
2: AutoLoad=”True”
3: QueryName=”GetSuperEmployeesQuery”
4: LoadSize=”20″>
5:
6: <riaControls:DomainDataSource.QueryParameters>
7: <datagroup:ControlParameter ParameterName=”pageNumber”
8: ControlName=”pager”
9: RefreshEventName=”PageIndexChanged”
10: PropertyName=”PageIndex”>
11: </datagroup:ControlParameter>
12:
13: </riaControls:DomainDataSource.QueryParameters>
14:
15: <riaControls:DomainDataSource.DomainContext>
16: <App:SuperEmployeeDomainContext/>
17: </riaControls:DomainDataSource.DomainContext>
18:
19: <riaControls:DomainDataSource.GroupDescriptors>
20: <datagroup:GroupDescriptor PropertyPath=”Publishers” />
21: </riaControls:DomainDataSource.GroupDescriptors>
22:
23:
24:
25: </riaControls:DomainDataSource>
26:
Notice in line 7 we are getting the page number from the GetSuperEmployees() method from the DataPager… This is set up such that as the DataPager changes pages, the DDS request data from the server.
If you are paying close attention to the whole blog series, you will notice I removed the sorting and filtering for this example. I did this because you are basically limited by the expressiveness of your back-end data… If your back end data does not support sorting and filtering, then it is tough to do that on the client! You could do caching on the web tier but that is a subject for another day.
As an fun exercise, assume you could change the WCF Service definition, how would you add filtering? You could follow the exact pattern we use for paging. Add an argument to the WCF Service, add the same one to the GetSuperEmployees() query method on the server, then add another ControlParamater to the DDS getting its data from a control on the form. Pretty easy! Let me know if you try it.
The final step is to bind the DataPager to a shim collection, just to make it advance.
1: <data:DataPager x:Name=”pager” PageSize=”1″ Width=”379″
2: HorizontalAlignment=”Left”
3: DisplayMode=”FirstLastPreviousNext”
4: IsEnabled=”True”
5: IsTotalItemCountFixed=”False”
6: Margin=”0,0.2,0,0″>
7: <data:DataPager.Source>
8: <paging:PagingShim PageCount=”-1″></paging:PagingShim>
9: </data:DataPager.Source>
10: </data:DataPager>
You can find the implementation of PagingShim in the sample. (thanks to David Poll for his help with it).
Hit F5, and we have something cool! It looks pretty much the same as the previous app, but this one gets all its data via WCF!
Excellent – thanks Brad. I personally think this is a common scenario – so it’s good to see post on it.
Loving the series, thanks again
Thanks for the serie!
But, Where is the part 7? or, I’m lost…?
> But, Where is the part 7? or, I’m lost…?
Mike — Ahh, yes.. mysterious post 7… well, I sort of screwed up and posted 8 before 7… Not sure how I will resolve that yet.. i might just post 7 as 9 and shift everything down.. thoughts?
Post 7 as 7 🙂 Leave 8 as 8 and 9 will just have to stay 9
Hi,
This article is very nice. I have to try this .
Thanks,
thani
Great series.
Could you provide an example of how to hookup master-detail (Customers-Orders) using data binding with the Silverlight DomainDataSource and a pair of DataGrids. Specifically how do you reference, say the customer number for the SelectedItem in the Customers grid to use as a parameter value in the Orders grid.
Can you have more that one DomainDataSource for the same DataContext? I can see how to do this in code but not xaml.
Any thoughts would be appreciated.
Thanks.
Brad, I’d liked to see your sample done with Prism – showing Prism with DomainServices would be a good topic in your series 🙂
> Could you provide an example of how to hookup > master-detail (Customers-Orders) using data
> binding with the Silverlight DomainDataSource
Mark — I will look into an example like that — in the mean time, here is the section from the RIAServices overview doc that includes information on the IncludeAttribute.. That should help.
4.8 Using the IncludeAttribute
The IncludeAttribute can be applied to association members to shape the generated client entities as well as to influence serialization. Below we explore two uses of this attribute.
4.8.1 Returning Related Entities
When querying entity Types with associations to other entities, often you’ll want to return the associated entities along with the top level entities returned by the query. For example, assume your DomainService exposes Products, and for each Product returned you also want to return its associated ProductSubCategory. The first thing to do is ensure that the associated entities are actually returned from the data source when Products are queried. The mechanism used to load the associated entities is a DAL specific thing. In this LINQ to SQL example, the code would be:
public IQueryable<Product> GetProducts()
{
DataLoadOptions loadOpts = new DataLoadOptions();
loadOpts.LoadWith<Product>(p => p.ProductSubcategory);
this.Context.LoadOptions = loadOpts;
return this.Context.Products;
}
With this code, all ProductSubCategories are returned from the DAL when Products are queried. The next step is to indicate to .NET RIA Services that a ProductSubCategory entity should be code generated, as should the bi-directional association between the two Types. To do this you need to apply the IncludeAttribute to the association member.
IncludeAttribute has both serialization semantics and data-shape semantics. For both serialization of results to the client as well as the client object model we code-gen, .NET RIA Services will only ever expose data and types from the service that you have explicitly exposed from your service.
• If a Type is exposed via a query method from the service a corresponding proxy class will be generated on the client for that Type. In addition any associations referencing that Type from other exposed types will also be generated.
• If a Type is not exposed by a query method, but there is an association on another exposed Type that is marked with IncludeAttribute, a corresponding proxy class will be generated on the client for that Type.
Those points define what types are defined on the client and their shapes. In addition to these data shape semantics, IncludeAttribute also has serialization semantics : during serialization, only associations marked with IncludeAttribute will actually be traversed. All of this ensures that only information you’ve explicitly decided to expose is exposed.
Therefore, we need to apply the IncludeAttribute to the Product.Category association member. We do this by adding the attribute to the buddy metadata class:
internal sealed class ProductMetadata
{
[Include]
public ProductSubcategory ProductSubcategory;
}
With this in place we’ll get a ProductSubCategory entity generated on the client, as well as the Product.ProductSubCategory association member. This works the same way for collection associations. For example, to include all Products when querying a ProductSubCategory, you’d put the IncludeAttribute on the ProductSubCategory.Products member.
4.8.2 Denormalizing Associated Data
IncludeAttribute can also be used for another purpose – data denormalization. For example, assume that in the above scenario we don’t really need the entire ProductSubCategory for each Product, we only want the Name of the category.
internal sealed class ProductMetadata
{
[Include("Name", "SubCategoryName")]
public ProductSubcategory ProductSubcategory;
}
The above attribute results in a “SubCategoryName” member being generated on the client Product entity. When querying Products, only the category Name is sent to the client, not the entire category. Note that the ProductSubCategory must still be loaded from the DAL as it was in the previous example.
Brad: How do you deal with authentication delegation here, specifically within a Windows-auth domain model?
For example:
Client (web browser) is on Machine A.
Web Tier is on Machine B.
Database (via EF) (or wcf service on database) is on Machine C.
Only with Active Directory delegation can the user credentials make it all the way from Machine A to Machine C since they’ll be going via Machine B.
What’s the recommended security model if you can’t setup delegation in this case? Does the database need to be connected via SQL logins? How do you deal with web services on other machines that require Windows-auth?
We’ve really been struggling with this since we’re not able to setup delegation in our domain.
Brad,
I created a sample application using the pattern you described in tihs post. When i page it gets me data till 6th page and after that it spins for a while with activity bar shown eventually it stops with System.TimeoutException.
When i debugged i found that in the generated class it never makes a call to the WCF service.
public EntityQuery<ZipCode> GetZipCodesQuery(int pageNumber)
{
Dictionary<string, object> parameters = new Dictionary<string, object>();
parameters.Add("pageNumber", pageNumber);
return base.CreateQuery<ZipCode>("GetZipCodes", parameters, false, true);
}
I had a breakpoint in the WCF service
IEnumerable<ZipCode> IZipCodeService.GetZipCodes(int page)
{
var context = new Mid_MarketEntities();
var q = context.ZipCode.OrderBy(zip => zip.ZipID).
Skip(page * PageSize).Take(PageSize);
return q.ToList();
}
but it never made into this function after the 6th page.
Not sure what am I missing. Is there a way to add trace to RIA services to troubleshoot such kind of problems.
Thanks,
Tazul
I did see this a couple of times and I attributed it to the development-web server in VS… Are you using that or IIS? Can you let me know if this happens under IIS as well?
Thanks for the reply. I am using development web server. I haven’t tried in IIS. Do you think this will not occur in IIS?
I tried after deploying both the WCF service and Web project on IIS and I am having same problem. I used 20 records in DDS and same in the WCF service.
<riaControls:DomainDataSource x:
<riaControls:DomainDataSource.QueryParameters>
<datagroup:ControlParameter
</datagroup:ControlParameter>
</riaControls:DomainDataSource.QueryParameters>
>
</riaControls:DomainDataS);
return q.ToList();
}
After the 5th page (100 records) when i click next, it does the same thing. I tried changing
the pagesize and the load size but after fetching around 100-150 records it fails.
I found the problem and fixed it. It was with WCF service. Domain Service was not closing the channel and was open. By default WCF has 10 max concurrent sessions. While paging Domain service makes calls to the WCF service to fetch next batch but doesn’t close the channel. So I made a change to the WCF function to close the channel after the call is);
context.Close(); //This fixed the problem.
return q.ToList();
}
Excellent! thanks… I will fix my sample.
Hi BradA,
In your code you have Update and Insert functions
public void InsertSuperEmployee(SuperEmployee superEmployee)
{
Context.UpdateEmployee(Convert(superEmployee));
}
public void UpdateSuperEmployee(SuperEmployee currentSuperEmployee)
{
Context.UpdateEmployee(Convert(currentSuperEmployee));
}
but you never called these on client.
Your these functions are not generated on client as they are not IQuerable nor they carry the Attribute Query with them, i mean they donot fulfill the requirements of getting gnerated on the client, neither are they generating in your code.
My question is in this scenario what are your plans on how to call these domain methods from the client code.
Aashish Gupta – Those are called when you are add a new SuperEmployee or update data on an super employee. They are part of the changeset processing.
Hi,
It would be great if you can shed any light on this…
The sort properties on a domaindatasource are lost after a paging operation
The properties are there , but they are not present in the final query on the sql server and thus the data is no longer sorted..
any ideas on how to fix this ???
I have already created a thread here on silverlight.net forum
appreciate any kind of help
Thanks for this series Brad – I’m really learning alot.
Some feedback – I find anything that’s not strictly presentation living in the Xaml to be extremely confusing and hard to follow. I’m sure there are other ways to do the same thing (that are probably less demo-friendly), but more straightforward. Xaml in general is very difficult to read if you’re not an expert – especially if its written in one of the many abbreviated ways it allows.
I’ve been trying to take one of these MyApp drops and steal from it piece by piece to get a simple app of my own up and running. Things like this in the Xaml are very difficult to understand:
>
I know there’s some kind of context object that RIA Service is generating for me, but it’s really difficult for me to understand how to change this to get it working with my service. This feels like that whole ObjectDataSource thing from ASP.NET that I refused to take part in. Please tell me there are ways to do this in the code behind with more of a MVP or MV-VM pattern?
Thanks for all this content!
|
https://blogs.msdn.microsoft.com/brada/2009/07/17/business-apps-example-for-silverlight-3-rtm-and-net-ria-services-july-update-part-8-wcf-based-data-source/
|
CC-MAIN-2017-13
|
refinedweb
| 2,822
| 56.45
|
It's not the same without you
Join the community to find out what other Atlassian users are discussing, debating and creating.
I have been using Behaviours to make fields required on certain projects without altering the Field Configurations (which are used across many projects). It works great (create behaviour, map to particular project, and set field to required). However, I can't figure out how to do this for just one transition screen in a project. Is it possible to set a field to required for only for 1 screen in a project? I know that I can do this by adding a validator to the workflow, but I don't want to change the workflow either, as it too is used by many projects.
Hi Morgan
You can use getFieldScreen(), in your behaviour, in order to determine in which screen you are
Hey Thanos - thanks. I just realised you may have given me this answer before! The problem is I don't know how to use 'getFieldScreen()' to do this. If it's a simple thing, could you tell me what code I can enter in?
For example, if I have created a behaviour, as described above, that makes a field required for a particular project, is it just a case of clicking 'Add Serverside Script' in that behaviour and typing in 'getFieldScreen(NAMEofSCREEN) and that's it?
Sorry for not being very code-savvy.
Cheers,
Morgan
Hey Morgan
So, you can associate screen with issue operations, then the getFieldScreen(), no params, will return to you the name of the screen, let's say Bug Screen, and after from the moment that you get the name of the screen you can add some logic to it (string comparisons). A 'lazy' way is to add some debugging
log.debug("Name of the screen " + getFieldScreen())
and check the names of the the screen you are into.
Hope that helps.
Hi Thanos,
I appreciate your help but I think you have over-estimated my coding skills. I'm not a coder - I administer Project Management tools here at Demonware. I understand associating screens with issue operations as that is part of JIRA's functionality.
As to adding logic and string comparisons, I'm afraid I don't know about that. It would be extremely useful for me to have some code that I can add to behaviours (by clicking 'Add server-side script' I guess?) which would limit the behaviour to a named screen.
Essentially what I was hoping for was some code which I can paste into 'Add Server-side script'
which effectively say 'Only apply this behaviour to screen X'.
To my mind the actual behaviour itself then wouldn't require any scripting as I can do that by adding the field and setting it to 'required' and mapping it to a project (as I've been doing until now).
Or does adding a script over-ride all of that?
I'm also not sure if what you've mentioned above should be entered in as a script, or into the 'Class or File:' and 'Method' fields. Nor do I know what I would enter into the 'Class or File:' field if that were the case.
As you can see, I don't know much.
Morgan
Plan B, if you want to use a server side validation or the screen you use is not unique, as you already presumed, is to associate a validator for the specific transition or use SR simple scripted validator
Hi again!
I am familiar with validators - unfortunately I can't make changes to the workflow for this project as the workflow is shared with other projects where the field I want to make mandatory should not be mandatory.
As Thanos said...
if (fieldScreen.name == "Resolve Issue) { ... // do whatever behaviour you want }
Thanks Jamie. It seems that I may have to go and learn Groovy, specifically for using with Scriptrunner and JIRA. Do you have any recommendation as to where a beginner with little coding experience should start?
Hi... in my experience, just pick a task and go for it. But previously there have been several questions about which resources to use, eg and the SR docs.
If you use a scripted validator, you can include the project you want to validate as part of the condition; that way other projects can use the same workflow without the field requirement impacting them.
Something very basic such as:
issue.getProjectObject().getKey() != "MYPROJECT" || cfValues['Some Custom Field'] != null
Thank you Jeremy - this was just the sort of thing I was hoping for. Unfortunately, when I tried putting this code in as 'Custom Script Validator' (with the project key in place of "MYPROJECT" and the custom field name in place of "Some Custom Field") I get the error:
The variable 'cfValues' is undeclared.
Any idea why that would be?
Not offhand, I copied that part straight out of the examples for simple scripted validators. However, I don't use those much; here is a 'regular' scripted validator that should work:
import com.atlassian.jira.issue.Issue; import com.atlassian.jira.ComponentManager; import com.atlassian.jira.issue.CustomFieldManager; import com.atlassian.jira.issue.fields.CustomField; import com.atlassian.jira.project.version.Version; import com.opensymphony.workflow.InvalidInputException; ComponentManager componentManager = ComponentManager.getInstance(); CustomFieldManager customFieldManager = componentManager.getCustomFieldManager(); if(issue.getProjectObject().getKey() == "MYPROJECT") { CustomField mcf = customFieldManager.getCustomFieldObjectByName("My Custom Field"); Version mcfv = issue.getCustomFieldValue(mcf); Boolean hasCF = (mcfv != null); if (!hasCF) { invalidInputException = new InvalidInputException("\"My Custom Field\" is required for this transition."); } }
Thanks again Jeremy - this gives me:
...the variable 'componentManager' is undeclared
and
...cannot find matching method com.atlassian.jira.issue.fields.Customfield#isEmpty(). Please check if the declared type is right and if the method exists.
I realise this isn't a forum for debugging code. Just thought I'd say what I got for anyone else viewing this.
Cheers,
Morgan
My bad, I removed ComponentManager from my sample without paying close attention to what it was used for. I've edited the previous comment to fix that.
Well that took care of the first error, but still getting:
...cannot find matching method com.atlassian.jira.issue.fields.Customfield#isEmpty(). Please check if the declared type is right and if the method exists.
Sigh. Sorry, I went too fast there and forgot to fetch the custom field value from the issue when I made the code generic. What type of custom field is it, a basic string?
Hey - it's a 'Version Picker (single version)' type field in this case. The name of the custom field is 'Version Introduced'.
Okay, try that; I'e only ever worked with the multi-version type that returns a list, but hopefully this one will return a single Version, or "null" if not set.
Well, now for the line - Version mcfv = issue.getCustomFieldValue(mcf);
I'm getting:
Cannot assign value of type java.lang.object to variable of type com.atalassian.jira.project.version.Version
Sorry! And thank you for your help!
Try changing "Version mcfv = ..." to "def mcfv = ...".
Just out of curiousity, how are you debugging this? Are the exceptions showing up in logs and you are mapping them back to the source lines?
That gives me a similar error. I'm just looking at the errors that show up in the custom script validator entry field. See (this site won't let me attach images for some.
|
https://community.atlassian.com/t5/Jira-Core-questions/How-can-I-use-an-Adaptavist-Behaviour-to-make-a-field-required/qaq-p/348162
|
CC-MAIN-2019-13
|
refinedweb
| 1,235
| 66.03
|
Viewing Structure of a Source File
You can examine the structure of the file currently opened in the editor using the Structure tool window or the Structure pop-up window.
By default, PhpStorm shows all the namespaces, classes, methods, and functions presented in the current file.
To have other members displayed, turn
To view the file structure, do one of the following
- On the main menu, choose.
- Pres StructureTool Button
- Press Alt+7.
- Press Ctrl+F12.
To have class fields displayed
- Turn on the Show Fields option on the context menu of the title bar.
To have constants displayed
- Turn on the Show Constants option on the context menu of the title bar.
To have inherited members displayed
- Click Show Inherited option on the context menu of the title bar.
By default, PhpStorm shows only methods, constants, and fields defined in the current class. If shown, inherited members are displayed gray.
To have included files displayed
- Turn on Show Includes option on the context menu of the title bar..
|
http://www.jetbrains.com/phpstorm/help/viewing-structure-of-a-source-file.html
|
CC-MAIN-2015-40
|
refinedweb
| 169
| 63.09
|
Ask questionsSupport d.ts files
I have a simple example here it's a
.js file paired with a
d.ts file and deno doesn't throw any type issues where typescript does.
import lodash from './deno_modules/lodash.js' console.log(lodash.add('2', '2'))
Typescript does throw an issue here when run within the
node runtime.
example.ts(2,24): error TS2345: Argument of type '"2"' is not assignable to parameter of type 'number'.
@oldrich-s I think something like that is ideal.
Can it be achieved with triple slash directives?
/// <reference path="path/to/file.d.ts" />
Related questions
|
https://www.gitmemory.com/issue/denoland/deno/1432/513826794
|
CC-MAIN-2021-04
|
refinedweb
| 101
| 63.15
|
Listing 3. DEBUG Call Added to source/smbd/service.c
675 // ESK 676 if(strcmp(lp_servicename(SNUM(conn)), \ "share") && 677 strcmp(lp_servicename(SNUM(conn)), \ "profiles") && 678 strcmp(lp_servicename(SNUM(conn)),\ "netlogon") && 679 strcmp(lp_servicename(SNUM(conn)), "IPC$")){ 680 DEBUG(0, ("%s (%s) closed connection to \ service %s\n", 681 remote_machine,conn->client_address, 682 lp_servicename(SNUM(conn)))); 683 }
Listing 4. DEBUG Call Added to source/smbd/chgpasswd.c
447 448 /* Logs Password Change */ 449 if (chstat) 450 DEBUG(0, ("Password Change . User:[%s] \ %sPassword Successfully Changed\n", 451 name, (chstat ? "" : "un"))); 452 return (chstat); 453 } 3 sec ago
2 hours 33 min ago
2 hours 51 min ago
3 hours 21 min ago
3 hours 21 min ago
3 hours 22 min ago
6 hours 22 min ago
14 hours 49 min ago
14 hours 54 min ago
15 hours 24 min ago
|
http://www.linuxjournal.com/article/7251?page=0,2
|
CC-MAIN-2013-20
|
refinedweb
| 142
| 52.19
|
Creating a basic dynamic layout
Creating a basic dynamic layout
I just realized I posted this in the 1.x forum instead of the 2.x forum. Can I get a moderator to move this, or should I re-post?
Note: I've solved most of these issues by using a Viewport component as my base window instead of a panel, but I would still be curious to know why I got the error.
Hello,
I'm working on designing and developing a new application that I have decided to use Ext for, I'm more excited about Ext then I've been since I learned PHP several years ago. I'm just getting started though, and I'm still trying to wrap my brain around some of the basic concepts. I'm clearly missing something. I've tried searching for the error messages I'm getting but I'm not finding the answers I'm looking for and I haven't found documentation or examples that cover this. If those do exist, a link to them would probably be sufficient.
The application I'm building needs to pretty much do everything dynamically, in terms of the interface that it's going to show. When the page loads I'm not sure if I'm going to show a login form or a full-blown portal with grids and things, so I can't have everything render to existing HTML elements. So I'm just starting by setting up a panel that I render to the body where I can put a header with a logo or something, a top toolbar where I can add whatever buttons are necessary after figuring out what the user needs to see, the body content, and a footer at the bottom with some text in it. It works fine to add that main panel by using Ext.getBody for the renderTo config option, I see it show up at max width with my header and title in it (side question: how easy is it to make my main panel 100% of the viewport height?).
Then I try to create a second panel for my footer text (I'm not even sure if that's the correct component to use for a text container). I use the add method of the main panel to add my footer panel, and then when I use the render method on the footer panel I get an error that says "ct has no properties" on line 15633 of the debug file. I can't see anything in the source file that tells me what is wrong, but the undefined values in the call stack are telling me that I'm not understanding something correctly. In my mind it makes sense to add one panel to another and then call render on the second one to get it to show up.
Here is the stack:
onRender(null, null)ext-all-debug.js (line 15633)
render(undefined, undefined)ext-all-debug.js (line 12250)
render()ext-all-debug.js (line 13807)
init()lms.js (line 48)
fire()ext-all-debug.js (line 1504)
fireDocReady()
My HTML:
Code:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" ""> <html> <head> <meta http- <title>LMS 7</title> <link rel="stylesheet" type="text/css" href="ext/resources/css/ext-all.css"> <link rel="stylesheet" type="text/css" href="include/main.css"> <script type="text/javascript" src="ext/adapter/ext/ext-base.js"></script> <script type="text/javascript" src="ext/ext-all-debug.js"></script> <script type="text/javascript" src="include/lms.js"></script> <script type="text/javascript"> Ext.onReady(app.lms.init, app.lms); </script> </head> <body> <div id="require">The LMS requires that your browser support Javascript.</div> <script type="text/javascript"> document.getElementById("require").style.display = "none"; </script> </body> </html>
Code:
Ext.BLANK_IMAGE_URL = "../ext/resources/images/default/s.gif"; Ext.namespace("app"); app.lms = function() { // private variables var lms_name = "LMS"; var lms_version = "7.0"; var footer_text = "Text for the panel footer"; // private functions // public space return ( { // public properties main_win: null, main_footer: null, // public methods init: function() { Ext.Msg.show({ title: lms_name, msg: lms_name + " version " + lms_version + " initialized.", buttons: Ext.Msg.OK, icon: Ext.Msg.INFO }); main_win = new Ext.Panel({ autoHeight: true, autoScroll: true, autoWidth: true, buttonAlign: "left", elements: "header,tbar,body", id: "lms_main_win", renderTo: Ext.getBody(), title: lms_name }); main_footer = new Ext.Panel({ autoHeight: false, height: 50, elements: "body", id: "lms_main_footer" }); main_win.add(main_footer); main_footer.render(); } }); }();
|
http://www.sencha.com/forum/showthread.php?29156-Creating-a-basic-dynamic-layout&s=89d92ab575ee708210df3006375a8051&p=136845
|
CC-MAIN-2014-15
|
refinedweb
| 741
| 65.22
|
Lisa Rein
Lisa Rein is a co-founder of
Creative Commons
, a video blogger at
On Lisa Rein's Radar
, and a singer-songwriter-musican at
lisarein.com
. She is also a
freelance journalist
, writing for publications such as OpenP2P.com, XML.com, Wired News, CNET, Web Review, Web Techniques and many others.
Lisa is a volunteer for the
Electronic Frontier Foundation
and the
Internet Archive
and teaches XML for the University of San Francisco. She is currently going for her Masters Degree in Broadcast Electronic Communication Arts at San Francisco State University.
Website
:
Articles by this author:
W3C XML Schema Tools Guide
A run-down of editors, validators and code libraries with support for XML Schema.
[Dec. 13, 2001]
Getting Started With Microsoft's New XML Processor
Microsoft has released the first of a series of "technology previews" of its XML processor. Lisa Rein presents an introduction to MSXML2 and a quick-start guide for using it with IE5.
[Feb. 9, 2000]
XML '99: Quotes from the Conference Floor
Lisa Rein was out and about at XML'99, gathering opinions from conference delegates.
[Dec. 15, 1999]
The W3C, P3P and the Intermind Patent
What danger do claims of patent infringement hold for implementors of the W3C's Platform for Privacy Preferences framework? Lisa Rein reviews the recent analysis issued by the W3C.
[Nov. 3, 1999]
Overview of P3P
A brief overview of the W3C's Platform for Privacy Preferences framework.
[Nov. 3, 1999]
XHTML: Three Namespaces or One?
It sounds like a religious debate from the days of the Byzantine empire. Whether XHTML should have three namespaces or one has been a question that's consuming the top minds in the XML community for the last month.
[Oct. 6, 1999]
Report from Montreal
Lisa Rein reports from MetaStructures 99 and XML Developers' Day.
[Aug. 25, 1999]
Monitoring Updates with XML and Java
XSA is a Java-based tool for monitoring updates that uses XML to organize information about software products.
[Jun. 23, 1999]
P3P: An Emerging Privacy Standard
The W3C has released the latest draft of a privacy protocol that should let agents work smoothly between browsers and web sites, in accordance with the user's preferences. Also, Microsoft and Trust-E have developed a wizard to help site owners create privacy guidelines.
[May. 5, 1999]
Privacy Statement for Lisa Rein
An example Privacy Policy generated by the Privacy Wizard.
[May. 5, 1999]
What Went On at QL'98
This document provides an overview of the QL'98 workshop organized by the W3C.
[Mar. 2, 1999]
LINKS: Key Papers and Participants
This document links to the position papers presented at QL'98 and the companies represented there.
[Mar. 2, 1999]
The Quest for an XML Query Standard
A W3C workshop on query languages for XML produced a number of interesting proposals for extracting information more efficiently from XML documents.
[Mar. 2, 1999]
Considering XSL Extensions, XQL and Other Proposals
This article reviews the major proposals for a standard query language discussed at XL'98.
[Mar. 2, 1999]
The doctor will see you now
Using XML and other standards-based technologies, seafarers are no longer out to sea when it comes to specialized medical care. (Part 5)
[Dec. 19, 1998]
How it works
Using XML and other standards-based technologies, seafarers are no longer out to sea when it comes to specialized medical care. (Part 4)
[Dec. 19, 1998]
Proof of Concept: JABR Technologies' Consult98 Implementation
Using XML and other standards-based technologies, seafarers are no longer out to sea when it comes to specialized medical care. (Part 3)
[Dec. 19, 1998]
Standards to the rescue!
Using XML and other standards-based technologies, seafarers are no longer out to sea when it comes to specialized medical care. (Part 2)
[Dec. 19, 1998]
XML and Standards Rescue Ship-to-Shore Telemedicine
Using XML and other standards-based technologies, seafarers are no longer out to sea when it comes to specialized medical care.
[Dec. 19, 1998]
Is HTML+Time Out-of-Sync With SMIL?
Microsoft's HTML+Time submission is a proposed HTML extension for describing time-based media. Is this approach in conflict with the recently approved SMIL recommendation?
[Oct. 7, 1998]
Live Data from WDDX
Software developers are finding out that XML can be used on many different levels for the representation of data structures used by programs written in different languages.
[Oct. 6, 1998]
Developers Driving XML in Montreal
The XML Developers Conference in Montreal, convened by XML WG Chair Jon Bosak and sponsored by the GCA, was a great opportunity to cover the many fronts of XML development.
[Aug. 28, 1998]
XML is Helping to Solve Real Estate Problem
A key application for the real estate industry is using XML to promote the exchange and aggregation of information for buyers of residential properties.
[Aug. 12, 1998]
Handling Binary Data in XML Documents
Binary data can present some interesting problems. This article looks at ways to support binary data such as images in XML documents.
[Jul. 24, 1998]
The XSA DTD
View the DTD used by XSA
[Jun. 23, 1998]
XML and Vector Graphics
A standard vector graphics format for the Web will provide lightweight Web graphics with more functionality and flexibility.
[Jun. 22, 1998]
PGML
The Precision Graphics Markup Language is an XML-based format based on the PostScript imaging model.
[Jun. 22, 1998]
VML
The Vector Markup Language submission is supported by Microsoft and likely will be deployed in IE5.
[Jun. 22, 1998]
CGM and Web Schematics
CGM is an established graphics standard for the CAD industry. It has proven too complex for the Web. The Web Schematics submission looks at a much simpler version for 2D diagrams.
[Jun. 22, 1998]
Sponsored By:
|
Our Mission
|
|
Advertise With Us
|
|
Submissions Guidelines
Copyright © 2008 O'Reilly Media, Inc. | (707) 827-7000 / (800) 998-9938
|
http://www.xml.com/pub/au/5
|
crawl-001
|
refinedweb
| 975
| 52.8
|
thanks for the code but i do not want it to print 1 and itself can you change that or no?
Type: Posts; User: Imreallyawesome
thanks for the code but i do not want it to print 1 and itself can you change that or no?
hi, i have a school project that asks for 2 classes to print factors of a number using 2 methods nextFactor and hasMoreFactors it does not specify if they are supposed to be void or not so... so far...
o wow i put it backwards it all works now, woohoo! thanks for the help tho it was very kind of you to deal with me lol
ok i figured out where the problem is but the error doesnt make sense i do that the money you have / the cost of the fish but none of the responses make sense could you look please?
alright i switched the ints for doubles and now instead of making the profit 0 if i cant afford the fish it made it 0 for levels even if i have the required level any ideas? it still gave me the...
whats the uh idk what to call it for integer its .nextInt what is the double version
would dividing them then rounding them round down? could i just do that?
thanks for the info but i cant do fractional fishes so how can i get around that
ok i changed it i hope that helped please help me figure out the problem
im going to change how i posted it to make it more clear or at least try to sorry this is my first post
ok the program should have said "Your best choice is bandedbutterfly for $#### a day"
or maybe even "Your best choice is greensnpper for $#### a day"
each fish makes different profit over different...
well i cant afford the that fish and it should be probably banded butterfly's thats what i wanted to use this for to figure out which would be the best according to how much i have and how much...
i use BlueJ btw my school used it in teaching it has a built in compiler much faster then command prompt
when i execute i get this
Hello I(Aaron James Rasmussen) have made this program to help with your cost/time/profit deals
How much money do you have?
30000
What is the maximum number of...
i edited it so its easierish lol sorry that it looks like huge texts blocks i really want this to work it would help me alot
im not showing them off i coulden't get it to work it gives me wrong answers for the right type and the how much profit
import java.util.*;
import java.io.*;
import java.lang.*;
public class Tester
{
public static void main(String Args[])
{
Scanner KbReader = new Scanner(System.in);
...
|
http://www.javaprogrammingforums.com/search.php?s=52e945dfd1a03c4f43a71596426c02dc&searchid=1274061
|
CC-MAIN-2014-52
|
refinedweb
| 481
| 75.47
|
Unresolved external symbol when using Qt derived classes in multiple DLLs
Hi there
I'm not sure that this is the right forum but since it has to do with compiling and MOCcing I decided to give it a try here :)
My environment:
- Windows 7 64 bit
- Visual Studio 2008 64 compiler
- Eclipse IDE
- Qt 4.8.0 64 bit as DLLs with dynamic linking
I use Qt project files and run qmake and nmake to create Makesfiles and build my projects from Eclipse.
What I did and what went right:
- wrote custom a.dll with classes deriving from Qt widget classes
- wrote a custom c.exe using my classes from a.dll
Everything ok on this end, things work as expected.
What I'm trying to do now and what drives me crazy:
- wrote custom a.dll with classes deriving from Qt widget classes
- wrote a custom b.dll with classes deriving from classes in a.dll
- wrote a custom c.exe using my classes from b.dll
That does not work and produces unresolved symbol warnings like this one:
@my_appbase_main_window.obj : error LNK2001: unresolved external symbol "public: static struct QMetaObject const Tools::FilterBox::staticMetaObject" @
In this example I have a class FilterBox that inherits QLineEdit. The namespace with the FilterBox class sits inside a.dll.
Now I wanted to inherit the FilterBox in b.dll. But just including the corresponding header for the FilterBox class (not yet deriving from this class and not even instanciating the FilterBox class) in my_appbase_main_window.cpp in b.dll makes the linker produces this error.
Let me add that I can actually use the FilterBox class in my c.exe project. I can even write a new class inheriting FilterBox in the c.exe project. No problem with that. But somehow it seems I cannot use my FilterBox (and all my toher classes for that matter) in yet another DLL building on top of them. :(
This problem only occurs with MOC'ced classes. I can write and use non-MOC'ed QWidget derived classes or even non-Qt classes in my a.dll and use it in b.dll.
My QMake project file for a.dll (and my other DLL for that matter) looks like so :
@
DEFINES += MY_EXPORTS
TEMPLATE = lib
TARGET =
DEPENDPATH += . source
QMAKE_CFLAGS_RELEASE += -Zi
QMAKE_CXXFLAGS_RELEASE += -Zi
QMAKE_LFLAGS_RELEASE += /DEBUG /PDB:my_tools.pdb
INCLUDEPATH += "$$(MY_PATH)\include"
LIBS += -L"$$(MY_PATH)\release\lib"
LIBS += -lmy_whatsoever_dll
HEADERS += ToolsFilterBox.h
SOURCES += source/tools_filter_box.cpp
@
Finally, to export/import my classes I use the following macro and the now famous FilterBox looks like so:
@// MyCoreTypes.h
#ifdef MY_EXPORTS
#define MY_DLL_API __declspec(dllexport)
#else
#define MY_DLL_API __declspec(dllimport)
#endif
@
@
#ifndef TOOLSFILTERBOX_H_
#define TOOLSFILTERBOX_H_
#include <QtGui/QLineEdit>
#include <MyCoreTypes.h>
class QToolButton;
namespace Tools
{
class MY_DLL_API FilterBox : public QLineEdit { Q_OBJECT; public: FilterBox(const QString &ghost_text, QWidget *parent); virtual ~FilterBox();
signals:
void filterRemovedSignal();
void filterChangedSignal(const QString &filter_text);
protected: virtual void resizeEvent(QResizeEvent *); private slots: void updateClearButton(const QString &text); };
} // namespace
#endif@
What I don't understand is that I can use all of my classes inheriting Qt classes inside the same DLL and even inside my EXE project. So the MOC and importing/exporting works ... basically. I can also use all of my non-MOC classes without Q_OBJECT and signals and slots inside other DLL.
But as soon as I have to MOC a class of mine I cannot use it in another DLL but only in the same DLL or in an EXE project.
Hopefully someone here can give me a helping hand? :)
thanks
Steve
gnarg
I was of course using the same MY_DLL_API define for all of my DLL projects. So b.dll was also declaring the classes it included from a.dll as __declspec(dllexport) and not as __declspec(dllimport). I'm not quite sure why this was no issue when using non-MOC'ed classes but that somehow worked.
Now I started using different #defines for each of my DLL projects so that classes from one DLL are used as dllimport in all other DLLs utilizing or inheriting them. That not only seems plausible now but also works as expected including MOC'ed classes.
Anyway, thanks for reading :)
|
https://forum.qt.io/topic/22612/unresolved-external-symbol-when-using-qt-derived-classes-in-multiple-dlls
|
CC-MAIN-2017-51
|
refinedweb
| 691
| 56.86
|
Search...
FAQs
Subscribe
Pie
FAQs
Recent topics
Flagged topics
Hot topics
Best topics
Search...
Search within Swing / AWT / SWT:
Swing / AWT / SWT
ComboBox Item Display
Danie Van Eeden
Greenhorn
Posts: 13
posted 18 years ago
Number of slices to send:
Optional 'thank-you' note:
Send
Hi,
Ive got a JComboBox component, placed on a panel with gridbaglayout which in turn is placed on a JTabbedPane.
I populate the combobox through coding from a SQL server database.
My problem is that none of the records that i add to the combobox are being displayed. When i check the recordcount, it is equal to the number of records added. I am also able to use the info in the combo box via referencing it. eg. Class.setTitle(Combobox.getItemAt(0).toString());
the Title is thus set to the appropriate record, but still I cant see these records in the combo box itself.
another thing: I sequencially add my containers. First I add the Combobox to a panel with gridbaglayout. then I add the TabbedPane with this panel. then i add the TabbedPane to the JFrame. If i populate the Combobox before adding it to the form, everything is fine, otherwise not.
quite odd to me. any help would be appreciated
regards
Regards<br />-dve83-
Wayne L Johnson
Ranch Hand
Posts: 399
posted 18 years ago
Number of slices to send:
Optional 'thank-you' note:
Send
A code sample from your application would be useful here, but let me suggest one potential problem [that I've run into before]. Make sure that when you populate your "Combobox" with the live data, that you are updating the existing instance and not creating a new instance.
In the following code there is a problem:
import javax.swing.*; public class TestCombo extends JFrame { public TestCombo() { String[] items = { "one", "two", "three", "four" }; JComboBox comboBox = new JComboBox(); this.getContentPane().add(comboBox); comboBox = new JComboBox(items); this.pack(); this.show(); } public static void main(String[] args) { new TestCombo(); } }
The first combobox is created and added to the frame, and then another combo box is created with the real data. What you would want to do is either create the combobox with the real data, or do something like this:
import javax.swing.*; public class TestCombo extends JFrame { public TestCombo() { String[] items = { "one", "two", "three", "four" }; JComboBox comboBox = new JComboBox(); this.getContentPane().add(comboBox); comboBox.setModel(new DefaultComboBoxModel(items)); this.pack(); this.show(); } public static void main(String[] args) { new TestCombo(); } }
But this is only a guess w/out seeing your code.
Danie Van Eeden
Greenhorn
Posts: 13
posted 18 years ago
Number of slices to send:
Optional 'thank-you' note:
Send
hi and thanks
my code is as follows:
got a function in different class (say as follows):
public JComboBox getdata (Recordset localrs)
{
JComboBox localCombo = new JComboBox();
localrs.movefirst();
while (!localrs.getEOF())
{
localCombo.addItem(localrs.getFields("Name").getString());
localrs.movenext();
}
}
now in main class i have something like this:
inside actionEvent methods:
JComboBox mainCombo = new JComboBox();
Recordset mainrs = new Recordset();
mainCombo = getdata(mainrs);
the data in mainrs is retrieved elsewhere, but it does definitely contain data. im using visual J++ and when i check data values in runtime through debug-type window thingy, i can see hat the data has been added but itjust wont display.
something just started to mess with my head. could it be that JCombobox and Recordset (without J) are not compatible. I think im going to check this out anyway
all help appreciated
danie
Regards<br />-dve83-
Wayne L Johnson
Ranch Hand
Posts: 399
posted 18 years ago
Number of slices to send:
Optional 'thank-you' note:
Send
I think what you are seeing is an example of what I talked about in my first reply. If you look at the code in the "actionEvent" method:
JComboBox mainCombo = new JComboBox(); Recordset mainrs = new Recordset(); mainCombo = getdata(mainrs);
The first line, "JComboBox mainCombo = new JComboBox()" creates an instance of a JComboBox and assigns it to "mainCombo". Two lines later, "mainCombo = getdata(mainrs)" will create a second instance of a JComboBox and assign that to "mainCombo".
The second instance (created by the call to "getdata(...)" is the instance that contains the data. The first instance, no longer explicitly referenced in the code, may still be around (if there is a reference to it), but it contains no data.
So the question becomes, which instance of "JComboBox" that you created is in the UI? In fact, if the three lines of code [cited above] are in a method, then the "mainCombo" instance variable will be local only to that method and whatever appears in the UI will be another--a third--instance of "JComboBox".
If it's not too much trouble, provide the full code [at least in terms of the instance variables, UI building, local methods] and it should be easy to spot the errant code.
Danie Van Eeden
Greenhorn
Posts: 13
posted 18 years ago
Number of slices to send:
Optional 'thank-you' note:
Send
thanks, couldn't replay in the last few weeks. but I appreciate the help. thats exactly it. another instance of Combobox was made and the original one just never got the data. I finally figured i could pass the Combobox to populate as a parameter to a publiv void and it changes accordingly. thanks
Regards<br />-dve83-
WHAT is your favorite color? Blue, no yellow, ahhhhhhh! Tiny ad:
Building a Better World in your Backyard by Paul Wheaton and Shawn Klassen-Koop
reply
reply
Bookmark Topic
Watch Topic
New Topic
Boost this thread!
Similar Threads
On button click populate combobox
Comment on GUI..
getting jcombobox mouse over value
Problem with GridBagLayout Manager while adding any component
JComboBox doubt
More...
|
https://coderanch.com/t/336599/java/ComboBox-Item-Display
|
CC-MAIN-2022-05
|
refinedweb
| 949
| 60.55
|
FirebaseIOS: Use of Undeclared Type 'DatabaseReference'
use of undeclared type datasnapshot
firebase ios tutorial
use of unresolved identifier 'firestore' swift
firebase database ios
pod 'firebase/database
Sorry for the weird question
Yesterday I had to update my firebase pod, before that everything was fine, but after that, I can't retrieve data anymore
So here's my code
// let userID = FIRAuth.auth()?.currentUser?.uid var rsef: DatabaseReference! // undeclared rsef = Database.database().reference() //. undeclared
I read the official firebase setup instructions, those are right, but I don't know why it says undeclared
For reference, here's my full code
ref.child("KurdishRsta").child((FIRAuth.auth()?.currentUser?.uid)!).childByAutoId().queryOrderedByKey().observe(.childAdded, with: { (snapshot) in print("Database\(String(describing: snapshot.value))") let value = snapshot.value as? NSDictionary let FullRsta1 = value?["Rsta"] let FullMeaning1 = value?["Meaning"] self.RetrivedRsta.insert(RstasFromFirebase(FullRsta:FullRsta1 as! String ,FullMeaning : FullMeaning1 as! String), at: 0) self.tableview.reloadData() }) }
the podfile
Target 'Dictionary' do # Uncomment the next line if you're using Swift or would like to use dynamic frameworks use_frameworks! pod 'SDWebImage', '~>3.8' pod 'Firebase/Core' pod 'Firebase/Auth' pod 'Firebase/Storage' pod 'Firebase/Database' # Pods for Dictionary pod 'SVProgressHUD' pod 'SKSplashView' pod "FGTranslator" pod 'SCLAlertView-Objective-C' pod 'OneSignal' pod 'Google/Analytics' pod 'Firebase/Core' pod 'Firebase/Auth' pod 'Firebase/Storage' pod 'Firebase/Database' pod 'ChameleonFramework' pod 'QMChatViewController' pod 'ApiAI' pod 'Firebase/RemoteConfig' pod "ZHPopupView" pod 'FCAlertView' pod 'JSQMessagesViewController' pod "CZPicker" pod 'DTTJailbreakDetection' pod 'MBProgressHUD', '~> 1.0.0' pod 'PayPal-iOS-SDK'
Be sure to add an import of firebase database in the file you call DatabaseReference and not just import Firebase alone
import FirebaseDatabase
Firebase References undeclared, var ref:FIRDatabaseReference! in the same ViewController, then it throws an error: Use of undeclared type 'FIRDatabaseReference'. And this� This page describes the data types that Cloud Firestore supports. Data types. The following table lists the data types supported by Cloud Firestore. It also describes the sort order used when comparing values of the same type:
if it worked before then it must be the pod file, here is mine for reference.
# Uncomment this line to define a global platform for your project # platform :ios, '9.0' pod 'Firebase/Core' pod 'Firebase/Messaging' pod 'Firebase/Database' pod 'Firebase/Crash' pod 'Firebase/Auth' pod 'FacebookCore' pod 'FacebookLogin' target 'MyAwesomeApp' do # Comment this line if you're not using Swift and don't want to use dynamic frameworks use_frameworks! # Pods for MyAwesomeApp target 'MyAwesomeAppTests' do inherit! :search_paths # Pods for testing end target 'MyAwesomeAppUITests' do inherit! :search_paths # Pods for testing end end
Firestore iOS 11.0.2 � Issue #349 � firebase/quickstart-ios � GitHub, when I run my app in my device with iOS 11.0.2 I get this errors from Firestore. " Use of undeclared type 'DocumentReference'" "User of und For iOS client apps, you can receive notification and data payloads up to 4KB over the Firebase Cloud Messaging APNs interface. To write your client code in Objective-C or Swift, we recommend that you use the FIRMessaging API.
Swift Version 5, Firebase (6.5.0), FirebaseAnalytics (6.0.4), FirebaseDatabase (6.0.0)
Follow steps on the Google's Page
Earlier use of
import FirebaseDatabase has been removed.
Use only
import Firebase in a Swift file and to use a database reference a variable can be used as
var firebaseDatabaseRef: DatabaseReference!
If the issue persist please try to clean and build a project couple of times or use following commands from the terminal and try to clean and build again.
$pod deintegrate
and then
$pod install
Podfile:
pod 'Firebase/Analytics' pod 'Firebase/Database'
Use of undeclared identifier , Hello, recently i wanted to implement firebase ui auth in my project specially is: Use of undeclared identifier 'FIRAuthErrorUserInfoUpdatedCredent. pod ' GooglePlacesSearchController' pod 'ChameleonFramework/Swift'� SwiftUI Preview quit working: use of undeclared type in thunk My preview was working great. Then, without any code modifications, I went to show it to my colleagues and it started failing when I hit resume (I made no modifications to this project.
In my case, git was ignoring
Pods directory so pretty much every time I changed branch I had to run
pod install command (minor changes on the configurations).
In one of those changes I started getting this error. I tried cleaning build folder, delivery data folder, restart, and none worked until I delete the content of Pods directory and run
pod install.
xcode, swift4 xcode9.2 Firebase official video The code is imitated by looking at it, but Use of undeclared type'DataSnapS. Last I recall, BindableObject was renamed to ObservableObject and @ObjectBinding is now @ObservedObject. Additionally, in an ObservableObject you no longer need to implement didChange yourself, you just use the @Published attribute on any properties you want to publish and the rest will be taken care of for you.
FirebaseIOS: Use of Undeclared Type 'DatabaseReference', currentUser?.uid var rsef: DatabaseReference! // undeclared rsef = Database. database().reference() //. undeclared I read the official firebase setup instructions, � The type of this object is JSQMessage, as defined in property messages, so that’s why you can use the property on that message object. Pfew! That’s quite a bit of code… Ready to finally write some interactive, message-sending code?
Authenticate Using Google Sign-In on iOS, In your app delegate's application:didFinishLaunchingWithOptions: method, configure the FirebaseApp object and set the sign-in delegate. Swift�
Re: [Firebase] Use of undeclared type with FIRDatabase and , Hi Fred! Once you have Firebase installed, there's a bit of setup to enable some of the components, such as the database. Just in case you�
- Can you post your podfile and the import of Firebase in your code please ?
- @GabrielDiez ofcourse, i've edited my post
- What happens when you run ‘pod outdated’ from the command line? Maybe for some reason it’s pulling an older dependency if it’s in the Podfile.lock file. Actually another way you could check is if you replace ‘DatabaseReference’ with ‘FIRDatabaseReference’ and if that works, it pulled the outdated pod for some reason.
- The Firebase documentation doesn't mention importing FirebaseDatabase. Your solution does seem to work for me though!
- Google should just make up their mind about what each framework contains once and for all. They keep changing this from one version to another... Docs are outdated.
- thanks, neither i dont think its the podfile, i guess its firebase' is it working on your side?
- The error you're getting suggests a dependency or scoping issue. Very unlikely that its related to your code. If you email me the file in question I might have a better chance at solving your issue. I have to add the good ol' "have you restarted your computer" question too, apple products are far buggier than people like to admit.
- pod 'Firebase/Database' is the key
|
https://thetopsites.net/article/54883220.shtml
|
CC-MAIN-2021-25
|
refinedweb
| 1,129
| 55.64
|
#include <XtReactor.h>
#include <XtReactor.h>
Inheritance diagram for ACE_XtReactor:
0
DEFAULT_SIZE
[virtual]
[private]
Deny access since member-wise won't work...
1.
[static, private]
[protected, virtual]
Register a set of <handlers>.
Register a single <handler>.
Remove a set of <handles>.
Remove the <handler> associated with this <handle>.
Removes an Xt handle.
This method ensures there's an Xt timeout for the first timeout in the Reactor's Timer_Queue.
Resets the interval of the timer represented by <timer_id> to <interval>, which is specified in relative time to the current <gettimeofday>. If <interval> is equal to <ACE_Time_Value::zero>, the timer will become a non-rescheduling timer. Returns 0 if successful, -1 if not.()
Wait for events to occur.
Wait for Xt events to occur.
[protected]
|
http://www.theaceorb.com/1.4a/doxygen/ace/classACE__XtReactor.html
|
CC-MAIN-2017-51
|
refinedweb
| 124
| 53.98
|
Question & Answer
Question
How to manually retrieve the contents of MyFoder of the user whose CAMID has changed
Cause
Structural changes in the Authentication Source
Answer
If user does not have the SDK available and the authentication source backup is misplaced, there is a manual way of restoring the MyFolder Contents.
Run a Content Manager Maintenance Task which checks the namespace in question, from which the user was deleted..
(User must have the required permission to access Cognos Administration functionality)
When prompted for run options choose “Find only”, don’t FIX anything as this will remove the user object and the MyFolders
Once the task has been run display the outcome by going to the task’s “more…” -> View Run History option. From the most recent run click “View run history details” and find the CAMID of the deleted users.
Log in to Cognos Connection and create a backup folder at the root of Public Folders which shall take on the contents of the MyFolders. Once created view the properties of that folder and obtain it’s search path
Access <GW_host>:<GW_port>/<alias>/cm_tester.htm
This will bring up some Java Script controlled page which can be used to send commands to Content Manager.
Press “Options”
In the upcoming dialog specify your Content Manager URI and press “Test…” to verify it’s working. You will see the Content Manager status page if the connection can be established fine.
Press “Close” to exit this dialogue.
If you followed the steps in the correct order you will already have been logged on to Cognos and hence will have a CAM_PASSPORT cookie in your browser session. This will be reflected in cm_tester as well.
If not follow these steps to authenticate.
Press “Log On…” in the upper left
In the upcoming dialog specify namespaceID and credentials of a member of the System Administrator Role. Press “Log On” to authenticate to Cognos and see a passportID appearing in screen like shown above.
Continue with issuing a copy operation like this
From the “choose a request template” drop-down select “copy”
Press “Send” now
Prompts will appear which prompt for PATH and TARGET_PATH.
For PATH enter the Search path deducted from the Maintenance Task output and append /folder[@name='My Folders'] to it.
Example: CAMID(“openldap:u:cn=user2,ou=people”)/folder[@name=’My Folders’].
For TARGET_PATH enter the search path of the newly created backup folder.
Wait for the command to complete, once completed some content will be displayed in the lower frame.
All objects from the user’s MyFolder will now have been copied to the backup folder.
NOTE: The information provided in this technote is an extract from the proven practice document titled "Handling Security Changes" to isolate a non-SDK solution to retrieve users "My Folders" when their CAMID has changed. If above steps are not carried out carefully you may not get the desired result. For any queries related to any of the above steps contact IBM Cognos support.
Related Information
Was this topic helpful?
Document Information
Modified date:
23 June 2018
UID
swg21414890
|
https://www.ibm.com/support/pages/manually-retrieve-contents-myfolder
|
CC-MAIN-2022-40
|
refinedweb
| 514
| 60.75
|
Template::Plugin::Filter - Base class for plugin filters
package MyOrg::Template::Plugin::MyFilter; use Template::Plugin::Filter; use base qw( Template::Plugin::Filter ); sub filter { my ($self, $text) = @_; # ...mungify $text... return $text; } # now load it... [% USE MyFilter %] # ...and use the returned object as a filter [% FILTER $MyFilter %] ... [% END %]
This module implements a base class for plugin filters. It hides the underlying complexity involved in creating and using filters that get defined and made available by loading a plugin.
To use the module, simply create your own plugin module that is inherited from the Template::Plugin::Filter class.
package MyOrg::Template::Plugin::MyFilter; use Template::Plugin::Filter; use base qw( Template::Plugin::Filter );
Then simply define your filter() method. When called, you get passed a reference to your plugin object ($self) and the text to be filtered.
sub filter { my ($self, $text) = @_; # ...mungify $text... return $text; }
To use your custom plugin, you have to make sure that the Template Toolkit knows about your plugin namespace.
my $tt2 = Template->new({ PLUGIN_BASE => 'MyOrg::Template::Plugin', });
Or for individual plugins you can do it like this:
my $tt2 = Template->new({ PLUGINS => { MyFilter => 'MyOrg::Template::Plugin::MyFilter', }, });
Then you USE your plugin in the normal way.
[% USE MyFilter %]
The object returned is stored in the variable of the same name, 'MyFilter'. When you come to use it as a FILTER, you should add a dollar prefix. This indicates that you want to use the filter stored in the variable 'MyFilter' rather than the filter named 'MyFilter', which is an entirely different thing (see later for information on defining filters by name).
[% FILTER $MyFilter %] ...text to be filtered... [% END %]
You can, of course, assign it to a different variable.
[% USE blat = MyFilter %] [% FILTER $blat %] ...text to be filtered... [% END %]
Any configuration parameters passed to the plugin constructor from the USE directive are stored internally in the object for inspection by the filter() method (or indeed any other method). Positional arguments are stored as a reference to a list in the _ARGS item while named configuration parameters are stored as a reference to a hash array in the _CONFIG item.
For example, loading a plugin as shown here:
[% USE blat = MyFilter 'foo' 'bar' baz = 'blam' %]
would allow the filter() method to do something like this:
sub filter { my ($self, $text) = @_; my $args = $self->{ _ARGS }; # [ 'foo', 'bar' ] my $conf = $self->{ _CONFIG }; # { baz => 'blam' } # ...munge $text... return $text; }
By default, plugins derived from this module will create static filters. A static filter is created once when the plugin gets loaded via the USE directive and re-used for all subsequent FILTER operations. That means that any argument specified with the FILTER directive are ignored.
Dynamic filters, on the other hand, are re-created each time they are used by a FILTER directive. This allows them to act on any parameters passed from the FILTER directive and modify their behaviour accordingly.
There are two ways to create a dynamic filter. The first is to define a $DYNAMIC class variable set to a true value.
package MyOrg::Template::Plugin::MyFilter; use Template::Plugin::Filter; use base qw( Template::Plugin::Filter ); use vars qw( $DYNAMIC ); $DYNAMIC = 1;
The other way is to set the internal _DYNAMIC value within the init() method which gets called by the new() constructor.
sub init { my $self = shift; $self->{ _DYNAMIC } = 1; return $self; }
When this is set to a true value, the plugin will automatically create a dynamic filter. The outcome is that the filter() method will now also get passed a reference to an array of postional arguments and a reference to a hash array of named parameters.
So, using a plugin filter like this:
[% FILTER $blat 'foo' 'bar' baz = 'blam' %]
would allow the filter() method to work like this:
sub filter { my ($self, $text, $args, $conf) = @_; # $args = [ 'foo', 'bar' ] # $conf = { baz => 'blam' } }
In this case can pass parameters to both the USE and FILTER directives, so your filter() method should probably take that into account.
[% USE MyFilter 'foo' wiz => 'waz' %] [% FILTER $MyFilter 'bar' biz => 'baz' %] ... [% END %]
You can use the merge_args() and merge_config() methods to do a quick and easy job of merging the local (e.g. FILTER) parameters with the internal (e.g. USE) values and returning new sets of conglomerated data.
sub filter { my ($self, $text, $args, $conf) = @_; $args = $self->merge_args($args); $conf = $self->merge_config($conf); # $args = [ 'foo', 'bar' ] # $conf = { wiz => 'waz', biz => 'baz' } ... }
You can also have your plugin install itself as a named filter by calling the install_filter() method from the init() method. You should provide a name for the filter, something that you might like to make a configuration option.
sub init { my $self = shift; my $name = $self->{ _CONFIG }->{ name } || 'myfilter'; $self->install_filter($name); return $self; }
This allows the plugin filter to be used as follows:
[% USE MyFilter %] [% FILTER myfilter %] ... [% END %]
or
[% USE MyFilter name = 'swipe' %] [% FILTER swipe %] ... [% END %]
Alternately, you can allow a filter name to be specified as the first positional argument.
sub init { my $self = shift; my $name = $self->{ _ARGS }->[0] || 'myfilter'; $self->install_filter($name); return $self; } [% USE MyFilter 'swipe' %] [% FILTER swipe %] ... [% END %]
Here's a complete example of a plugin filter module.
package My::Template::Plugin::Change; use Template::Plugin::Filter; use base qw( Template::Plugin::Filter ); sub init { my $self = shift; $self->{ _DYNAMIC } = 1; # first arg can specify filter name $self->install_filter($self->{ _ARGS }->[0] || 'change'); return $self; } sub filter { my ($self, $text, $args, $config) = @_; $config = $self->merge_config($config); my $regex = join('|', keys %$config); $text =~ s/($regex)/$config->{ $1 }/ge; return $text; } 1;
Andy Wardley <abw@andywardley.com>
1.31,::Filters, Template::Manual::Filters
|
http://search.cpan.org/~abw/Template-Toolkit-2.14/lib/Template/Plugin/Filter.pm
|
CC-MAIN-2017-17
|
refinedweb
| 935
| 51.78
|
This tutorial will teach you how to develop a simple app for the iPad using Flash CS 5.5. In this two-part series, we'll be building a simple hangman game. After completing this series, you should feel comfortable enough to start developing your own iOS applications with Flash.
Step 1: Generating A CSR
Before we get started actually building our application, let's take a step back and get the development environment setup. If you're Flash developer new to iOS development, then you probably have no idea how to test Flash applications on an iOS device. Unfortunately, it isn't as simple as just clicking "Run". You'll first need to create what's called a CSR, or Certificate Signing Request. You'll need to upload this CSR to the Apple Provisioning Portal, which means you'll also need to create an iOS developer account at the official iOS development site. We won't cover all the steps of setting up an account with the iOS developer program here, but the instructions on Apple's site are easy to follow. If you aren't already an iOS development program member, sign up now.
After you've registered with Apple as an iOS developer, you're ready to continue setting up a CSR. Inside your applications folder on a Mac, you'll find a folder called utilities, and within this folder, there is an application called "Keychain Access". Open Keychain Access to get started with the certificate generation process.
NOTE: Unfortunately, you will need to be running the latest version of OS X in order to successfully, build, test, and deploy iOS applications, even though you are building them with Flash!
With the Keychain Access program open, choose the following menu options: KeyChain Access > Certificate Assistant > Request A Certificate From A Certificate Authority.
You will need to enter the e-mail address you used to register with apple, and then enter the same name as well. After you've filled these in, make sure "Saved To Disk" is selected and press continue. You will need to choose where to save the request. I chose to save it to the desktop.
Now we can upload it to the provisioning portal. Log into the developer center at developer.apple.com and on the left of the page you should see a link to the Provisioning Portal, go ahead and click that link.
Within the Provisioning Portal click on "Certificates" and then on the "Request Certificate" button.
At the bottom of the this page, browse to where you saved the CSR from the previous step, then click on "Submit".
You will be returned to the previous page. It will say "Pending Issuance" under "Status". Wait a few seconds, and refresh the page and your certificate should be available for download. Don't download it just yet, we still have a few steps to go.
Step 2: Registering a Device
Plug in your device and open iTunes. Make sure your device is selected, and you will see some information about it. Click on the Serial Number and it will change to your device ID.
Go to Edit > Copy. This will copy the device UDID to your clipboard.
Back in the Provisioning Portal, click on the menu item "Devices" and then choose "Add Devices"
Enter a name for your device and the "Device ID" you copied in the step above and then click on submit. Your device is now ready.
Step 3: Generating an App ID
Still within the Provision Portal, click on the menu item "App IDs", then choose "New App ID".
Enter a name that you want to reference this app by. I chose "MobileTuts Hangman". Next choose a Bundle Seed ID, choosing "Use Team Id" is most common.
Finally, choose your Bundle Identifier, I chose "com.mobiletuts.hangman".
When you are finished entering the information click "Submit".
If all goes well you should see your app enabled.
Step 4: Obtaining a Development Provisioning Profile
Choose the menu item "Provisioning", then making sure you are under "Development", choose "New Profile".
On the next screen, making sure you are under "Development", enter a "Profile Name" (I chose "MobileTutsHangman"), tick your certificate, choose the App ID you created in the step above, choose the device you are going to test on, and then click submit.
You will be returned to the previous page. The status will say pending, wait a few seconds and refresh the page, and your profile should be ready for downloading. Go ahead and download it now.
Next, go back to the "Certificates" link and you should see "MobileTutsHangman" listed under the "Provisioning Profiles" of your certificate.
Go ahead and download your certificate now, and, while still there, download the "WWDR intermediate certificate" as well.
Step 5: Adding the certificate to KeyChain Access and Creating a .P12
If you don't have keychain access open, do so now. Then double click on the certificate you downloaded above, add it to keychain access, and double click on the "WWDR intermediate certificate as well".
Within keychain access, choose certificates and then "twirl down" the certificate you just added, right click on your private key and choose "Export".
I saved mine to a folder named "MOBILETUTS HANGMAN". Make sure the type is set to "Personal Information Exhange (.p12)" and click "Save".
You will be prompted for a password for your .P12, make sure you remember this password, we will need it when we import the .P12 to flash. You will also need to enter the password for your machine after entering this password.
Step 6: Create a New Flash Project
Now that we have set everything up, we are ready to begin our Flash Project.
Go to File > New and Choose "Air For iOS".
Make sure the following properties are set, then press "OK".
- Width: 768
- Height: 1024
- Frame Rate: 24
- Background Color: White
Save this document as Hangman.fla.
Step 7: Creating Main.as
Go to file > new and choose "Actionscript File".
Save this as "Main.as", and then enter the following code:
package{ import flash.display.MovieClip; public class Main extends MovieClip{ public function Main() { trace("Working"); } } }
Set your Document Class to "Main" and test the movie. You should see your Movie open up in the ADL, and see "Working" traced to the output window.
Step 8: Adding A TextField To the Project
Choose the "Text Tool" and make sure the following properties are set under the "CHARACTER" panel.
- Size: 45pt
- Color: #0000ff"
Now drag a textfield out onto the stage and enter the following text into it: "MOBILETUTS+ HANGMAN".
Next, choose the "FILTERS" panel and click on the new filter button and choose "DROP SHADOW".
Set the following properties on the Drop Shadow.
- Blur X: 5px
- Blur Y: 5px
- Strength: 50%
- Quality: Low
- Angle: 45
- Distance: 5px
Finally, set the following properties on the TextField:
- X: 76.00
- Y: 26.00
- W: 561.00
- H: 56.00
Lastly, make sure the textfield is set to "Classic Text" and "Static Text".
Step 9: Setting the iOS Preferences and Building the App
Click on the Main Stage. Under "PUBLISH", select the Air For iOS option.
Under the "General Tab", Fill in the "Output File","App name", and "Version". I chose "MobileTutsHangman.ipa", "MobileTuts Hangman", and "1.0" for the version. Next, select "Portrait" for the Aspect Ratio, tick off the Full screen, choose "Auto" for rendering, "iPad" for the Device, and "Standard" for the Resolution.
Switch over to the "Deployment" tab. Browse for the .P12 file you created in the steps above, enter the password you made for it, and tick off "Remember password for this session". Next, browse for the provisioning profile you downloaded and add it.
Tick off "Quick publishing for device testing" and finally click "Publish". It can take a few minutes to build your app.
Step 10: Uploading the "Provisioning Profile to iTunes"
To be able to test on the device, you must first put the Provisioning Profile onto it.
Go to the folder where you saved the Provisioning Profile and drag it into the iTunes "Library". Next select your device under "Devices" and click on sync.
If you go to "Settings" on your IPad and select "General", You should see a "Profiles" Menu. Click it and you should see that your profile has been installed.
Step 11: Installing the App onto the iPad
Now that we have compiled the app and installed the Provisioning Profile, we can upload it to the iPad and test it to make sure everything has gone as planned so far.
Drag the .ipa file that was created when you published the application into the "LIBRARY" section for your iPad in iTunes, and then choose "Sync".
If all goes well you should be able to start the app and see the the text we added to the App.
Step 12: Creating the Default Image
We now have the app on our iPad, but it is blank when it starts up. In this step we will make the default graphic that will show while the app is loading.
I have included a file called Default-Portrait.png in the exercise files. You will need to make sure this is in the same folder as your .fla, and not in any subdirectories or it will not work. This image needs to be 768*1004 as per the instructions on the adobe site.
Back in Flash, go to the "Air For iOS" setting as you did in the steps above.
On the "General" tab, make sure all the settings are the same and click on the "Add" (+) button, browse to "Default-Portrait.png", and then add it to the "Included Files".
Step 13: Adding the Icons
The icons are used for the iPad itself as well as for the App Store. Go to the "Air For iOS" settings and click on the "Icons" tab. You must supply the icons in the sizes stated in this panel. I have included the icons in the "icon" folder of the download files. They are named icon(size).png, where size is the graphic size. Go through the icon sizes and browse to the respective icons.
Step 14: Republish the App
Make sure all the settings are the same on the "General" and "Deployment" tabs and click "Publish". Then add the .ipa file to iTunes, make sure you
delete the previous version off your iPad first, and then finally resync the ipa file with the iPad.
You should now see the app has a picture associated with it and the "Splash Screen" shows when you first start up the app.
Conclusion
With all of the "hard" stuff out of the way, we can begin programming our hangman game. Be looking for the next part of this tutorial to do just that. And thanks for reading!
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
|
https://code.tutsplus.com/articles/building-a-hangman-ipad-app-with-flash-getting-started--mobile-8098
|
CC-MAIN-2017-04
|
refinedweb
| 1,825
| 74.08
|
NAME
vga_gettextfont, vga_puttextfont - get/set the font used in text mode
SYNOPSIS
#include <vga.h> void vga_gettextfont(void *font); void vga_puttextfont(void *font);
DESCRIPTION
The functions gives access to the buffer where svgalib saves the font used by the kernel. This is used by the restorefont(1), savetextmode(1), and textmode(1) utilities to restore a possibly screwed console font. The font buffer occupies 8192 bytes. However, versions 1.2.13 and later use a larger buffer internally and the size of the buffers passed to vga_gettextfont(3) and vga_puttextfont(3) can be set with vga_ext_set(3). The functions give access to the internal buffers of svgalib, not the font tables of the VGA card. This means that both functions must be called in graphics mode and a newly set font takes effect only at the next vga_setmode(TEXT) call.
SEE ALSO
vga_setmode(3), vga_ext_set(3), svgalib(7), vgagl(7), libvga.config(5), restorefont(1), restoretextmode(1), restorepalette(1), savetextmode(1), textmode(1), vga_dumpregs(3), vga_gettextmoderegs.
|
http://manpages.ubuntu.com/manpages/intrepid/man3/vga_gettextfont.3.html
|
CC-MAIN-2013-48
|
refinedweb
| 166
| 57.16
|
8.6. Concise Implementation of Recurrent Neural Networks¶
While Section 8.5 was instructive to see how recurrent neural networks (RNNs) are implemented, this is not convenient or fast. This section will show how to implement the same language model more efficiently using functions provided by Gluon. We begin as before by reading the “Time Machine” corpus.
from d2l import mxnet as d2l from mxnet import np, npx from mxnet.gluon import nn, rnn npx.set_np() of size
(hidden layers, batch size, number of hidden units). The number of
hidden layers defaults to be 1. In fact, we have not even discussed yet
what it means to have multiple layers—this will happen in
Section 9.3. = np.random.uniform(size=px and Predicting¶
Before training the model, let us travellervmjznnngii'
As is quite obvious, this model does not work at all. Next, we call
train_ch8 with the same hyper-parameters defined in
Section 8.5 and train our model with Gluon.
num_epochs, lr = 500, 1 d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, ctx)
Perplexity 1.2, 135377 tokens/sec on gpu(0) time traveller have do now limic fo not meace i a not said the traveller but now you begin to seethe object of my investig
Compared with the last section, computational does not everyone use this model for text compression? Hint: what about the compressor itself?
|
http://d2l.ai/chapter_recurrent-neural-networks/rnn-concise.html
|
CC-MAIN-2020-29
|
refinedweb
| 229
| 66.44
|
A HT16K33 Display Library for easy printing to LED Modules
The Holtek HT16K33 is a LED Display driver IC and can be used with I2C. Just two I2C wires enables you to control lot of LEDs . As long as you have I2C available - you can easily add this display driver to your Arduino project.
The Holtek HT16K33 LED Driver Chip
The HT16K33 LED Driver Chip can control up to 16 x 8 LEDs. There are maker-friendly modules available with 7segment LEDs, 14segment LEDs, dot matrix and even "blank" PCB als simple backpacks for your own project. As the HT16K33 communicates via I2C you only need two GPIOs. If 128 LEDs are not enough - up to 8 chips can be controlled by selecting the I2C address. Yes - up to 1024 LEDs or 64 alphanumeric digits (each digit with 16 LEDs) driven by just two Arduino GPIOs! The HT16K33 comes in various SMD variants, the largest one can choose one of 8 eight I2C addresses (0x70 - 0x77). You only have to short 3 pads like in a 3bit binary Code (1-2-4). For example to get address 0x73 you have to connect the pads 1 and 2. As with every other I2C device an I2C Scanner will help you to find the right address of your device (yes - I did it wrong on my first multi display also...).
The HT16K33 supports "brightness" and several frequencies for "blinking".
But
only for all digits of the IC, not individually per LED segment or digit.
Modifications of Noiasca HT16K33 library - Why a new library?
In the Arduino world we are constantly using the so called "print" class. You will use it for Serial output (Serial.print), printing to LCDs, OLEDs or even for the output of a webserver. I added this "print" method to the Noiasca HT16K33 library. The new library offers easier output with print (and some other helpers). The idea is simple: I'm using "print" for output on Serial and I want to use it the same manner with the LED displays.
If 8 digits are not enough, you can combine modules to one large logical display. You can handel the large display "as one" and print out text like you would do it with Serial.
display.print(F("even a long text expanding over up to 8 modules"));
So - if you need to print lot of text (or large numbers) on LED displays or displays with more than one IC this library could make your life easier. And yes: you can save precious SRAM, keep all data in Flash Memory by using the F-Makro!
The modifications - What's the background? library inherits the print class (which uses the write method) and now we are able to "print" integers, floats, C-strings (char arrays), (Arduino) Strings to the display like you are used to from other libraries.
As you might know from other libraries they offer some helper methods. The method
display.setCursor(newPosition);
is such an example, it will set the cursor to a defined position. Later on you will read about more important methods.
7 segment vs 14 segment character set
As explained on my MAX7219
page, there are a some restrictions on 7 segment displays, and all this
restrictions are still true for the HT16K33 library. This is a limitation of the
LED display itself, not the driver chip. Obviously 14 segment displays can show much
better characters.
Currently I haven't found any 16 segment modules. Therefore a dedicated 16 segment support is still missing. But as soon as someone points me to affordable HT16K33 modules with 16 segment LEDs, the implementation should be straight forward.
Speaking about character sets, the Noiasca HT16K33 library supports printable characters 0d32 to 0d127 from the ANSI character table. Starting with Version 1.1.0 you can chose from several character sets if you don't like mine. The differences are minor, but it's up to you, use what you like best.
Points or Dots
Printing points or dots needs some additional explanation. As printing to the display is done character by character we have to find a way how to "activate" the decimal point of a previous printed digit. This is simply done by storing the last printed character. Therefore we can "reprint" the previous digit with an activated dot.
This works quite well - up to the point if the previous character is a dot. This just "breaks" the print out. This means you can easily print a dot after another character - but not a dot after a dot.
So this is will not work:
...
instead print a blank before the next dot, like following:
. . .
This will be shown up correctly on the display - and if you carefully check your display it is exactly what you want: 7 segments must not be enlighten (all off), but the decimal point does. So it makes absolutely sense to print blanks between the dots.
By the way: if you want print floats, you don't have to care about the comma, it will be printed accordingly:
display.print(14.32);
If you only need one decimal - this function comes already with the print function:
float myFloat = 14.32; display.print(myFloat, 1); // will print 14.3 only
End of Line
If we come to the "last" digit of our device, we have to to decide what should happen with the next character. On default the library jumps to the "NEXT_DEVICE". If you have written the last position on the last device, the display will wrap around to the first device.
The Linefeed and Carriage Return
In this library, linefeed (LF, 0d10, 0x0A, \n) and carriage return (CR, 0d13, 0x0C, \r) will have no special effect. Both characters are below 0d32 and therefore classified as not printable. So even if I recommend the print, println will have no negative effect as the linefeed of the println will be omitted.
Alternatives
It's always a good advise to look for alternatives. So google for Arduino HT16K33 libraries and have a look on some alternatives.
Like for many other modules - Adafruit - is always a good starting point for an Arduino Library. My first HT16K33 HW module is an original Adafruit 0.54" backpack. Lot of ideas came from their library. Adafruit does an amazing job providing us with libraries, support them and buy hardware from them! If I haven't written my own library, Adafruits library would be my first choice.
Another alternative comes from Arduino forum user HKJ-lygte. He provides a generic LED display driver "library" and the HT16K33 is just one out of many supported ICs. I liked his idea to split the hardware definition of each LED segment from the character set / font table. But there are a lot of precompiler #defines and I prefer to use the Arduino "standard" I2C library "wire.h" for my projects. His homepage is worth a visit because he has a very comprehensive collection of available LED displays.
RAM and Flash Usage
I did some comparisions with other libraries. The print class will cost some extra RAM and flash. Nevertheless it will help you saving a lot more RAM and flash memory the more you will print to your display. Mostly you will use the Serial.print anyway in your sketch - so lot of code of the stream library needs already compiled to your sketch and the new HT16K33 library is just reusing this code also. If you do your comparisions - just don't give up after the first line. Try a more complex example with lot of text and see, how easy it becomes with the Noiasca HT16K33 Library.
The example "12 Sevenseg" is a port of the "Adafruit LED Backpack" sevenseg example. The codes should do (nearly) the same, but you see already some differences in flash and RAM resources:
Both compiled for Arduino UNO on 1.9.2019, today's figures could differ. Other sketches could bring up other results.
HT16K33 Library Examples
The HT16K33 library comes with lot of examples, here are some of them:
If you have any ideas or a specific usecase, just come up with the idea you want do get added!
Installation and usage of the Noiasca HT16K33 Library
Download the ZIP (at the end of this page) and unzip it to your libraries folder. It might be necessary that you restart your Arduino IDE before you can use the library.
There is an additional example sketch called 11_HelloWorld_14segment. It shows all available methods and some special things you should know about the library.
a) Include the new library into your sketch with
#include <NoiascaHt16k33.h>
b) the library files
In general, there is no need to change anything in the .h/.cpp files of the library. All hardware relevant settings can be done in your sketch. Just use the proper constructer, the right parameters for the begin method or one of the setters for your display object. I don't like libraries when they force one to "manipulate" properties in any file. This is something I learned from Marco the author of the MD_MAX7219 library. Library files should be stable between all sketches they are included to. "Modifying" the library files might effect other sketches and therefore these modifications should be avoided.
There are only two exception to this rule:
- If you want to use a different font for your display, you can chose another font in the file src/NoiascaHt16k33.h around row 85.
- If you like to have debug messages from the library, this is can be activated - as in most libraries - in the .h header file.
c) see the examples
I've added some examples to make it easier to understand the functionality of the library. The first sketch should always be some "strandtest" to find out if your hardware works with the new software.
Important Methods
Here are some of the most important methods in the Noiasca HT16K33 library:
uint8_t begin(uint8_t i2c_addr, uint8_t numDevices = 1);
Defines the (start) I2C address of your display and the number of used devices. If your display consists of more than one device, the other addresses must be ascending (e.g. 0x70 0x71 0x72). If you only use one device, you can omit the optional numDevice parameter as it gets defaulted to 1 anyway if not set. The begin method returns 0 if the initialization was successfull.
void blinkRate(uint8_t b);
Takes the constants HT16K33_BLINK_OFF, HT16K33_BLINK_2HZ, HT16K33_BLINK_1HZ or HT16K33_BLINK_HALFHZ and sets the blink rate of the display. As limited by the hardware all digits of your display will blink.
void off(void);
Switch of the display.
void on(void);
Switch display on (after it was switched off).
void clear(void);
Clear the display (and the library displaybuffer if used in future).
bool isConnected(void);
Returns true if the display is connected. If you are using multiple ICs as one logical display - all I2C addresses will be checked.
void setBrightness(uint8_t b);
Sets the brightness of the display, takes a value from 0 (lowest) to 15 (brightest).
void setCursor(uint8_t newPosition);
Set the cursor for the next writing operation to the definied new position
void setDigits(uint8_t newDigits);
Set the number of digits per device. For usual the begin method sets the digits based on the used constructor. If you want to use some custom built displays you can limit the amout of used digits to your needs.
void writeLowLevel(uint8_t position, uint16_t bitmask);
Sends the provided bitmask directly to the given position.
The used methods are based on the Arduino LCD API 1.0 but only methods which make sense for a LED display are used.
Due to the inheritance of the print class all write, print and println variants are supported like you are used to from other Arduino libraries.
Supported Hardware
currently the library supports following hardware modules
Adafruit 0.56" 4-Digit 7-Segment Display
For the Adafruit 0.56" 4-Digit 7-Segment Display w/I2C Backpack use the Noiasca_ht16k33_hw_7_4_c constructor.
This display uses digit 0 and 1 on the left (hour), digit 2 is used for the colon (to blink in second rhythm) and digit 3 and 4 are used on the right side (minute). The library will care about this odd display layout: To show 0123 on the display, just send this command:
display.print("0123");
To activate the colon use:
display.showColon(true);
Adafruit
Quad Alphanumeric Display (14-Segment)
For the Adafruit Quad Alphanumeric Display - 0.54" Digits w/ I2C Backpack use the Noiasca_ht16k33_hw_14_4 constructor. It just limits the display to 4 digits. As alternative, you can use the standard 8 digit constructor and limit the digits in your setup().
Noiasca_ht16k33_hw_14 display = Noiasca_ht16k33_hw_14() setup() { Wire.begin(); display.begin(0x70); display.setDigits(4);
WtihK 14 Segment Display
The "HT16K33 AlphaNumeric 0.54" 8-Digit 14 Segment LED I2C Interface" displays from WtihK are simple to use:
Noiasca_ht16k33_hw_14 display = Noiasca_ht16k33_hw_14() setup() { Wire.begin(); display.begin(0x70);
Caveat
So, are there any reasons not to use Noiasca HT16K33? The library will have no dedicated support for dot-matrix displays. For dot-Matrix I still use the MAX7219 and I have no plan to migrate to the HT16K33. Also writing of individual segments is not in the focus of Noiasca HT16K33, even if you can write a raw bitmask to a digit.
If you want to format numbers, use either sprintf or do it manually. You will find some ideas in the examples.
History / Version log
1.1.2 Release Candidate 2022-02-28 1.1.1 Release Canditate 2021-11-27 2021-11-27 corrected TWCR to 328/2560 only 1.1.0 Release Candidate 2021-03-14 2021-03-14 extract chartables in separate font files 1.0.2 Release Candidate 2020-06-07 2020-06-05 on(), off() 2020-06-04 Wire.begin() must be called in the user sketch. A simple hook to catch on AVR platform is implemented 2020-09-19 fix for ESP8266
Migration from another Library to Noiasca HT16K33
To migrate from another library to Noiasca HT16K33 you have to check each "output". But this step is pretty easy: treat your display just like you would print to the serial monitor or to a LCD.
|
https://werner.rothschopf.net/201909_arduino_ht16k33.htm
|
CC-MAIN-2022-40
|
refinedweb
| 2,371
| 73.27
|
I have a system script i am calling from a Template. i have a parameter in the Tempalate that contains some path information the script needs. this will be set differently everytime the template is used.
I can not figure out how to pass the parameter value in the runScript call? if i just hard code it as a string it works fine. But as soon as i use the parameter, even though it has the same value as the string, i get a syntax error.
Thanks for any help.
I have a system script i am calling from a Template. i have a parameter in the Tempalate that contains some path information the script needs. this will be set differently everytime the template is used.
What is the datatype of the parameter that you are passing to the script? It it coming from a UDT?
No, not from a UDT ( I dont think).
it is just a string value entered in a custom property parameter. All i am doing is trying to determine is different sub systems have active alarms by using the system.alert.queryAlertStatus().
Here is the Module Defiition:
def SystemActiveAlerts(System,UnAck, Ack ):
import system
return system.alert.queryAlertStatus(path=System, activeAndUnacked=UnAck, activeAndAcked=Ack).rowCount
Here is my call, which works, where ‘CA3/PCW/’ is a path filter:
runScript("app.alert.SystemActiveAlerts('CA3/PCW/’, 1, 1)", 5000)
Here is the call that does not work substituting the string above with the same string in a custom property, where Status.System containes CA3/PCW/*:
runScript(“app.alert.SystemActiveAlerts({Status.System}, 1, 1)”, 5000)
This results in a Syntax error.
Thanks,
I think your call is literally passing in the text “{Status.System}” as the first parameter.
Does this work for you?
runScript("app.alert.SystemActiveAlerts(" + toStr({Status.System}) + ", 1, 1)", 5000)
Dan
no luck.
with it in quotes like that seems like it is literally sending + toStr({Status.System}) +, tried without the quotes and the + and still no joy.
Thanks,
In what scope is this expression being run from, an expression tag? Property binding?
Also, what component is the custom property part of? A root container, or a component within the container?
Not sure i fully understand the question, but will try.
This is on a template.
I have an internal Property defined as an integer called “Acitve”
I have a Template Parameter defined as a String called “System”
Active is linked to an Expresion, the expresion equals:
runScript(“app.alert.SystemActiveAlerts({Status.System}, 1, 1)”, 5000)
System has a string entered into it of:
CA3/PCW/*
Have also tried:
‘CA3/PCW/*’
both result in a syntax error.
If i replace {Status.System}, wtih ‘CA3/PCW/*’ in the expresion, the expresion then works.
Thanks,
Oops forgot, both parameters (System and Active) are defined on the root container.
Ok, thanks for clearing that up, I think I was getting something confused.
Try using concat():
runScript(concat("app.alert.SystemActiveAlerts('",{Status.System},"', 1, 1)"),5000)
The single quotes around your string that your sending to SystemActiveAlerts should fix the syntax error, if not, can you post the full syntax error? Thanks.
That did it!
Now all i have to do, is read up on Concat so I can figure out why.
Thanks for the help.
Using the concatentation opeators (+) should’ve worked too, it sounds like those single quotes were the missing piece. Glad you got it working.
|
https://forum.inductiveautomation.com/t/pass-parameter-in-runscript/5529
|
CC-MAIN-2021-39
|
refinedweb
| 570
| 68.16
|
There is nothing that drives me crazier than unstructured data on a file server. If there is an application that we can blame, that helps. If the data is a smattering of personal and workgroup data, self-censorship starts to kick in. Of course we can do all of the standard remediation attempts to consolidate file servers and put concise group membership with appropriate permissions.
One option that exists to administrators now is to leverage cloud storage for the primary instance of file servers. One solution is the Nasuni Filer to replace NAS servers that function as file servers in your organization. While I'm quite familiar with a number of cloud storage solutions, I have long thought what will make a "cloud" solution very relevant to the mainstream organization is a turnkey solution.
The Nasuni Filer is very simple in that it is distributed as an open virtual format (OVF) virtual machine. The virtual machine is then assigned a local cache on storage resources on-premise. The local cache is a nominal storage allocation, 500 GB for example, that is the most commonly accessed data in the file server's namespace. The rest of the data is in a storage cloud with the ingress and egress traffic managed by the Nasuni Filer. The Nasuni Filer is also smart in that you can have the data reside in the Amazon Simple Storage Service (S3) cloud, Nirvanix Storage Delivery Network (SDN), Iron Mountain Archive Services Platform (ASP), or Rackspace Cloud Files.The Nasuni Filer does a few things that make its architecture attractive. First of all, the OVF deployment is attractive as any administrator with a virtualized infrastructure can do so quickly. The second thing I really like is that the filer shows up on your local network to be managed in Microsoft Active Directory for full permission and share management through familiar interfaces. Figure A below shows the Filer's architecture: Figure A
The Nasuni Filer also starts to get smart with the data before it uploads it to the cloud. Realizing that the transfer bandwidth is the most sensitive link in a cloud-based storage solution, Nasuni performs four critical processes on data before it is uploaded to the cloud. These are: chunking into blocks, de-duplication, compression, and then encryption. The data is protected with OpenPGP AES-256 bit encryption. Figure B shows this pre-transfer process:
Figure B
Click image to enlarge
Click image to enlarge
Nasuni was introduced to me by one of my colleagues, Greg Knieriemen, who produces the popular Infosmack podcast. Episode 52 features Andres Rodriguez, CEO and founder of Nasuni. I highly recommend that you check out this episode not just for this solution, but how cloud storage has evolved out of necessity as well as what it can and cannot do.
I am always trying to find inroads to a cloud solution when the conditions can be right. With cloud storage as easy as being a CIFS endpoint within your Active Directory domain on your network, does it.
|
https://www.techrepublic.com/blog/data-center/cloud-storage-for-primary-storage-as-an-alternative-to-file-servers/
|
CC-MAIN-2017-51
|
refinedweb
| 506
| 58.92
|
How to Integrate Selenium Grid With Docker
How to Integrate Selenium Grid With Docker
It is very easy to integrate Selenium Grid with Docker. Here's how.
Join the DZone community and get the full member experience.Join For Free
You must have studied about Selenium Grid. Selenium Grid is a component of Selenium used for parallel testing across different browsers and machines. You can have your system as a hub that will distribute the test cases across various other machines. Other machines on which the test will be distributed are called nodes. A hub is a machine on which the tests are executed but they run on different nodes. This doesn’t overload the machine and also, if you have less time, then you can go for parallel testing using different nodes. Docker is a container used to store databases, libraries, and dependencies. These will be used to built or run any application.
Why Docker With Selenium Grid?
If you want a specific configuration of a browser, you to have to test the tests cases on Firefox v46+, but if you have a limitation, then you can’t install that version in your system. Using 46 versions in your system can disrupt other applications. So, in this case, it is better to go for a node with Firefox v46+ so that you can easily run your test cases. You can easily save time performing parallel testing.
Docker is a container that stores libraries and dependencies. If you want your Selenium Grid Server to be up and running, you have to download a Selenium JAR file in every hub and node. You have it, then fire the command in the hub to get the server up. From that, you can get the IP address. In the nodes, you can get the server up by adding the IP of the hub and port number. This process is very time-consuming and takes a lot of manual effort. Docker will help you do all the processes in a single go.
Prerequisites for Docker Setup in Your Machine
Now, we will be integrating the Selenium Grid with Docker, but before doing that, you should have some prerequisites to installed on your machine:
JDK v1.7+ on your machine
Chrome and Firefox browser installed
Selenium Web Driver JARs
Test cases designed in Selenium Web Driver and TestNg
In TestNG.xml, the parallel flag should be true so that parallel testing is possible.
Steps to Download Docker
Now, you have all the prerequisites to install Docker in your machine. Now, the main step is to download Docker. Let’s look at the steps.
- Download Docker.
- It will pop up the installation window and there you have to click on all the checkboxes.
- Click on every Next button during installation.
- After the installation is finished, you will get three Docker options: Virtual Box, Docker Quickstart, and Kitematic.
- Open the Docker quickstart terminal to get a unique IP for your machine.
- After doing that, you will get the message “Start interactive shell.”
- After getting the above message, Docker is installed properly and you can start using it.
Now, to integrate Docker with Selenium Grid, you have to install hubs and nodes. You can start using hubs and nodes from Docker only but for the first instance, you have to configure them into the Docker container. For running tests through docker, you have to get some Docker images that will allow tests to be executed from the Docker container.
Images to be downloaded for Selenium Grid in Docker container:
Selenium hub image
Selenium node-firefox image
Selenium node-chrome image
Selenium node-firefox-debug image
Selenium node-chrome-debug image
Let’s look at the ways you can download the images. You can go to the Docker repository and search for these images in the search box one by one but you'll get countless results. You have to select an image with a maximum number of pulls (i.e. an image downloaded the maximum number of times). Click on that image and then you will get a Docker command. Copy and save it.
Let’s have a look at one example.
docker pull selenium/hub
Now, you have to come back to the quikstart terminal. Open that and put the command in it. An image will be downloaded in your docker machine. You must allow some times as downloading images takes some time. After the image installation is done, you will get a success message. Also, don’t try to run two image installations at the same time, as this would make the installation very slow. Try to go one by one.
Other commands are:
docker pull selenium/node-firefox
docker pull selenium/node-firefox
docker pull selenium/node-firefox-debug
docker pull selenium/node-chrome-debug
Start Using Selenium Grid and Docker
The first thing you generally do is make the Selenium server up and running on the hub from the Docker. Open the quick start terminal and type the below command.
docker run -d -p 4444:4444 –name selenium-hub selenium/hub
Once you hit this command, the hub server will be up and running. You can verify whether it is up by going to url in your browser:. You can even change the IP address of the Selenium Grid if your Docker is installed with a different IP address.
Now, you have made the hub running. You have to then get the nodes up and running and then connect the nodes with the hub. We have already saved the names of the nodes in Docker by the names Chrome and Firefox. Run the below command to run a Chrome node.
docker run -d –link selenium-hub:hub selenium/node-chrome
Run the below command to run a Firefox node:
docker run -d –link selenium-hub:hub selenium/node-firefox
Now, again go to the URL on the hub machine. You will see two nodes connected with the hub. Now, you have to find the port numbers on which your nodes are running. Run the bellow command in the quick start terminal:
docker ps -a
You have to now use VNC viewer. First, download it from the internet and do all the installation steps. Once it is downloaded and installed, you can run it in your machine. You have to then type the hub url in it and then the port number of your chrome node. Click Connect on the pop-up. If it asks for a password, put “secret”.
You can do the same for the Firefox node. Now, you can test by writing a test script so that browsers will open on these nodes and we can see whether parallel testing is being successfully performed.
Let’s see a sample test script:
public class GridTest { @Test public void seleniumGridTest() throws MalformedURLException { DesiredCapabilities; if (browserType.equals("firefox")) { .firefox(); cap.setBrowserName("firefox"); cap.setPlatform(Platform.WINDOWS); } else { .internetExplorer(); cap.setBrowserName("iexplore"); cap.setPlatform(Platform.WINDOWS); } RemoteWebDriver RemoteWebDriver(new URL(http: //localhost:4444/wd/hub), cap); driver.navigate().to(""); driver.findElement(By.xpath("//input[@]")).sendKeys("username"); driver.findElement(By.xpath("//input[@]")).sendKeys("password"); driver.close(); }
Also, you have to set the parallel flag in a TestNG.xml file. You can do it by writing the parallel="tests" tag in the suite name. When you run the test, you will see the tests being executed on both machines. You can analyze the behavior of the application at the same time.
A video on Selenium Grid with Docker by Docker Houston Meetup:
Conclusion
So, you have read how to integrate Selenium Grid with Docker. It is very easy. You can run your tests and perform automated software testing on different machines and operating systems. Just follow the above steps and save time. All the best!
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }}
|
https://dzone.com/articles/how-to-integrate-selenium-grid-with-docker
|
CC-MAIN-2019-51
|
refinedweb
| 1,317
| 64.81
|
0
Hi all, I'm new to java and here is my code. It runs, but the output is not what I wanted. This project allows the user to enter up to 6 lottery tickets and I have to generate random numbers 0-49. The number generate fine, but the number of tickets entered doesn't. If i enter two tickets, It'll still print out six. It should print out the number of tickets based on user input. Please help. Thank you. My teacher does not allow us to use arrays for this project. For extra credit we can use 2d arrays, but no arraylists. How would I rewrite this code just using 2d arrays?
import java.util.Collections; import java.util.ArrayList; import java.util.Scanner; public class LotteryTest { public static void main(String[] args){ Scanner input = new Scanner(System.in); { ArrayList<Integer> numbers = new ArrayList<Integer>(); System.out.println("Please enter the number of tickets you want"); int numtix = input.nextInt(); for(int i = 0; i <= 49; i++){ //loop to choose numbers.add(i+0); } for (int i=0; i<6; i++) { //loop for # of lottery tickets Collections.shuffle(numbers); //randomly shuffle elements of arraylist System.out.print("\nLotto Ticket #"+(numtix)+": "); for (int j=0; j<6; j++) { // determine how many numbers for each ticket System.out.print(numbers.get(j) + " "); } } } } }
Edited by ippo: add more
|
https://www.daniweb.com/programming/software-development/threads/422890/lottery-arraylist-problem
|
CC-MAIN-2017-09
|
refinedweb
| 229
| 61.22
|
When I load or reload a datagrid with thousands records, it took a little while. I want to show user a busy sign and the sign will disappeared after the loading finish. How do I do?
Thank you in advance!
import mx.managers.CursorManager;
//in your getData function
CursorManager.setBusyCursor();
//in your getDataResultHandler function
CursorManager.removeBusyCursor();
Thank you very much for your help.
How can I make the sign bigger?
How to make the stop watch bigger you mean?
I don't think that's possible, but what you can do is create your own custom timer if you're
artistically gifted.
See CursorManager
|
https://forums.adobe.com/thread/631251
|
CC-MAIN-2018-30
|
refinedweb
| 104
| 61.63
|
in reply to
Registering Subclass Modules via Plugin
I think a factory is for when the caller knows what they want. Your case is different.
I find out which subclasses are available by scanning all or part of the @INC path list for modules in a given namespace. I do this once, as the app starts. Note that big @INC scans can be time prohibitive. In that case, I look to see which directory my parent module lives in by checking %INC, then look there for my children, making the big assumption that they get installed to the same place as the parent. It works for Bigtop's tentmaker.
|
http://www.perlmonks.org/index.pl?node_id=650832
|
CC-MAIN-2015-48
|
refinedweb
| 109
| 81.22
|
Create Queues and Queue Subscriptions for SAP Event Mesh
- How to manage queues, topics and queue subscriptions using the SAP Event Mesh management dashboard.
Prerequisites
- An instance of SAP Event Mesh has already been created
- User has been assigned with role collection “Enterprise Messaging Developer” Assign Roles to Users
Queues and queue subscriptions are the core of asynchronous messaging. Messages are retained in queues until they are consumed by subscribing applications.
The SAP Event Mesh management dashboard for the default service plan is provided as a multitenant business application. Subscription can be set up only by administrators of the global account.
- Step 1
You need to subscribe to SAP Event Mesh in order to access its management dashboard.
To subscribe to SAP Event Mesh
- Open your global account, then subaccount.
- Choose Instances and Subscriptions in the left pane.
- Choose Create.
- Choose Event Mesh and standard plan.
- Choose Create.
- Step 2
For default Plan :
Open the SAP BTP Cockpit.
Click on the Subscriptions menu.
Click on Go to Application.
It opens the SAP Eventing Mesh management dashboard screen. The management dashboard allows you to manage different messaging clients as shown below.
Select the message client.
- It will open the SAP Event Mesh Management Dashboard screen Overview tab.
For dev Plan :
You need to click on View Dashboard to open the dashboard to manage queue or queue subscription as shown in the below screen.
- Step 3
On the Management Dashboard, you can create a queue to work with SAP Event Mesh.
Queues enable point-to-point communication between two applications. An application can subscribe to a queue.
To create a queue, click on Create Queue.
Enter the name of the queue. For example, queue
salesorder.
The name of the queue has to follow the pattern you specified in the (JSON) descriptor when you created the SAP Event Mesh service instance. Choose the View Rules tab, to see the rules that must be followed when you enter the queue name. As shown in the screenshot below, the View Rules tab provides the following information for the instance:
List of rules.
Type to which the rule belongs.
Permissions defined for the rule.
In your example, you need to follow following pattern.
After the queue has been created, the queue name is appended to the namespace and is displayed on the UI.
On the Queues tab, you can view
List of queues.
Number of messages in each queue.
Size (in kilobytes) of all the messages in each queue.
For the
salesorderqueue you’ve created in the example, the values are displayed as below:
If you want to delete the
salesorderqueue, you can delete the queue using delete a queue icon.
CAUTION: Deleting a queue also deletes any associated queue subscriptions and any messages that are in the queue.
What is true when deleting a queue?
- Step 4
Service enables a sending application to publish messages and events to a topic. Applications must be subscribed to that topic, and be active when the message is sent. Topics do not retain messages.
Create a queue subscription if you want to retain messages that are sent to a topic.
In this example, subscribe queue
salesorderto the
s4hanasalesordertopic.
Click on the Show subscriptions for this queue icon under Actions.
The Queue Subscription screen is displayed. Create the
s4hanasalesordertopics.
Below screen displays how to create a topic and once a topic is created how is it displayed.
This queue can only subscribe to topics that follow the rules defined in the service descriptor
An external source can publish messages to the topic. Messages can only be consumed from the queue.
Multiple queues can subscribe to one topic. In this case, the same message is published to all the subscribed queues.
One queue also can be associated with multiple topics. In this case, any message that is published to all these topics is stored in the subscribed queue.
You can delete a topic using the delete icon.
Which of the following options are correct?
|
https://developers.sap.com/tutorials/cp-enterprisemessaging-queue-queuesubscription.html
|
CC-MAIN-2022-40
|
refinedweb
| 663
| 67.25
|
>>>>> On Wed, 31 Mar 2004 22:22:24 -0600, Steve Cohen <scohen@javactivity.org>
said:
> Wow, that's nasty. Or maybe not. It sounds like there is a default
> admin setting you can make and then the site DIRSTYLE command
> toggles the feature on and off for the duration of the connection,
> right?
Yes, it only effects your connection. Since each time you call 'site
dirstyle' it toggles, in the original code snippet I was calling 'site
dirstyle' twice, just to determine what the current setting is, *not*
change it ( although it is changed for a short while ). You are
correct and the administrator can have a site default to either Unix
or MSDOS style listings. We are currently using this method and
haven't had trouble.
> I see several ways we could go.
> 1. Just leave it as it is. (safest?)
> 2. try your test (if SYST returns "Windows") and then pick the
> appropriate parser.
> 3. If SYST returns Windows, run the site DIRSTYLE command once
> or twice as needed to force the session into UNIX mode and
use the Unix parser.
> 4. If SYST returns Windows, run the site DIRSTYLE command once or
> twice as needed to force the session into Windows mode and use
> the NT parser.
I vote 2.
> I am assuming that calling the DIRSTYLE command is only for that
> session, right? Do all versions of NT (4.0, 2K, XP, etc.) support
> this command?
I've seen this dirstyle setting on all Microsoft Windows FTP servers
I've come across so far.
> What is your recommendation? I think we should take your
> recommendation here as you seem to have the most knowledge. It
> sounds like we could get such a change in and not delay the release
> if you are willing to test this in action. Does that make sense -
> or should we just leave it alone for now.
I think we ( [net] ) shouldn't switch the users current dirstyle on
them. Lets use this method to determine what the current dirstyle is
and then give the user the correct parser. We never want the inquiry
to blow up, so we should default to NT's Parser. I'll test and/or
write up an additional test for our functional suite.
> What I am trying to avoid is causing trouble for our poor user last
> week who just "solved" the problem using our earlier version by
> switching to unix mode on his server, and will now be broken by our
> fix.
We ( at work ) are currently using 'site dirstyle' in some cases to
programmatically force the site to unix listings ( your solution 3 )
in order to use the original list parsers ( yea, we still have some
old stuff using the old parsers! ).
> On Wednesday 31 March 2004 1:05 pm, Jeffrey D. Brekke wrote:
>> The only way we've found is to use the output of the SITE DIRSTYLE
>> command itself, calling it twice:
>>
ftp> site DIRSTYLE
>> 200 MSDOS-like directory output is off
ftp> site DIRSTYLE
>> 200 MSDOS-like directory output is on
>>
>> Then parse that last line:
>>
>> /** * Method isMSDOSDirstyle. * @return String */ public boolean
>> isMSDOSDirstyle() { boolean retVal = false; try { if
>> (sendSiteCommand("DIRSTYLE")) { sendSiteCommand("DIRSTYLE"); retVal
>> = (getReplyString().indexOf("on") > 0);
>> }
>> }
>> catch (IOException e) { retVal = false;
>> }
>> return (retVal);
>> }
>>
>> That might work nicely, until the output of site DIRSTYLE changes
>> ;)
>>
>> >>>>> On Tue, 30 Mar 2004 20:05:33 -0600, Steve Cohen >>>>>
>> <scohen@javactivity.org> said:
>> >
>> > Maybe we're wrong to rely exclusively on the SYST command. Jeff,
>> > since you evidently have admin access to an NT FTP server, maybe
>> you > can find a quick way to distinguish between an NT server in
>> Unix > DIRSTYLE mode and one in NT mode. LIke maybe the header
>> information > uses different text or something.
>> >
>> > Maybe we could have something like this:
>> >
>> > if "Windows" == syst() result { // do a list command just to look
>> at > the header if header is unix style return unix else return
>> windows.
>> > }
>> >
>> > If not, we can always go the FAQ route.
>> >
>> > Steve
>> >
>> > On Monday 29 March 2004 12:08 pm, Jeffrey D. Brekke wrote:
>> >> Here on a Windows2003 server toggling the DIRSTYLE doesn't
>> change >> the output of the SYS command. It still reports Windows
>> NT 5.0, so >> if the MSDOS dirstyle is off, then the wrong parser
>> will get >> autoselected. I guess we could put this in the FAQ or
>> something >> with some examples maybe.
>> >>
>> >> >>>>> On Mon, 29 Mar 2004 06:47:25 -0600, Steve Cohen >>>>>
>> >>
>> >> <scohen@javactivity.org> said: >> > I think you may be referring
>> to something said by a user last
>> >>
>> >> week, > in which he "solved" the problem by configuring his NT
>> FTP >> server to > use the "Unix display format". I didn't know
>> before >> that that was > an option. What I still don't know is if
>> one takes >> that option, > does the SYST command return "Windows"
>> or "Unix"? >> If it's the > former, then we have a problem. It's
>> not an >> unsolvable problem > because you can always use the form
>> of >> listFiles() that takes a > parser class name or a different
>> SYST >> value (e.g. listFiles("UNIX") > ) as a parameter, although
>> you >> can't at present do this from Ant. > (But adding this
>> capability >> to Ant is what I'm striving towards.)
>> >>
>> >> > A little investigation would be good here. But we need to
>> >>
>> >> remember > the point of autodetection. It's not foolproof,
>> can't >> be foolproof, > since it depends on SYST identification
>> which is >> not necessarily > cast in stone. It is an attempt to
>> raise the >> default success rate > of using listFiles() out of the
>> box from >> maybe 90% to 98%, by > autodetecting other cases, the
>> most common >> of which is Windows but > also OS2, VMS, OS400, etc.
>> So while we >> need to strive to become as > good as possible, I
>> don't think we'll >> ever hit 100%. FTP is too > loosely specced
>> for that to happen.
>> >>
>> >> > Default falling back to unix if other methods fail would
>> involve
>> >>
>> >> a > much more complex mechanism in which the program would have
>> to
>> >>
>> >> > decide ("this isn't working") and try something else. While I
>> >
>> >>
>> >> wouldn't totally rule that out, I don't at present feel there is
>> > >> enough solid information to justify that effort.
>> >>
>> >> > On Sunday 28 March 2004 11:59 pm, Mario Ivankovits wrote:
>> >> >> +1
>> >> >>
>> >> >> ?
>> >>
>> >> >> --
|
http://mail-archives.apache.org/mod_mbox/commons-dev/200404.mbox/%3C85fzbnwzb8.fsf@firefoot.brekke.org%3E
|
CC-MAIN-2016-07
|
refinedweb
| 1,045
| 71.55
|
Eli Bendersky's website - Book reviews of reading: January - March 20192019-03-30T06:10:00-07:002019-03-30T06:10:00-07:00Eli Benderskytag:eli.thegreenplace.net,2019-03-30:/2019/summary-of-reading-january-march-2019/ …</li></ul>" by Tara Westover - the author grew up in a survivalist Mormon family in Idaho, with zero education until late teens, and parents who refused any formal contact with the establishment (no birth certificates, no medical care, etc). This is her autobiographic account, focusing on her quest for formal education and her complicated, corrosive relationship with her family. A rather disturbing book in many aspects, but very good writing.</li> <li>"Docker Deep Dive" by Nigel Poulton - an overview of using Docker, detailing many advanced features in addition to the simple things. While I think this is a good reference book, I'm less sure about its value as a tutorial. Similarly to my complaints about the author's Kubernetes book, too much is spent on the how and too little on the why. This book feels like a certification exam preparation text (which it is, to some degree). Since Docker as a technology is not hard to understand this is less of a problem for this book, but still something that could be improved.</li> <li>"Designing Data-Intensive Applications" by Martin Kleppmann - an extensive overview of modern data-processing and distributed systems - databases, stream processing, distributed locking/concensus, and so on. Hard to do this book justice in a single reading - I'm pretty sure its main utility is as a reference. It's a truly massive book, and quite a test to read cover to cover. As opposed to most programming books it doesn't have many diagrams, and almost no code snippets - it's all walls of text, page after page. The author has spent 4 years researching and writing the book, and it shows. Overall I thought it's really good and am looking forward to using it as a reference. Much of the distributed systems literature is dated, and this book serves as an excellent bridge from theory (textbooks, Lamport papers, and so on) to modern applications.</li> <li>"Your Money or Your Life" by Vicki Robin, Joe Dominguez - the classic bible of the FI/RE (financial independence / retire early movement), originally published in 1992 and revised in 2008 with some more modern material. The basic premise is "strictly track your earnings / spending, minimize spending through frugality to get out of debt / earn enough to live off interest, investments as early retirement". It was probably easier to achieve when the book was originally written and treasury bonds had double digit % returns :-) It's interesting to see where the modern FI/RE movement came from, but otherwise I didn't find much new in this book. Even if you don't really intend to retire early, the ideas in the book are interesting. I'd think it would be very useful to read for folks who feel they don't have a good control of their finances, and if they haven't heard of these ideas before the book is a reasonably good introduction.</li> <li>"Comet" by Carl Sagan and Ann Druyan - everything you wanted to know (and probably more than you imagined there is to know) about comets - their history, properties, future. Great Carl Sagan writing as usual, though a bit outdated now, having been written in 1985. I wish Sagan would live to see all the recent advances in astronomy and space exploration - the comet landing mission for example (Rosetta).</li> <li>"Dismantling America" by Thomas Sowell - a collection of Sowell's essays on various issues in the USA circa 2008-2010. Very good writing as usual - it's interesting how I agree with Sowell on many financial topics, but not so much on the topics of human rights. Sowell seems to be taking a Republican view with a more radicalized laissez-faire bend in economics. Many of the essays are repetitive, and some of them are not really on topics that are interesting outside their immediate time frame (for example the Duke Lacrosse team rape trial, to which several chapters are dedicated). Overall an interesting read, no matter which side of the political divide you are.</li> <li>"The Alchemist" by Paulo Coelho - a re-read, but the first time I read this book was before 2003 so there's no review on record. I think I'm completely missing the point of this book - it's so highly acclaimed, yet I totally fail to get all the mysticism, spirituality and applicable life lessons here. Didn't enjoy it at all.</li> <li>"Why We Sleep: Unlocking the Power of Sleep and Dreams" by Matthew Walker - this book will certainly give serious food for thought to many people. It tries to explain why the natural sleep period of 7-9 hours is critical for health and mental development in humans. The author makes a convincing case, though I found a book a bit too preachy at times. It reminds me of all the other books talking about X and presenting X to be the source of all good, while lack of X is the source of all evil. Moreover, it's disappointing how little we <em>really</em> know about sleep - most of the cited research results are circumstantial evidence at best; this is not much different from other medical research, unfortunately. In any case, these shouldn't detract from the value of reading this book. Especially if you believe you "don't need much sleep", it's a very important read.</li> <li>"How to Be a Stoic: Using Ancient Philosophy to Live a Modern Life" by Massimo Pigliucci - another modern take on stoicism. The book is not too bad overall, but I wouldn't recommend it unless you're really into the ancient sources - especially the writings of Epictetus. For a guide to stoicism, Irvine's book is much superior to this one.</li> <li>"Seven Databases in Seven Weeks" by Perkins, Redmond and Wilson - a quick comparison and overview of PostgreSQL, MongoDB, CouchDB, Redis, Neo4J, HBase and DynamoDB. The book's subtitle is "A guide to modern databases and the NoSQL movement", and one of the things it tries to uncover is how the new "NoSQL" databases are different from established RDBMSs like Postges. A nice book overall, though IMHO it lacks depth. About 90% of it is spent on simple tutorial-level introductions to the databases, and the examples are mostly toys. I suppose it's understandable since the differences between such tools are rarely easy to define very accurately.</li> <li>"Digital Minimalism" by Cal Newport - a manifesto of reducing social media consumption and other modern distractions. Although the book is chatty, it's definitely useful - especially for people who spend a lot of time daily consuming information on their phones.</li> <li>"Walden, or Life in the Woods" by Henry David Thoreau - a classic of American literature published in 1854. The main theme of the book is praise of "simple living", one of the earliest insiprations of modern frugal living movements. Thoreau describes the several years he spent living in a self-constructed wooden cabin next to the Walden pond (on the outskirts of Concord, MA). As usual for me, it's not very easy to read books from so far back - the writing style is very unfamiliar. It's part autobiography, part economics, part essays on human nature and the beauty of the natural world.</li> <li>"Get Well Soon: History’s Worst Plagues and the Heroes Who Fought Them" by Jennifer Wright - a quick and enjoyable read, despite the grim topic. The writing is very good, but a bit too fluffy with a lot of detours to unrelated stuff. It's almost as if there wasn't that much to write about the plagues themselves - which is a good sign, I guess? Seriously though, it would be better to get some more medical details about the diseases and their cures, and less background historical information.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"The Righteous Mind" by Jonathan Haidt</li> <li>"The Price of Privelege" by Madeline Levine</li> <li>"The Pastures of Heaven" by John Steinbeck</li> </ul> Summary of reading: October - December 20182019-01-09T18:29:00-08:002019-01-09T18:29:00-08:00Eli Benderskytag:eli.thegreenplace.net,2019-01-09:/2019/summary-of-reading-october-december-2018/ …</li></ul>.</li> <li>.</li> <li>.</li> <li>.</li> <li> <em>some</em> mathematically and linguistically inclined women managed to get to universities before the war, even though it was largely discouraged.</li> <li>.</li> <li>"Darwin" by Adrian Desmond and James Moore - an extremely thorough biography by Charles Darwin. Very long and dense book, but the writing is good so it's not too taxing to plow through.</li> <li>.</li> <li>"Naked Money" by Charles Wheelan - a really good explanation of how money works, with in-depth discussions of inflation/deflation, the 2008 crisis, Japan, Euro and the U.S.-China trade situation.</li> <li>.</li> <li>; <em>why</em>.</li> <li>.</li> <li>.</li> <li>.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"Crypto" by Steven Levy</li> </ul> Summary of reading: July - September 20182018-09-30T05:40:00-07:002018-09-30T05:40:00-07:00Eli Benderskytag:eli.thegreenplace.net,2018-09-30:/2018/summary-of-reading-july-september-2018/ …</li></ul> religion, in particular Judaism. Though a bit tedious at times, it's a very well-written book and it's obvious a lot of careful research went into it. The author touches upon some core themes of Jewish identity and the internal tensions in Israel (most of which are true to this day) in a manner that's extremely impressive for an outsider.</li> <li>"The Wizard and the Prophet" by Charles C. Mann - an interesting approach to the challenges of progress of technological society. Presented through two competing views - the technology-solves-everything view (wizards), exemplified by Norman Borlaug who did ground-breaking research on developing higher yield grain crops, and the human-consumption-must-decrease view (prophets), exemplified by William Vogt, a prominent environmentalist of the early 20th century. The book is a mix of the biography of the two scientists and general discussion of environmental issues such as energy, genetically modified crops, etc.</li> <li>"Factfulness" by Hans Rosling - forget about "third world" vs. "the west". Reading this book is a humbling experience and a monument to our ignorance about how fast the world has been changing over the past 30 years. Rather than "rich countries" vs. "poor countries", the author divides income into 4 levels, and discusses the social changes that predated the economic changes in Asia, noting how the same social changes are now occurring in Africa and other regions of the world which many consider "hopeless". There's no bimodal distribution of income any more - it's much more Gaussian now. I particularly liked the digs at the media leveraging the human instinct for panic. The author's cool stories from his time as a young doctor in Africa are the icing on an overall excellent and insightful book.</li> <li>"Little House in the Big Woods" by Laura Ingalls Wilder - decided to try reading what my kid is learning in school. Very sweet little novel with quite a few interesting historical details on the life of farmers in Wisconsin in the 1870s.</li> <li>"Linux Kernel Development" by Robert Love - I didn't originally plan to read this whole book - I got it for reference, to look up some specific topics I was interested in. But it turned out to be well written and fun enough so I just went ahead and read most of it. Some sections I skimmed (block devices, etc), but other sections I read several times. Pretty good book overall, though I'd prefer one with just a focus on how things work rather than how to hack on the kernel at this stage. Unfortunately all the books with that focus are at least 10 years old now and things change fast... So this probably remains the most modern "how the kernel works" reference.</li> <li>"The Luzhin Defense" by Vladimir Nabokov - a moving story of true madness - a chess player who never managed to live in the real world outside the domain of chess. I expected it to be a bit more about chess than it actually was, but the book is undeniably powerful.</li> <li>"The Sympathizer" by Viet Thanh Nguyen - fictional story of a North-Vietnam double agent infiltrating the South's secret police, mostly following through his exile in Los Angeles following the fall of Saigon in 1975. <em>Extremely</em> good writing, outright masterful in places. The book certainly has a lot in it, including Asian immigrant (and local) mentality in the US, thoughts on communism vs. American culture, etc. The last part (after the "confession" ends) is somewhat puzzling though, can't say I liked it much. Communists' treatment of their own agents returning from abroad is important to know about, I'm just not sure so much ink had to be spent on that part.</li> <li>"Concurrency in Go" by Katherine Cox-Buday - I have mixed feelings about this book. Most of it is spent on tutorial-level explanations of concurrency, and doing so in a manner markedly inferior to "The Go Programming Language"'s two chapters on the same topics. The explanations are overly complicated on one hand, and not interesting enough on the other. The last part of the book, however, deals with more advanced concurrency patterns and is actually pretty interesting. The ultimate proof would be in using this book as a reference at a later stage when these patterns can be put to use in real code.</li> <li>"No One Can Pronounce my Name: a novel" by Rakesh Satyal - A story of some immigrants from India and their families from the unusual angle of homosexuality. Good writing with believable, very human characters.</li> <li>"Farmer Boy" by Laura Ingalls Wilder - Book 2 in the "little house" series, telling about the childhood of Laura's husband, Almanzo Wilder. I read this whole book aloud with my kids and it took a while (370 pages), but we really liked it overall - very interesting and detailed account of how farmers lived in the late 19th century. One criticism is the book's beginning is a bit rough for a children's book, IMO; it tells how older, rambunctious boys were beating their school teacher - this part really spooked my daughter and initially she refused to touch it after reading this part. We read through it together, though, and the rest of the book really has none of this meanness (in fact, Almanzo hardly returns to school after these first few pages).</li> <li>"Stoicism and the Art of Happiness" by Donald Robertson - I found this book tedious and borderline unreadable. First, it is much too academic, quoting extensively from the ancient Stoics and other, more modern, authors who wrote on Stoicism. It's not clear whether the author expresses any of his thoughts at all! Second, the writing style is almost intentionally stilted: text is constantly interrupted with notes, footnotes, quotes from others, exercises, and so on. I really like the idea of Stoicism, but this particular book is a waste of time, IMHO.</li> <li>"The Future Is History: How Totalitarianism Reclaimed Russia" by Masha Gessen - a good and poignant book by a Russian/American expat journalist. What has happened in Russia in the past 20 years ("recurrent totalitarianism" as one of the politicians mentioned in the book put it) is very troubling and I think it's an important lesson to learn w.r.t. the potential in other countries as well.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"How to read a book" by M. Adler and C. Van Doren</li> <li>"Deep Work" by Cal Newport</li> <li>"The Moral Animal" by Robert Wright</li> <li>"The Haj" by Leon Uris</li> </ul> Summary of reading: April - June 20182018-06-30T05:34:00-07:002018-06-30T05:34:00-07:00Eli Benderskytag:eli.thegreenplace.net,2018-06-30:/2018/summary-of-reading-april-june-2018/ …</li></ul>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"Billions and Billions" by Carl Sagan</li> </ul> Summary of reading: January - March 20182018-03-31T06:12:00-07:002018-03-31T06:12:00-07:00Eli Benderskytag:eli.thegreenplace.net,2018-03-31:/2018/summary-of-reading-january-march-2018/ …</li></ul>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>"Madame Curie - A Biography" by Eve Curie - a fairly good biography of Marie Curie. It's a bit starry-eyed, which is not surprising given that the author is Marie's younger daughter, but good writing.</li> <li>.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"Man's Search for Meaning" by Viktor Frankl</li> <li>"The Mythical Man-Month" by Frederick P. Brooks Jr. - added some modern impressions to my <a class="reference external" href="">super-old review from 2003</a>.</li> <li>"Coders at Work" by Peter Seibel</li> </ul> Summary of reading: October - December 20172017-12-31T23:25:00-08:002017-12-31T23:25:00-08:00Eli Benderskytag:eli.thegreenplace.net,2017-12-31:/2017/summary-of-reading-october-december-2017/ …</li></ul> great powers tried to enforce upon them, and ends with some speculation on the future of China's financial development.</li> <li>"The Remains of the Day" by Kazuo Ishiguro - a truly delightful book, I have no other word to describe it :) Having watched and enjoyed "Downton Abbey" just a short while ago, it was very interesting to draw the similarities with the world Ishiguro describes in this book - the world of British nobles and their servants.</li> <li>"Hillbilly Elegy: A Memoir of a Family and Culture in Crisis" by J.D. Vance - the author tells about growing up in a troubled family in the rust belt, and eventual "escape" to the Marine Corps and Yale law school. Interestingly, while I originally expected something different from the book, I did like it overall. It doesn't carry a lot of insight besides the very personal story, but perhaps such insight <em>is</em> best gained through individual stories and not editorial pieces that are prone to political biases.</li> <li>"Between the World and Me" by Ta-Nehisi Coates - a troubling auto-biographic account of the author growing up as a black man in Baltimore and later through studies in Howard University. Reading this book is difficult and poignant, as it paints an angry picture of the racial divide in modern-day America.</li> <li>"Exit Right: The People Who Left the Left and Reshaped the American Century" by Daniel Oppenheimer - a biographical account of 6 prominent Americans from the political left, their gradual estrangement from their party and eventual flip to conservatism (or neoconservatism). Provides a fairly interesting angle on major issues American politics of the 20th century.</li> <li>"Little Fires Everywhere" by Celeste Ng - a well-to-do family in Cleveland intersects with a nomad artist and her daughter, with a thorny adoption story in the background. Somewhat soap-opera-y, but not bad overall, with some interesting thoughts on choosing life paths.</li> <li>"To Make Men Free - A history of the Republican party" by Heather Cox Richardson - a detailed history of the Republican party from the times of Lincoln and until the end of Bush Jr's presidency. Very informative book overall, but far from great. For one, it's clearly biased towards the left - and bias makes such books lose a lot of credence; for another, it completely ignores the changes in the Democratic party - for example, very little is said about how the Democrats turned from racism to liberalism in the 1960s. In fact, the author insinuates that most of the changes in the political spectrum were either made by the Republicans, or as a mirror image of their views; clearly, the truth can't be so simple. On the positive side, I liked the exposition of the inherent conflict between the Declaration of Independence (equality of opportunity) and the Constitution (protection of property), which may help explain why there's no easy answers in American politics.</li> <li>"The Omnivore's Dilemma: A Natural History of Four Meals" by Michael Pollan - a really well written book about the current industrial food chain, organic foods, "free-range" farm foods and also modern hunter-gathering. I liked the balanced approach and the book not being too preachy even though the conclusions are fairly clear.</li> <li>"Programming Haskell" by Graham Hutton (2nd ed.) - <a class="reference external" href="">full review</a>.</li> <li>"The Namesake" by Jhumpa Lahiri - this time a full-length novel about the same topic of Bengal immigrants to the US in the 1970s, following their life through the early 2000s. The protagonist is Gogol, born in Boston to immigrant parents, trying to have a normative American life in the late 20th century. I liked the first ~half of the book much more, as it was more tied to Lahiri's main immigrant theme. The second half felt like a fairly generic "coming-of-age" story for a young intellectual in New York. Still a good read overall, though.</li> <li>"The Vital Question" by Nick Lane - another grand-sweeping account of fundamental bio-chemistry and evolution mixed with speculative musings on the big questions of the origin of life by Nick Lane. My opinion of this book is similar to that of the author's <em>Life Ascending</em> - really well written, tons of knowledge, but way too dense and way too speculative. I'd say the minimal education level for really understanding this book is being a grad student in biology or a related field; otherwise it's fairly hard to judge where the facts end and where the speculation begins. I did enjoy the book overall, however - there's tons of fascinating details in it that just aren't covered anywhere else in popular science.</li> <li>"In Defense of Food: An Eater's Manifesto" by Michael Pollan - the book that gave birth to the saying "Eat food, not too much, mostly plants". A nice and pragmatic treatment of the important question of "what should we eat".</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"Travels with Charley in Search of America" by John Steinbeck</li> </ul> Book review: "Programming in Haskell" by Graham Hutton (2nd ed.)2017-12-05T05:27:00-08:002017-12-05T05:27:00-08:00Eli Benderskytag:eli.thegreenplace.net,2017-12-05:/2017/book-review-programming-in-haskell-by-graham-hutton-2nd-ed/ …</p> read so far.</p> <p>The author's deep understanding of functional programming concepts and Haskell shines through the writing on many occasions. He carefully sets up explanations and examples that build one on top of another, and manages to explain some of the thorniest ideas of Haskell (applicatives and monads, I'm looking at you) very clearly; most importantly, the <em>why</em> of things is often explained, along with some important historical background that sheds some light on the design choices made by the language.</p> <p>There's even space in this book for a few extended programming examples and exercises, both of which are very important for a programming book. Some of the exercises come with solutions in an appendix - a truly impressive information density for a ~250 page book.</p> <p>My favorite chapter is <em>Monadic Parsers</em>; parser combinators is a very interesting topic, and I went through several resources that tried to explain it in Haskell. The treatment in this book is much better than anything I read before (it even inspired a <a class="reference external" href="">blog post</a> to document my understanding).</p> <p>On the flip side, the last two chapters - on automatically proving programs correct, as well as deriving correct programs from definitions - were puzzling. Felt too academic and somewhat out of place in a book teaching a programming language. I suppose that when you write a book, it's your prerogative to include some of the research topics you're excited about and pitch them to a more general audience :-)</p> <p>It's hard to resist comparing this book to Learn You a Haskell (LYAH), which is based on the false premise that complete beginners want or need to learn Haskell (IMHO they don't). I would guess that most folks coming to Haskell have some programming experience in other languages and are looking to expand their horizons and get some functional and static typing enlightment. <em>Programming in Haskell</em> is <em>exactly</em> the book these programmers need to read.</p> Summary of reading: July - September 20172017-09-30T19:03:00-07:002017-09-30T19:03:00-07:00Eli Benderskytag:eli.thegreenplace.net,2017-09-30:/2017/summary-of-reading-july-september-2017/ …</li></ul> of experience I don't think it's worth the time.</li> <li>"Working Effectively with Legacy Code" by Michael C. Feathers - <a class="reference external" href="">full review</a>.</li> <li>"The Better Angels of Our Nature: Why Violence Has Declined" by Steven Pinker - a very thorough study of the steady decline in violence in the last few centuries (yes, including the world wars of the 20th). Interesting and convincing, but as is typical for Pinker, a bit too verbose. I think it would be a much better book if it dropped a bunch of marginally relevant information Pinker finds hard to stop talking about (brain structure, various social behavior studies, etc.) and as a result slimmed down from its 800+ dense pages to something like half that size. If you've never read any other books by Pinker or other authors who write on these subjects, though, you may found a larger portion of the book interesting. YMMV but recommended overall.</li> <li>"The Tao Is Silent" by Raymond M. Smullyan - Disguised as an introduction to Taoism, this is a scattered overview of the author's philosophy of life; Taoism is a prominent part of it, but there's no real attempt made to explain it. Unfortunately, the book's writing style is of a kind I dislike - too much cleverness, going in circles and jumping to seemingly unrelated topics. It did give me a taste to learn more about Taoism, but I'll have to look elsewhere for that.</li> <li>"Beneath a Scarlet Sky" by Mark Sullivan - a true-story novel about the Italian resistance during the final stages of WWII, focusing on a 18-year old protagonist and his role in helping smuggle Jews across the Alps and spying on a high-rank Nazi general while being his personal driver. Not the best writing, but an engaging read regardless. Very interesting story about a part of the war I haven't heard much before.</li> <li>"The Tao of Pooh" by Benjamin Hoff - An unusual approach to teaching philosophy; the author attempts to explain some of the basics of Taoism by setting Winnie the Pooh as an example. It feels a forced at times and a bit too hippie for me (all this science is ruining the earth, let's stop doing it, stories of people living to age 260, etc.), but overall an enjoyable little book. It's certainly a vastly better introduction to Taoism than Smullyan's nonsense. Very light, entertaining quick read.</li> <li>"The Complete Tales of Winnie-the-Pooh" by A.A. Milne - No my usual fare :) I read tons of children's books with my kids but don't really post reviews. Pooh is different because it's fairly long (comparable to adult books) and I also feel it really caters for older audiences. I was inspired to finally read it cover-to-cover after "The Tao of Pooh", being curious whether I can spot the Taoist stuff without annotation. I couldn't, and it seems like Benjamin Hoff extracted every bit of teachable Pooh moment into "The Tao of Pooh". I was surprised by the number of stories I haven't heard before - it seems that all modern depictions of Pooh focus on 3-4 of the most popular stories and leave all the rest out. Overall it's a great book as the characters all represent different human traits that are pretty easy to understand and follow.</li> <li>"The Way of Zen" by Alan W. Watts - An very good introduction to Taoism and Zen Buddhism, inasmuch as these topics can be described in book form. I particularly liked the author's explanation of how Westerners may perceive the world a bit differently from the cultures where Zen originated, due to fundamental differences in language, and how this can make spiritual topics more difficult to understand - one of the best examples of the Sapir Whorf hypothesis I read anywhere.</li> <li>"The Epigenetics Revolution" by Nessa Carey - A very interesting popular account of the recent advances in epigenetic research, which is anything pertaining to varying expression of genes in different cells and in the same cells throughout cellular development. Fairly well written, though occasionally it's clear the author is a biology PhD who assumes a bit too much about the readers' knowledge - I had to read parts of the book twice to understand them better. Epigenetics is an exciting field that expands our knowledge beyond the basic understanding of what genes are, and it's really cool to see how rapidly this field is advancing in the last 10-15 years.</li> <li>"Learn You a Haskell for Great Good" by Miran Lipovaca - a Haskell book for beginners. Feels more like a collection of introductory blog posts than a cohesive book; can serve as a quick, light introduction to Haskell but not much more than that. Haskell has one of the steepest learning curves in modern languages, and this book's approach is not helpful. It rarely goes deep into the <em>why</em> of things, mostly a sequence of <em>how</em>, uses silly childish examples instead of real, useful ones, and has no exercises.</li> <li>"NurtureShock: New Thinking about Children" by Po Bronson and Ashley Merryman - presents some interesting new research about child development, with varying topics from sibling rivalry to academic success. One of the key messages is that children's brains are inherently different from adults (until they fully develop), and hence strategies that make sense for developing adults don't necessarily make sense for children. Interestingly, it seems that some of the topics the book discusses were already implemented in mainstream education (for example praising for effort and not for intelligence). On the down side I found the last third or so of the book less interesting; especially the research on speech development in babies seems very inconclusive and page-fill-ery.</li> <li>"Nothing to Envy: Ordinary Lives in North Korea" by Barbara Demick - an awesome documentary account of the life in North Korea in the 1990s and early 2000s, recounted from interviews with North Korean defectors to South Korea. The books is written as a series of intertwined personal stories of 6 North Koreans about their life in the reclusive country. The result is an extremely well drawn portrait of North Korea - very interesting book, I learned a lot.</li> <li>"Naked Statistics" by Charles Wheelan - A simplified introduction to statistics, peppered with many anecdotes and real-life examples of statistical studies. Nice book, but far from the greatness of "Naked Economics" by the same author. Somehow I feel that far less material has been covered here, perhaps because the author tried to go deeper in some topics. For folks not familiar with statistics at all, this book may be more insightful and enjoyable.</li> <li>"El Prisionero del Cielo" by Carlos Ruiz Zafón (read in Spanish) - a continuation of La Sombra del Viento and Juego del ángel. Similar tone, style and beautiful writing, but somewhat short on content IMHO. There's just not much happening in this book except the story of Fermin's incarceration.</li> <li>"Everybody's Fool" by Richard Russo - a follow-up on "Nobody's Fool", happening about a decade later. Mostly the same characters, and a very similar vibe. Amazing writing, as usual, from Russo.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"A Random Walk Down Wall Street" by Burton Malkiel</li> <li>"A Guide to the Good Life: The Ancient Art of Stoic Joy" by William B. Irvine</li> </ul> Book review: "Working Effectively with Legacy Code" by Michael C. Feathers2017-07-20T21:00:00-07:002017-07-20T21:00:00-07:00Eli Benderskytag:eli.thegreenplace.net,2017-07-20:/2017/book-review-working-effectively-with-legacy-code-by-michael-c-feathers/ …</p> describes.</p> <p>Have <em>you</em> ever had to deal with <em>real</em> legacy code? What do I mean by <em>real</em>? OK, let's recap. Real legacy code is frequently called "spaghetti code"; another common nickname is the <a class="reference external" href="">big ball of mud</a> pattern. It's code that lacks sensible structure and - worst of all - lacks tests. Classes may have had some single responsibility a long time ago, but they no longer do. Years of maintenance and under-pressure bug fixes left them leaking abstractions left and right, sending ugly dependency tentacles into each other. Touch one character and the whole house of cards may fall down on you. Unit tests are unheard of.</p> <p>To be honest, having spent the last few years at Google I've been spoiled. I almost forgot how such projects look. "Luckily" I still remember, however, which makes this book an interesting and poignant read. If the above strikes no chord for you - I suggest to skip it. The book will just seem horrible to you - proposing all kinds of ungodly hacks to break up dependencies in code and make it refactorable and testable. The hacks are a good match to the foe - they're about as awful as the code itself, so young and innocent developers may find themselves (rightfully) horrified. It's only the unhealed scars of old battles that will make this book palatable.</p> <p>The basic premise of the book is simple, and can be summarized as follows:</p> <ol class="arabic simple"> <li>To improve some piece of code, we must be able to refactor it.</li> <li>To be able to refactor code, we must have tests that prove our refactoring didn't break anything.</li> <li>To have reasonable tests, the code has to be <em>testable</em>; that is, it should be in a form amenable to test harnessing. This most often means breaking implicit dependencies.</li> </ol> <p>... and the author spends about 400 pages on how to achieve that. This book is dense, and it took me a <em>long</em> time to plow through it. I started reading linerarly, but very soon discovered this approach doesn't work. So I began hopping forward and backward between the main text and the "dependency-breaking techniques" chapter which holds isolated recipes for dealing with specific kinds of dependencies. There's quite a bit of repetition in the book, which makes it even more tedious to read.</p> <p>The techniques described by the author are as terrible as the code they're up against. Horrible abuses of the preprocessor in C/C++, abuses of inheritance in C++ and Java, and so on. Particularly the latter is quite sobering. If you love OOP beware - this book may leave you disenchanted, if not full of hate.</p> <p>To reiterate the conclusion I already presented earlier - get this book if you have to work with old balls of mud; it will be effort well spent. Otherwise, if you're working on one of those new-age continuously integrated codebases with a 2/1 test to code ratio, feel free to skip it.</p> Summary of reading: April - June 20172017-07-01T12:33:00-07:002017-07-01T12:33:00-07:00Eli Benderskytag:eli.thegreenplace.net,2017-07-01:/2017/summary-of-reading-april-june-2017/ …</li></ul> great due to a somewhat boring last third and being outdated. It was written in the 1980s and I'd be very curious to read a modern account; since (by the author's own admission) this whole aspect of society is undergoing rapid change, I imagine the situation may be somewhat different 30 years later.</li> <li>"Nobody's Fool" by Richard Russo - A fictional account of the lives of several inhabitants of a small town in a decaying part of up-state New York, in the 1980s. Very good writing, even though there's not a single likable character in the whole book! The ambiance resembles "Empire Falls" overall - an enjoyable read.</li> <li>"The Everything American Government Book" by Nick Ragone - pretty good overview of how the government in the US works, from federal to state to local. It goes breadth-first rather to depth, mostly serving as an extended introduction to a wide range of topics.</li> <li>"The Beginning of Infinity" by David Deutsch - an ambitious sweep across biology, artificial intelligence, history of science, politics and quantum mechanics with an optimistic flavor. "Infinity" refers to the possibilities opened by the current open, scientific society. While the idea of the book is enticing and some parts in it are very interesting, some other parts are outright weird and very loosely related to the main theme. Specifically, from one of the fathers of quantum computing I would expect a better explanation of the many-world interpretation.</li> <li>"1984" by George Orwell - technically a re-read, but since I read this book before 2003 I never really wrote a review. I liked this book, overall. It's mostly fun to read, if you ignore all the parts that make no sense. Still, it's thought provoking in the positive sense and well worth a read if you've never read similar negative Utopias before. Studying a bit of history first would help though, since I imagine most young people these days (thankfully!) aren't even aware of the ideology dramatized in the book.</li> <li>"Unequal Childhoods: Class, Race and Family Life" by Annette Lareau - a sociological study of 3rd grade students and their families, focusing on the differences between social classes (middle class, working class, poor) and race; conducted in the mid 1990s. Pretty interesting account of the different life experiences of kids and parents; I found some accounts of middle-class kids troubling (over-scheduling), and accounts of poor-class kids scary in a different way (formidable financial struggles).</li> <li>"The Working Dad's Survival Guide: How to Succeed at Work and at Home" by Scott Behson - a book by the author of the <a class="reference external" href="">Fathers, Work and Family blog</a>, focusing on work-life balance for fathers who want to spend as much time as possible with family. A pretty good book overall, though a bit too self-help-y. I think the main encouragement coming from reading it is the realization that this is a common issue quite a few men are facing this problem nowadays.</li> <li>"The Plague" by Albert Camus - Fictional account about an outbreak of bubonic plague in a French city in coastal Algiers in the 1930s. A bit too philosophical and scattered to my taste. Not a bad read, but far from great.</li> <li>"Essentials of Programming Languages" by D. Friedman and M. Wand - <a class="reference external" href="">full review</a>.</li> <li>"Stress Test: Reflections on Financial Crises" by Timothy Geithner - an auto-biographic account of the former secretary of treasury and president of the NY Fed, describing his career in public service with a strong focus on the 2007-2009 financial crisis. A really excellent book - strikingly honest and well balanced. I found it very inspiring w.r.t. facing adversity and making hard decisions even if they can't be popular. Highly recommended.</li> <li>"Hidden Figures" by Margot Lee Shetterly - The book upon which the movie is based. I was surprised at how different the book is - it's a heavily researched biography, non-fictional. In retrospect, it makes perfect sense how they condensed the story into the movie, since such books don't really make for great Hollywood hits as-is (maybe documentaries). Regardless, both the book and the movie are great, in my opinion. Very cool example of cold, hard economics at work - how practical needs and competition with external powers made gentle but real cracks in the segregational craze of the South - especially in Virginia, which was one of the most ardent strongholds of segregation well into the second half of the 20th century.</li> <li>"The Forgotten Man" by Amity Shlaes - a history of the great depression and the New Deal, up until the middle of WWII. I was hoping to find some more economic insights in this book about why the great depression lasted so long, whether the New Deal made it better or worse, and what ultimately ended it. While the book is interesting and easy to read, I ended up being disappointed in these hopes since I didn't find much coherent explanation here. The author sort-of hints at possible causes and effects (from a conservative economic point of view) but doesn't really develop these thoughts into anything concrete. It's still not a bad book as a history of the USA in the 1930s, but I'll have to find some other source for economic insights.</li> <li>"Work Rules" by Laszlo Bock - Laszlo was Google's head of people operations (a.k.a. HR) for many years, and in this book he tells about the Google culture and approach to hiring, employee development and management. It's a pretty good book, and a fairly accurate description of how this aspect of Google ticks.</li> <li>"Predictably Irrational" by Dan Ariely - a very good and entertaining book exploring the irrational nature of human beings. The author describes his research in behavioral psychology, featuring many experiments that probe how people behave in various scenarios; these experiments unearth many instances of "irrational" behavior in the sense that people don't strictly act by maximizing their utility.</li> <li>"Interpreter of Maladies" by Jhumpa Lahiri - a collection of short stories about Indian immigrants (or children thereof) in the USA; a couple of the stories describe life in Calcutta. In general, all people in the book come from the Calcutta area, and in the US usually live in the New England area. Very good writing - each story, even if very short, is immediately captivating - I'd be happy to read a full novel based on each one. Very interesting glimpse into the lives of Americans of Indian origin and their view on the US society and way of life.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"Naked Economics" by Charles Wheelan</li> </ul> Book review: "Essentials of Programming Languages" by D. Friedman and M. Wand2017-05-08T05:28:00-07:002017-05-08T05:28:00-07:00Eli Benderskytag:eli.thegreenplace.net,2017-05-08:/2017/book-review-essentials-of-programming-languages-by-d-friedman-and-m-wand/ …</p> basic concepts like scoping, it gets into the ways state can exist in a language; then a whole chapter deals with continuation-passing style. This chapter also describes exceptions, coroutines and threads (all implemented with continuations). Other chapters discuss module systems and object-oriented programming.</p> <p>The book was created as a companion/summary of a university course. When read stand-alone, it's not very easy to get through due to the huge number of concepts presented and discussed in fairly limited space (~350 pages). IMHO the authors could spend more time explaining and demonstrating the concepts before jumping into implementation - but maybe as a companion to lectures it's not really necessary. This is definitely something to take into account if you're just reading the book; it's not for beginners.</p> <p>I went through this book in a fairly detailed way (not quite like my <a class="reference external" href="">SICP journey</a>, but not very far from it). It took me several months of spending a couple of hours a week on it, and I wrote almost 10,000 lines of Clojure code implementing most of the interpreters (and some of the exercises) described in it. Clojure was a good choice. It's sufficiently similar to Scheme that the concepts translate without too much trouble; on the other hand, it <em>is</em> a different language so mindless copy-pasting doesn't work. I ran out of steam somewhere in the middle of the penultimate chapter (on Modules) and stopped implementing, since the concepts of Modules and OOP seemed more familiar and less interesting.</p> <p>My favorite part was, no doubt, the chapter on continuations. It helped me understand concepts like <a class="reference external" href="">CPS and trampolines</a> to a deeper extent than ever before. For this, I'm forever thankful to the authors.</p> <p>It's hard to say how beneficial a lighter skimming of this book can be. Without implementing the interpreters, the discussion is fairly weak. Implementing them takes time and code in advanced chapters builds on top of more code in earlier chapters.</p> Summary of reading: January - March 20172017-04-01T16:31:00-07:002017-04-01T16:31:00-07:00Eli Benderskytag:eli.thegreenplace.net,2017-04-01:/2017/summary-of-reading-january-march-2017/ …</li></ul>.</li> <li>).</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>".</li> <li>).</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"The Grapes of Wrath" by John Steinbeck</li> <li>"Peopleware: Productive Projects and Teams" by Tom DeMarco and Tim Lister</li> </ul> Summary of reading: October - December 20162016-12-31T14:01:00-08:002016-12-31T14:01:00-08:00Eli Benderskytag:eli.thegreenplace.net,2016-12-31:/2016/summary-of-reading-october-december-2016/ …</li></ul>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"Pale Blue Dot" by Carl Sagan</li> </ul> Summary of reading: July - September 20162016-09-30T05:10:00-07:002016-09-30T05:10:00-07:00Eli Benderskytag:eli.thegreenplace.net,2016-09-30:/2016/summary-of-reading-july-september-2016/ …</li></ul> the book is great, but I found the execution mediocre. The authors have some good ideas to pass along and interesting examples and demonstrations, but unfortunately these drown in incoherent rambling-like style of writing, jumping from topic to topic haphazardly and a tone reminiscent of tabloids more than of non-fiction books. I mean, it's OK to use some quips, jargon and personal tone from time to time, but this book overdoes it. It also comes across as a not-so-subtle advertisement of the authors' respective private consultancies for statistical analysis and training/coaching. On the brighter side, the book does appear well researched and there are plenty of references provided for every chapter if one is interested in deeper dives into the topics presented.</li> <li>"JavaScript & jQuery" by Jon Duckett - a companion to the author's "HTML & CSS" book, focusing on the other side of modern web programming - JS. It uses the same colorful glossy format, though with somewhat more information density, and manages to cover the web-scriptability aspects of JS pretty well. What I mean to say is that the book doesn't spend too much time discussing programming paradigms in JS, but rather focuses on how to <em>use</em> it to manipulate the DOM, process events and interact with users. There's also quite a bit of focus on jQuery and modern HTML5 elements and DOM-stuff relevant to JS. The book tries to cater to beginner programmers though I seriously doubt someone new to programming will really manage to learn JavaScript from it. However, if you already have some programming experience and want to learn about how to make your web pages dance without heavy libraries (jQuery is so ubiquitous to be considered as "vanilla" as JS itself these days). It does discuss Angular briefly in one chapter, but overall sticks to the basics.</li> <li>"Basic Economics" by Thomas Sowell - Cetainly one of the best books about economics I've read in the past few years, perhaps ever. This is maybe the second time I started re-reading the book (well, re-listening in this case) immediately after finishing it the first time - it's that good! The author sets out to explain basic micro and macro-economic phenomena in the world from first principles, based on such fundamental issues as pricing, supply and demand, and the allocation of scarce resources that have alternative uses - the latter being a central theme throughout the book. He touches on a huge spectrum of topics - from price controls, to differences in productivity between countries, to why payday loans charge so much interest, to the ways many countries shot themselves in the feet by imposing government control of the market. One sobering perspective that comes out from the book is that politicians are very often incentivised to make un-economical decisions, acting fully rationally from their point of view. The solution is, of course, restriction on the power the government has over the free market. Surprisingly, Sowell manages to bring his points across without going too much into polemics; instead, he sticks to facts, historical studies and economic first principles. I can't recommend this book highly enough - if you only ever read one book on economics in your lifetime, this should be it.</li> <li>"Black Rednecks and White Liberals" by Thomas Sowell - A collection of several loosely-connected essays on history, sharing the common thread of examining ingrained historic perceptions of race and ethnicity. For example, one of the essays explains how what we now know as "Black culture" isn't African in origin, but rather is peculiar to the South of the US, and originated in certain parts of Northern Britain. Another essay compares Jews to other "middlemen minorities" (like Chinese in South-East Asia and Armenians in the Ottoman Empire), concluding that the history, and treatment by other nationalities, of such minorities has a lot in common. Very well written and insightful book.</li> <li>"Python machine learning" by Sebastian Raschka - this would be an average plus book, had it not been published by Packt. Packt makes it a fairly low-quality work, as usual, with weird formatting and bad editing.</li> <li>"The girl on the train" by Paula Hawkins - a light, engrossing mystery novel about a missing woman in London, told in first person from the points-of-view of three women protagonists. Starting with about 2/3 into the book everything became painfully obvious, so it was a bit of a drag to read until the main character realized what's going on. Also, the main character - Rachel, must be one of the least likable protagonists I've encountered in books lately. It's very hard not to feel repelled by her behavior and general state of mind.</li> <li>"Applied Economics: Thinking beyond stage one" by Thomas Sowell - the central theme here is making decisions that make sense in the short term, but turn out badly in the long term. Sowell once again pounds politics as the obvious culprit here - passing laws and regulations that boost their status with the voters for the duration of their term in some political office, but leading to problems years later when no one remembers what the original cause was. Overall, much of this book is rehashed from Basic Economics, so I found it much less exciting. It's still a good read, but reading it shortly after Basic Economics feels a bit repetitive. There are some new topics, though, which are very interesting. One is the economics of discrimination. Sowell claims that it is government policies that enable discrimination, and that free markets are actually discouraging wide-spread discrimination. He makes a very well-argued case for this, with tons of examples from different countries and eras.</li> <li>"Economics" by Timothy Taylor (audio course) - pretty good, long course. I found Sowell's "Basic Economics" better, though the course spends much more time on macro economics, which Sowell covers less (with the caveat that macro economics in general doesn't feel very scientific to me).</li> <li>"Deep Work" by Cal Newport - a guideline to fighting procrastination and the modern distractions of the internet and social media on the way to greater productivity. I'd say this book has the same idea as Csíkszentmihályi's "Flow" but is much less thorough and much more handwavy. I did like it overall; in particular, the thesis of work quality often beating work quantity resonates strongly with my own experience. Still, "Flow" is significantly better IMHO.</li> <li>"One Flew Over the Cuckoo's Nest" by Ken Kesey - an excellent book about life in a mental asylum. I can see why this book is controversial w.r.t. teaching it in school - it certainly glorifies opposition to authority and the feel-good of defying rules.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"War and Peace" by Leo Tolstoy</li> </ul> Summary of reading: April - June 20162016-06-30T20:43:00-07:002016-06-30T20:43:00-07:00Eli Benderskytag:eli.thegreenplace.net,2016-06-30:/2016/summary-of-reading-april-june-2016/ …</li></ul>, and on the way tells the reader about his childhood, early adulthood and family. The focus is his weird relationship with his wife, shaped by a problematic pre-marital life. Though the book is an entertaining quick read, I enjoyed it less than the author's earlier books. Overall it's quite predictable both in terms of content and its ending - full of modern Israeli folklore on a relatively superficial level.</li> <li>"The Elephant Vanishes" by Haruki Murakami - a collection of short stories, some of which feel like incomplete drafts of longer novels. As usual, Murakami is fun to read though I enjoyed his other collections of short stories (like "After the Quake") much more.</li> <li>"Code" by Charles Petzold - Starting with Morse code and Braille, and moving through telegraph wiring, relays and basic electronics, Petzold slowly and surely develops a whole functioning computer on paper. As an experienced HW engineer in my past, I found less of this book novel than I'd expect to be the case for the average programmer. That said, parts of the book still were lots of fun - like the first part dealing with relays and telegraphs. The control and data logic of a basic CPU is also a fun reminder of the simple things that lay in the foundation of our profession. About the last third of the book I found to be not so interesting, as it's a broad overview of the personal computer industry in the late 1990s, which now in 2016 reads a bit archaic.</li> <li>"Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future" by Ashlee Vance - even if one tries to avoid hype, it's hard to not be impressed with Elon Musk's accomplishments in the past two decades. This book does a good job describing the biography of Musk in detail, from his ancestors to very recently (~2014). I was sort-of admiring Musk before reading the book, and I have to admit I admire him even more now. He has incredible vision and execution capabilities, probably unmatched by anyone else today. I've long been lamenting that today's brightest minds spend their time agonizing over social networks, myriads of redundant messaging apps and how to micro-optimize ad revenue instead of focusing on the world's real problems. It turns out that Elon Musk's vision started from exactly the same point of view, and he's been able to execute on it amazingly well - making real progress in several important areas of technology at the same time. It's also very sobering to read this biography right after I re-read "The Fountainhead", since it's easy to draw a lot of parallels, especially w.r.t. overturning existing dogmas and dedication to a clear vision.</li> <li>"HTML & CSS" by Jon Duckett - A beginner's guide to fairly modern HTML and CSS techniques for website design. This book has a very unusual layout - it's made of glossy magazine-like paper full of color. Very unusual for a technical book, and very apt for this particular topic. The presentation is flashy and full of images, so the content density is low. Even though the book looks very big (over 400 pages in fairly thick color paper), the actual amount of content couldn't span more than 100 pages in a regular book - so it's a quick read. Being so quick it doesn't go very deep, but does provide a good introduction and a bunch of interesting examples.</li> <li>"The Go Programming Language" by Alan Donovan and Brian Kernighan - <a class="reference external" href="">full review</a>.</li> <li>"Little Women" by Louisa May Alcott - a two-volume book covering about 15 years in the life of the March family in Concord MA, roughly from the mid 1860s to the late 1870s. The protagonists of the book are the four daughters in the family, each on its own unique life path. I have a feeling that had this book been written today it would be received very cynically; for sure, the overly saintly and sweet portrayals of some family members are something out of this era. Nevertheless, the book is written very well and a pleasure to read. In addition to its historic value, it also contains multiple insightful observations on human nature, relationships within families, among friends and between couples, as well as the value of money.</li> <li>"Life Ascending: The Ten Great Inventions of Evolution" by Nick Lane - the author is a professor of biochemistry focusing on evolution. In this book he goes in depth on topics ranging from sight, to breathing, to death - and I mean <em>in depth</em>. The book is extremely heavy on detail, and each chapter feels like a mini-book in itself, completely packed with biological and chemical details, speculations and updates on recent research. Indeed, it took me several months to finish the book in installments because I felt I can't digest too much of it at a time. Also, it's fairly obvious that much of the book is speculative - the author tackles topics we don't really have a full understanding of, and in many places he proposes his opinions and predictions that may or may not be true. As the author himself admits in the epilogue, the book "is full of judgements that stand on the edge of error". That said, overall the book is very interesting, insightful and extremely well written. It certainly leaves me with a curiosity to read more of the author's books.</li> <li>"The History of the United States" by A. Guelzo, G. Gallagher, P. Allitt (audio course) - a comprehensive, 43-hour introductory course to the history of the USA, with special emphasis on the Civil War era. Almost certainly the longest audio book I've ever listened to, and yet the course can cover but little of the subject, of course, mostly serving as a taste of many topics readers can later delve into in more specific courses and books. I liked how the course was put together overall, and learned a lot. Of the three parts, I liked the third one least - both because the lecturer seems to be less savvy in continuous audio speaking, and because I felt many things were left uncovered. That said, even the third part in itself is a pretty good course on the history of 20th century USA.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"To Kill a Mockingbird" by Harper Lee</li> <li>"The Fountainhead" by Ayn Rand</li> </ul> Book review: "The Go Programming Language" by Alan Donovan and Brian Kernighan2016-04-20T12:48:00-07:002016-04-20T12:48:00-07:00Eli Benderskytag:eli.thegreenplace.net,2016-04-20:/2016/book-review-the-go-programming-language-by-alan-donovan-and-brian-kernigh …<.</p> <p.</p> <p.</p> <p.</p> <p>Go is an opinionated language, and GOPL is an opinionated book. Both have the right to be, IMHO, due to the innate authority of their authors as veterans in the field who distill their collective experience into the language and the book.</p> <p>Overall, I enjoyed GOPL a lot. If you're looking to get a thorough and serious introduction to the Go programming language, I think it's an excellent choice.</p> Summary of reading: January - March 20162016-03-31T15:55:00-07:002016-03-31T15:55:00-07:00Eli Benderskytag:eli.thegreenplace.net,2016-03-31:/2016/summary-of-reading-january-march-2016/ …</li></ul> book would be to beginner programmers, but the author does cover things like configuring Emacs, which could be useful for absolute beginners. The book is full of whimsy, though it manages to be not too distracting except in a few cases where more realistic examples would be easier to understand and follow. In general, while I found this book to be not too bad on the basics, it's lacking in coverage of the more complex aspects of Clojure. The coverage of concurrency-related issues leaves much to be desired, for example. Overall, while it's a decent book I would recommend looking for something else if you want to get started with Clojure.</li> <li>"The Price of Privelege" by Madeline Levine - secondary title is "How Parental Pressure and Material Advantage Are Creating a Generation of Disconnected and Unhappy Kids", which pretty much summarizes the book's main thesis. If you feel that your kids may grow up being spoiled because your own financial situation is significantly better than what you had growing up, then this book is for you. I think it's interesting and raises some very important points. Specifically, the statistic of unhappiness in affluent teens it unravels is significantly more worrying than mere "being spoiled". I think that at this stage of my life it's a bit too early because my kids are young. It's important to start developing good habits early on, but I'll definitely need a refresher in a few years. I found the first third of the book especially interesting; the rest, significantly less so. That said, I did like the book overall. I think it raises an important issue and provides some good tips for how to handle it.</li> <li>"The Nightingale" by Kristin Hannah - a long and sad book about the lives of two sisters in France during WWII. Through them, the story of France in the war is told - the life of a small town occupied by the Germans, and life in the French underground that fought the Nazis throughout the war. Treatment of French Jews, concentration camps, different natures of German officers billeted within French homes, it's all in there. Mostly a very enjoyable read. A couple of small parts could be better, like the initial encounters between Isabelle and Gaetan. The author could spend a couple more pages on it. I heard the book through its Audible version and the reading was excellent - very good (as far as I can tell!) French pronunciation, different voices for different characters, etc. Overall, highly recommended.</li> <li>"C++ Concurrency in Action" - <a class="reference external" href="">full review</a>.</li> <li>"Structured Parallel Programming" - <a class="reference external" href="">full review</a>.</li> <li>"The First Amendment and You" by John E. Finn (audio course) - a deep dive into the first amendment to the US constitution, examining many landmark supreme court cases that focused on its different aspects. This book is fairly hard core, in my opinion, as far as newbie readers go. While I think I understood most of it, I certainly felt that a basic introduction to US law would be a good prerequisite - getting a more formal education of how the judicial system works (the supreme court in particular), and some basics of how arguments in courts are structured. Overall I did enjoy the course, though. It certainly gives you an appreciation of how complex fundamental laws are, even when it appears that they are fairly easy and brief to define. There are so many nuances and small difference to consider, that it's really hard (impossible?) to come up with an axiomatic system of reasoning and one has to rely on precedents set by past cases instead.</li> <li>"The American Revolution" by Allen C. Guelzo (audio course) - a pretty good overview of the American revolution, focusing on the military campaigns. The course doesn't go into much depth on any one subject, so it's best seen as an introduction and a "table of contents" for deeper delving into subjects of interest.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"Memoirs of a Geisha" by Arthur Golden</li> </ul> Book review: "Seven Concurrency Models in Seven Weeks" by Paul Butcher2016-03-29T05:14:00-07:002016-03-29T05:14:00-07:00Eli Benderskytag:eli.thegreenplace.net,2016-03-29:/2016/book-review-seven-concurrency-models-in-seven-weeks-by-paul-butcher/ …</p> complexity and safety issues of this approach. Clojure is presented as an example of a functional language which is immutable by default (through the use of efficient persistent data structures), demonstrating its concurrency abilities through threads (wrapped in futures and promises for convenience) and managing explicit state with atomics and STM. Elixir (a dialect of Erlang) is shown to demonstrate distributed, fault-tolerant programming. Then Clojure again for a presentation of CSP (the techique that's now back in vogue thanks to Go). It finishes off with OpenCL - a chapter that differs from the others since it discusses parallelism at a far more detailed granularity; finally, it presents "Lambda Architecture" - a collection of libraries on top of Java to implement map-reduces and variations thereof.</p> <p>I liked the book overall, though I do have a bit of criticism:</p> <ul class="simple"> <li>The code samples are not always clear, and sometimes too short. I feel the book could be better if it spent an extra 100 pages on showing more of the code and explaining it.</li> <li>Related to the above, the author is very eager to use libraries for fairly simple tasks instead of reinventing the wheel. Except that when you're learning about a new programming paradigm, in a new programming language, spending an extra 20 lines of text to define a function instead of importing it is well worth the effort, IMHO.</li> <li>Some of the Clojure samples for using CSP are outdated, even though the book is from 2014. Functions used in the book (such as <tt class="docutils literal">map<</tt>) are now marked deprecated in Clojure.</li> </ul> <p>Some key take-aways:</p> <ul class="simple"> <li>Java has a very impressive array of built-in tools for concurrent and parallel programming. Thread pools, a fork-join framework, varieties of concurrent queues, you name it.</li> <li>Clojure! The book helped me (re-)discover this language, and I'm very thankful. A modern, fully-featured Lisp with a thriving, friendly community and tons of interesting ideas - what a gem. And it implements a very feature-full and well-designed form of CSP as a library - Lisp's macro magic in its best. I will definitely blog more about this in the future.</li> <li>Erlang/Elixir left me unimpressed. It seems like the biggest advantage of these languages is in actor frameworks like OTP , which could be implemented as a library in any modern language (JVM-based languages have Akka, for example). Even though actors look very interesting and certainly have useful semantics and ideas in them, the language(s) they come with are quirky and I don't know if I wanted a new language just for distributing my application.</li> </ul> Book review: "C++ Concurrency in Action" by Anthony Williams2016-02-12T05:20:00-08:002016-02-12T05:20:00-08:00Eli Benderskytag:eli.thegreenplace.net,2016-02-12:/2016/book-review-c-concurrency-in-action-by-anthony-williams/ …</p> a reference, with a large chunk dedicated to a detailed encyclopedic listing of all the C++11 threading-related objects and their methods (I'm not sure how useful this is in 2016 when all these references are already online, but could certainly be more relevant in early 2012 when the book was initially published).</p> <p>The book is very comprehensive. It not only goes over the C++11 threading and concurrency features (of which there's a very good and thorough coverage), but also discusses general parallelism topics like concurrent data structures, including lock-free variants, thread pools and work-stealing. As such, it's not light reading and is definitely a book you go back to after finishing it to re-read some of the more complex topics.</p> <p>On the critical side, the book's age already shows. I imagine the author didn't have access to fully conformant compilers when he was initially writing it, so many C++11 features are not used when they should be: things like range loops, reasonable uses of <tt class="docutils literal">auto</tt>, even placing the ending <tt class="docutils literal">>></tt> of nested templates together without whitespace in between. Instead, there are occasional uses of Boost. All of this is forgivable given the book's publish date, but a bit unfortunate in a book specifically dealing with the C++11 standard.</p> <p>Other random bits of criticism:</p> <ul class="simple"> <li>The analogies the author uses are weird, and often unhelpful. The book is clearly aimed at seasoned programmers, so we should drop the dumbing down.</li> <li>Diagrams are sometimes ugly and sometimes nice.</li> <li>The explanation of memory ordering semantics wasn't amazing, IMHO. I realize it's a devilishly complex topic to explain, but feel it's important to mention in case someone wants to get this book solely to understand memory ordering.</li> <li>The code samples living in a <tt class="docutils literal">.zip</tt> file that you can download are sometimes slighly different from the listings in the book, and I found several occasions where they don't compile. Unfortunately, emails sent to the author about these were not answered.</li> </ul> <p>Overall, I liked the book. It's not perfect, but it's the best we've currently got to cover advanced concurrency and parallelism with modern C++. This book is hard to fully digest in a single reading because you're not likely to really need everything it covers. I expect it to be useful in the future as I need to refresh some specific topics.</p> Book review: "Structured Parallel Programming" by M. McCool, J. Reinders, A. Robinson2016-02-02T05:24:00-08:002016-02-02T05:24:00-08:00Eli Benderskytag:eli.thegreenplace.net,2016-02-02:/2016/book-review-structured-parallel-programming-by-m-mccool-j-reinders-a-robinson/ …</p> things going for it, I think that the end result is fairly mediocre.</p> <p>First, what I liked about the book:</p> <ul class="simple"> <li>The introductory part (first two chapters) are well written and paced just right to serve as a good introduction to the topic - not too wordy, not too terse.</li> <li>The list of patterns is comprehensive and definitely provides a good starting point for a common language programmers can use to talk about parallel programming. Folks experienced with parallel programming throw around terms like "map", "scatter-gather" and "stencil" all the time; if you want to know what they are talking about, this book has a good coverage.</li> <li>At least some of the examples are interesting and insightful. Especially the first example (Seismic simulation) is enlightening in its use of space-time tiling. This is actually the kind of topic I really wish the book spent more time on.</li> <li>The formatting is good: diagrams are well executed and instructive, and the many code samples use consistent C++ style and are comprehensive.</li> </ul> <p>Now, what I didn't like:</p> <ul class="simple"> <li>First and foremost, and this is a criticism that permeates throughout this review: this book is thinly veiled marketing material for Intel. Except for one OpenCL example, the authors just use OpenMP, TBB, ArBB and Cilk Plus to demonstrate the patterns - all Intel-specific techonologies that are only optimized for Intel CPUs. If you care about more HW than Intel CPUs, or are using a different programming model/library, you're out of luck.</li> <li>The book takes a <em>very</em> narrow view of parallel hardware, which is IMHO unforgivable for something written in 2012. For many parallel workloads these days, GPUs offer a much better performance/cost alternative to CPUs. Naturally, you won't have Intel engineers admit it in their book. GPUs are mentioned only very briefly in the introduction, and then only to shame them being less flexible than CPUs. This is despite the fact that <em>many</em> of the patterns presented in the book are actually perfectly good for GPUs. The authors do mention Intel's MIC, of course. But MIC, 4 years after the book has been written, still very much looks like a fringe technology which is inferior to Nvidia's server-class GPUs for number crunching.</li> <li>The book also takes a very narrow view of software. Some of the Intel technologies it presents are already defunct - like ArBB. Others, like Cilk Plus are so esoteric and rarely used that only Intel still doesn't realize it. TBB is probably the most reasonable of the technologies presented, since it's an open-source library. If the book used plain threads and then showed what TBB brings to the table, it would be significantly more useful, IMHO.</li> <li>The actual "meat" part of the book is extremely short. After the introduction, less than 200 pages are spent listing the patterns, and maybe half of that is dedicated to discussing the idiosyncracies of the specific Intel technologies the authors use to implement them.</li> <li>Distributed computing is completely neglected - only shared-memory models are discussed. If you want to break a task up into multiple machines that don't share memory, this book won't help you much (except a brief mention of map-reduce towards the very end).</li> </ul> <p>Overall, I won't say this is a bad book - there's certainly useful information within it and it's well written. But it's also far from being a great book. Maybe if all your parallelism needs are confined to a single multi-core Intel CPU and you're happy to use one of the Intel technologies it covers, the book can be great. Another audience which can take more from the book is relative beginners who had only basic exposure to parallel programming - then the patterns are truly useful to know about.</p> <p>I'll be happy to hear suggestions about <em>great</em> books on parallel programming. Being such an important topic these days, it's surprising to me that I can't find a single book that's universally recommended.</p> Summary of reading: October - December 20152015-12-31T07:20:00-08:002015-12-31T07:20:00-08:00Eli Benderskytag:eli.thegreenplace.net,2015-12-31:/2015/summary-of-reading-october-december-2015/ …</li></ul> a Pulitzer prize winner! I figured the name is 1776 because that's the year the declaration of independence was signed, but not so. The name is 1776 is because the book only tells about what happened in... that's right, 1776. Which is a tiny sliver of the actual story of the American revolution. To be sure, the story is very well told and enjoyable to read... but, this is not what I was expecting. It would be really awesome if this would be part of an 8-volume series (1775, 1776, 1777 etc.) but to the best of my knowledge it's not. A more to-the-point disappointing fact that the book spent very little on the actual declaration of independence, the process leading to it, and so on. Which is surprising, right? The book instead covers, in excruciating detail, the battle of New York / Brooklyn and the ensuing flight of the continental army to Philadelphia (with some coverage of the siege of Boston in the beginning). So if these specific events interest you in depth, the book is great. Otherwise, this would definitely not be the first (or second, or third) book I'd read on the topic of the revolutionary war.</li> <li>"The Worst Journey in the World" by Apsley Cherry-Garrard - a semi-autobiographic account of Scott's expedition to reach the South Pole in 1911. This book was written in 1922 by one of the expedition's team members, and collects his own diary entries with selected entries from other members (including Scott himself). Overall well written and told - fascinating insight into early Polar exploration. One thing I think was lacking is some more context on how and why certain things were done (i.e. setting up rations, depots and so on) - these things would certainly be very interesting from an engineering/planning point of view.</li> <li>"The Achievement Habit" by Bernard Roth - a huge disappointment. I've fallen for an overly pompous short description and not reading the reviews carefully enough on Amazon. The author of the book is certainly an impressive individual, but instead of a systematic approach I expected, this book is just a biographic diary of a guy reminiscing on his life experiences. Sure, as a general feel-good book it has the usual set of good ideas - but nothing special or insightful.</li> <li>"Do No Harm: Stories of Life, Death, and Brain Surgery" by Henry Marsh - an auto-biographic account of a senior neurosurgeon in the UK about his work experiences, focusing on dealing with patients and with inevitable mistakes. Extremely well written in an emphatic and thoughtful voice. The British directness and sense of humor is also much appreciated. I also think his notes on the difference between public healthcare (NHS in the UK) and private care (in the US and the UK) are poignant. Highly recommended overall, with a word of caution for hypochondriacs - the book describes quite a few very scary medical conditions in gory detail.</li> <li>"Diaspora" by Greg Egan - a "hard science fiction" book (meaning - strong emphasis on scientific detail) following the quest of humanity's descendants (conscious software, what else...) to better understand the universe and probe beyond the edges of known physics. Pretty heavy reading, replete with hard-core physics and mathematics to such extent that it makes one wonder whether the information is real or the author is making it all up. One of the weirder books I've read in a while, for sure.</li> <li>"Battle Cry of Freedom: The Civil War Era" by James McPherson - an amazingly thorough and well-written history of the American civil war. At almost 1000 pages this book is very long and dense, but totally worth it. The author spends a lot of time on all the important aspects surrounding the war - including the political situation that led to it, the economical situation in both South and North before and during the war, the personalities involved and of course all the major battles of the war.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"Dreaming in Code" by Scott Rosenberg</li> <li>"The Making of the Atomic Bomb" by Richard Rhodes</li> <li>"Kafka on the Shore" by Haruki Murakami</li> </ul> Summary of reading: July - September 20152015-10-03T14:05:00-07:002015-10-03T14:05:00-07:00Eli Benderskytag:eli.thegreenplace.net,2015-10-03:/2015/summary-of-reading-july-september-2015/ …</li></ul> of naturally intuitive ideas and pop-culture stereotypes (the whole right-brain vs. left-brain issue, for example). That said, even though most of the book is Tony Robbins-style general feel-good advice, there <em>are</em> a few interesting suggestions in it that are worth reading the book for.</li> <li>).</li> <li> <em>any</em>.</li> <li> <a class="reference external" href="">The Mysterious Island</a> by Jules Verne. "The Martian" is finally something to match it, but transported onto a much more modern setting.</li> <li>.</li> <li>.</li> <li>.</li> </ul> Summary of reading: April - June 20152015-06-30T20:45:00-07:002015-06-30T20:45:00-07:00Eli Benderskytag:eli.thegreenplace.net,2015-06-30:/2015/summary-of-reading-april-june-2015/ …</li></ul>.</li> <li>).</li> <li>"How To Prove It" by Daniel Velleman - <a class="reference external" href="">link to full review</a>.</li> <li>.</li> <li>"Starship Troopers" by Robert Heinlein - Very nice science fiction about a mobile infantry fighter in a futuristic interstellar setting.</li> <li>.</li> <li>.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"The soul of a new machine" by Tracy Kidder</li> <li>"The rational optimist" by Matt Ridley</li> </ul> Book review: "How To Prove It" by Daniel Velleman2015-05-22T06:20:00-07:002015-05-22T06:20:00-07:00Eli Benderskytag:eli.thegreenplace.net,2015-05-22:/2015/book-review-how-to-prove-it-by-daniel-velleman/ …</p> have so many impressions of it that it certainly won't fit into a summary.</p> <p>"How To Prove It" is one such book. It takes upon itself the challenging task of teaching readers how to read, understand and come up with mathematical proofs. An audacious goal for sure, but the author succeeds beautifully.</p> <p>The book starts with basic chapters on mathematical logic, as a prelude to proof-writing - translating simple word problems to logical equations and showing how to reason about them with tools like truth tables, operations on sets and logical quantifiers. In chapter 3, a structured approach to proof writing is presented. The approach is very mechanical - <em>just the way it should be</em>. You're taught to carefully consider and analyze each and every logical statement in your proof. Every statement comes from applying known rules to previously discovered statements. Careful proofs are beautiful things - they leave nothing unexplained. I recall back from my math courses in University, when I was losing points on proofs it was always because I missed to explain some crucial steps. The approach demonstrated in this book is a very nice way to keep track of everything and make sure nothing goes amiss. In fact, as a programmer it's immediately obvious how such proof techniques can be automated (and indeed the author has developed a proof-aid system called Proof Designer, which he talks about in an appendix).</p> <p>After the reader has basic proof-writing tools in his hands, subsequent chapters present more constructs from logic and set theory, with new proof techniques and examples relevant to them. There's a chapter on relations, including closures. Another chapter on functions. Finally, the book ends with a chapter on induction (including strong induction) and a chapter on infinite sets.</p> <p>Throughout the book, there are numerous examples, carefully crafted to take the reader through the material in tiny steps, and encouraging experimentation. There's a huge number of exercises (only some of which have solutions, unfortunately). The exercises smartly progress from very simple to challenging, giving the reader an opportunity to practice proof techniques and really understand how to use the matehmatical tools needed for the task.</p> <p>I thoroughly enjoyed this book - it took me months to work through it, and it was an immensely rewarding experience. I worked through most exercises in the first chapters and at least some in the later chapters. I wish I had more patience to do even more exercises on my own - but it's quite a time-consuming endeavor. Overall, I think that if one is looking for a great book to explain what mathematics is all about, there's no need to look further.</p> <p>I want to emphasize this last point. There are many books out there contending to be "the introduction to mathematics" for sophisticated readers. Almost always, these books turn out to be roller-coaster rides through dozens of mathematical topics, shortly touching on each. IMHO that's absolutely the worst possible approach to present mathematics, and this is where Velleman gets this right. In terms of total presented material, this book wouldn't even cover a single undergrad course in logic and set theory. But you see, there's no need to cover as many topics as possible. On the contrary, that would be detrimental. "How To Prove It" brilliantly chooses a small number of key topics and ideas and really focuses on teaching readers how to undestand them and reason about proofs using them. Since these ideas are fundemental in mathematics, they permeate every topic studied and are universally useful. Not much different from teaching a man to fish as opposed to dumping a truckload of exotic fish on him, really.</p> <p>To conclude, if you are looking for a book to understand what mathematics is about and get a great introduction to mathematical thinking - I highly recommend "How To Prove It".</p> Summary of reading: January - March 20152015-04-02T05:55:00-07:002015-04-02T05:55:00-07:00Eli Benderskytag:eli.thegreenplace.net,2015-04-02:/2015/summary-of-reading-january-march-2015/ …</li></ul> good read, though the ending could be better.</li> <li>"The Paradox of Choice: Why more is less" by Barry Schwartz - one of the most useless books I read recently. Though it was obvious from very early on that this book is going to be disappointing, I bravely plowed through, finally giving up at around 65%. Seriously, I don't need a whole book to tell me how we have many choices in modern life, and how it can be detrimental. Not sure what I was looking for when I set out to read it, really; mental note - be more careful in the future. I suppose for some people this book can be an eye opener (divert them from their foul ways), but not being one to succumb to a plenitude of options myself, I just don't see the point.</li> <li>"The Emperor of all Maladies: A Biography of Cancer" - a very well written book (Pulitzer prizes don't get handed out for nothing), but a very troubling one at the same time. The long chapters about leukemia in children will make any parent squirm, and the general feeling of hopelessness drags throughout this long book. Yes, the final couple of chapters are hopeful, but the vast majority of the book leaves a stronger impression. I wouldn't call the book perfect - some parts (like the legal battles surrounding smoking) could be made much shorter, and I wish some other parts (like the biology of the cancer cell and recent research in general) could be longer. Overall, however, it's a great read.</li> <li>"The Autobiography of Bertrand Russell, Vol I" by Bertrand Russell - Russell was one of the most brilliant minds of the early 20th century, with large contributions to mathematics, philosophy and politics (for some reason, there are fewer such polymaths lately - perhaps because the areas of knowledge became too specialized). This book is the first volume of 3 in his autobiography, from childhood to age ~40 (he lived to 97, so there's plenty more to tell). It's a surprisingly readable and enjoyable book in most parts. I found the letters to be a bit tiresome, especially given that many are addressed <em>to</em> Russell and weren't written by him.</li> <li>"John von Neumann and the Origins of Modern Computing" by William Aspray - a biography of John von Neumann's various contributions to early computing. The man this book describes was extremely impressive - a truly inspiring breadth and depth of knowledge. The book itself is somewhat dry though, somewhat academic with many hundreds of detailed notes and references, and very matter-of-factual presentation. Reads a bit like a long encyclopedia entry.</li> <li>"The Goldfinch" by Donna Tartt - An exceptionally well-written and captivating book. It's obvious that the author made a huge investment in research - there are tons of small but interesting details sprinkled all over. I'm not sure I like the ending, but overall the book is excellent.</li> <li>"Mastery" by George Leonard - in the preface, the author mentions that he wrote this book after being asked by readers for more information following a magazine article he wrote earlier on the same topic. This shows :) Though the book is fairly short (less than 200 pages in a small format), its essence could really be summarized in just a few pages. This is the first part of the book. The rest feels like filler. That said, the essence is well worth the read. The main thesis is that on the path to mastery (e.g. any learning or development journey) what matters is the path itself, not the end goal. This lesson pops up in various guises in self-help materials, but Leonard really has a nice way to describe it and teach it, so I'd say the book is recommended. If you also happen to be an auto-didactic introvert who enjoyed Csikszentmihalyi's "Flow", then this book is a must-have complement.</li> <li>"Quiet: The Power of Introverts in a World That Can't Stop Talking" by Susan Cain (audiobook) - the author sets on a quest to define the differences between introverts and extroverts, and most importantly to help introverts self-validate about their place in life and society. I think this book could be quite important to introvert persons who aren't feeling good about their social skills and are in a defining period of their lives (say high-school to early years of career). For older individuals, the most interesting insights will be about how to properly raise children who are introverts. It's not a bad book in general, but it has some suboptimal properties. For example, trying to cleanly divide the world into two personality types is a stretch, and the gross generalizations that result from this are unavoidable. It also feels rather unscientific in places. And too much bashing of extroverts, IMHO.</li> <li>"The Son" by Philipp Meyer - a very well-written saga about a Texan family, spread over half of the 19th century and most of the 20th. Tells the story from the POV of three different family members - the original patriarch, his son, and his grand-granddaughter. The writing is excellent, though I dislike explicit tricks that make it hard to put a book down (intersperse several plot lines and always end each segment with something that leaves you in suspense). What I found most interesting is the description of life among the Comanches; it's either all imagined or the author did some serious research (I certainly hope for the latter). The ending of the book is kind-of unexpected and anti-climatic, but then again, it also makes sense, in a way.</li> <li>"Make It Stick: The Science of Successful Learning" by P. Brown, H. Roediger and M. McDaniel (audiobook) - the authors present several techniques for more effective learning. The first half (or so) of the book is very interesting and enjoyable - the efficacy of testing as opposed to repeated reading, the advantages of forcing recall and spaced repetition - it all makes a lot of sense and represents the same central theme from different angles. The second part of the book is less useful, IMHO, and feels like filler.</li> <li>"A Student's Guide to Vectors and Tensors" by Daniel Fleisch - a short book that attempts to provide an introduction to the mathematics and uses of vectors and tensors. I mainly picked it up for the latter. While the book is well written in general and does a fairly good job explaining the material, IMHO the division of effort between vectors and tensors is wrong. Vectors are a much more basic and familiar concept, and much easier to relate to the physical world. Hence, spending significantly more time on vectors and providing more examples isn't the best choice here. If this is someone's first exposure to <em>vectors</em>, he's unlikely to get tensors on a first reading of this book. And for someone already familiar with vectors and looking for more information on tensors, the first part of the book is almost useless. I was also disappointed by the lack of rigor - important concepts are presented without any proof or motivation. The exercises do a good job of complementing the material - the online solution manual is awesome, though I felt the exercises are a bit on the easy side.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"A Certain Ambiguity: A mathematical novel" by G. Suri and H. Bal</li> <li>"The varieties of scientific experience" by Carl Sagan</li> </ul> Summary of reading: October - December 20142015-01-01T06:02:00-08:002015-01-01T06:02:00-08:00Eli Benderskytag:eli.thegreenplace.net,2015-01-01:/2015/summary-of-reading-october-december-2014/ …</li></ul> farther species like peacocks. This book is fairly dense for a popular science work, and shouldn't be the first book about evolutionary psychology one reads, IMHO. Also, as the author himself admits, the hypotheses presented within are mostly opinions and attempts of explanations, not proven theories.</li> <li>"Big History - The Big Bang, Life on Earth, and the Rise of Humanity" by David Christian (audio course) - a long, thorough and ambitious course on world history that tries to encompass the whole history of the universe and humanity in one fell swoop. I'd say it's a very good <em>first</em> course on world history for persons not too familiar with the matter, but if you do have some background knowledge already, it would be much more useful to read two other books that cover most of the same material and much more - "The Big Bang" by Simon Singh and "Guns, Germs and Steel" by Jared Diamond.</li> <li>"Killer Angels - The Classic Novel of the Civil War" by Michael Shaara - a very detailed historic novel describing the battle of Gettysburg. The whole book covers just the 3 days of the battle. Extremely well written - no wonder this book won its author the Pulitzer prize.</li> <li>"The Victorian Internet" by Tom Standage - a pretty good book about the history of the telegraph and its various applications and implications on the world in the second half of the 19th century. I wish the first part had a bit more beef in it, though. For instance, the author spends very little time to explain how a telegraph works (even though he mentions it took years to overcome some of the initial technical problems). Not even a table describing the Morse code or its interesting mathematical properties!</li> <li>"Understanding Genetics: DNA, Genes, and Their Real-World Applications" by David Sadava (audio course) - great introduction to genetics and its uses in the modern era. It's hard to get deep into scientific theory in an audio-only course, and I can't really judge how well the lecturer did this here since I was familiar with the material already. But all the stories around the discoveries, and about how genetics are used in medicine and agriculture are fascinating.</li> <li>"The Path Between the Seas" by David McCullough - a very detailed "biography" of the Panama Canal. Although I liked the book overall, I would prefer it focused more on engineering and geography than politics and personalities. I feel some details about the canal's construction and workings were left unexplained, while the accounts of the personalities involved could be made less detailed without real loss in useful content. Still, it's awesome to read a well-written non-fiction book about a topic you know very little about it (I didn't even know about the French involvement!)</li> <li>"Effective Modern C++" by Scott Myers - the long awaited C++11/14 version of the popular "Effective C++" series. My main impression from the book is - gah, C++11 is complex. Yes, a lot of features were added that make C++ more pleasant to write; but on the other hand, some new features like move semantics and forwarding are so complex that Myers dedicates 20% of the book to them. One nagging feeling I couldn't get rid of while reading this book is that, in contrast to the original Effective books, which focused on best practices, this one spends a good chunk of its time explaining the language, which leaves less room to best practices. All in all though, this book is important and very useful for getting acquainted with the new features of C++11. I can think of no better single resource for answering the common question "so, what does C++11/14 bring to the table".</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"Norwegian Wood" by Haruki Murakami</li> <li>"Big Bang" by Simon Singh</li> <li>"The Man Who Changed Everything" by Basil Mahon</li> </ul> Summary of reading: July - September 20142014-10-04T11:23:41-07:002014-10-04T11:23:41-07:00Eli Benderskytag:eli.thegreenplace.net,2014-10-04:/2014/summary-of-reading-july-september-2014/ …</li></ul>.</li> <li>.</li> <li>.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"La Soledad de los Numeros Primos" by Paolo Giordano</li> </ul> Summary of reading: April – June 20142014-06-29T14:39:57-07:002014-06-29T14:39:57-07:00Eli Benderskytag:eli.thegreenplace.net,2014-06-29:/2014/06/29/summary-of-reading-april-june-2014 …</li></ul> mind that people more than 2000 years ago were concerned about very similar things as today, and came up with very similar solutions. I can't say I liked how the book itself was written, but Stoicism is definitely interesting.</li> <li>"Mastery" by Robert Greene - The author explores what it takes to become a master in some profession/craft, and provides mini-biographies of a number of successful historical figures, such as Mozart or Benjamin Franklin. While the biographical parts are pretty interesting, I found the thesis overall not too insightful. Greene tries to demonstrate that mastery comes from practice rather than inborn talent, but his biographical examples mostly show the opposite, it seems to me. That said, the book isn't bad all in all. A particularly positive mention goes to the chapter about social intelligence which is pretty good.</li> <li>"The Five Major Pieces to the Life Puzzle" by Jim Rohn - not what I expected :-/ This book is a brief rehash of common self-help slogans, without a lot of substance.</li> <li>"Got Fight?" by Forrest Griffin - extremely funny and extremely politically incorrect. Don't expect too much real advice from this book - mainly jokes and anecdotes. I've actually liked the last part of the book, where Griffin shows "tips" for actual MMA moves and techniques the least useful. You can't really learn martial arts from a book... If you're up for a quick read and a good laugh, though, this book will certainly deliver.</li> <li>"Dr. Atkins' New Diet Revolution" by Robert Atkins - The new edition of the classic book that launched the low-carb diet revolution. My expectations weren't high here, and I was mainly looking for a more practical complement to Gary Taubes's books, which explain why refined carbs are bad very well, but don't give much in terms of practical advice of how to eat. While Atkins easily hits all the check-points of a sleazy self-help book, in terms of practical advice and todo-s it's not bad. It provides reasonably detailed paths to weight loss and maintenance using a ultra low-carb diet, as well as helpful advice in terms of what foods to use to achieve it. One thing that really bugs me about the book, though, is that its claims of "no caloric restrictions" are disingenuous. On one hand the author says you don't have to count calories at all when limiting carbs; on the other hand, he uses every opportunity to mention that all meals should be small. The "recommended daily menus" at the end of the book are very ascetic indeed. I'd say that it you eat three small meals a day, and your only snack in between is cantaloupe, that's a diet any way you want to call it, because it will be very low calorie too.</li> <li>"Two Scoops of Django: Best Practices For Django 1.6" by Daniel Greenfeld and Audrey Roy - with the surging popularity of Python for web development, and Django being its most widely used framework, this book fills a needed niche. The Django documentation is known for its high quality, but after having gone through it and having built a few simple applications, one may wonder about some of the more advanced techniques used by experienced Django developers. "Two Scoops" aims to provide a wide overview of such techniques. It has a lot of useful information, which can only be fully appreciated when you intend to apply it to a real project. An interesting fact about this book is that it's self published - while the efforts of the authors with this aspect are admirable, the quality leaves something to be desired (both proofreading and in general the way the content is laid out graphically). That said, I've seen lower-quality books from established publishers, so this may not mean much.</li> <li>"The Invisible Man" by H.G. Wells (Audiobook) - A very light and entertaining read. The audio recording available for free on Librivox is outstanding.</li> <li>"Winter of the World" by Ken Follett - Second part of the trilogy, and also very good. The only thing that really bothered me is how involved the main characters are in the key historic events around World War II. I think the author went a bit overboard on this one. I realize it would be difficult to describe these events with the same amount of intimacy, but he did succeed in one of the cases. For example, Greg was a secret service supervisor on the Manhattan project - he didn't have to be one of the scientists in it. A similar role could be carved for the other characters, putting them a bit away from the lime-lights. In general though, it's a very good book.</li> </ul> Summary of reading: January - March 20142014-03-30T16:03:35-07:002014-03-30T16:03:35-07:00Eli Benderskytag:eli.thegreenplace.net,2014-03-30:/2014/03/30/summary-of-reading-january-march-2014 …</li></ul> as an introduction, and read the much longer and more comprehensive "Good Calories, Bad Calories" later if I this one went well. Yeah, it did. The book starts kind-of slow and repetitive and by the end of the first third I wished he would get to the point. But the middle third or so of the book is awesome. What's most awesome about it is the no-nonsense science. At some point after reading a particularly interesting explanation (of how insulin affects fat intake and processing), I flipped open my copy of "Life: The Science of Biology" and read the relevant few pages. It matched exactly, and from that moment I was hooked. I'm not very easily convinced, but this book definitely made me reconsider some notions I wasn't questioning before. The author's points in favor of low-carb diets are definitely convincing, and his attempts to actually unravel and understand relevant research and trials is very commendable. I definitely plan to read "Good Calories, Bad Calories" now to learn more on the hypothesis Taubes presents.</li> <li>"Juvenilia" by Miguel Cané (read in Spanish) - an autobiographic account of the author's years in the prestigious Colegio Nacional de Buenos Aires. Occasionally amusing, I presume this account is much more interesting to persons familiar with the school in question, or at least the situation in Argentina in the 1860s.</li> <li>"Everything I Know" by Paul Jarvis - your typical motivational book, on how one should stop being afraid and JFDI. Can't say I related to it too much. Maybe I should try reading it again some time.</li> <li>"Good Calories, Bad Calories: Fats, Carbs, and the Controversial Science of Diet and Health" by Gary Taubes - The earlier, much more comprehensive and encyclopedic precursor to "Why We Get Fat" by the same author. This book provides the same conclusions, but spends significantly more ink on dissecting the published nutritional research of the past century. The author methodically addresses all the common "best practices" of high-carb, low-fat diet for healthy living. This deeper definitely look makes Taubes's position appear even stronger. I really wish someone would be running the experiments suggested by the author in the epilogue - this definitely seems like an area that needs some proper, peer-reviewed scientific method-backed research.</li> <li>"The Power of Habit" by Charles Duhigg - while the core ideas presented in the first third of the book are interesting, the rest of it felt like an empty filler full with anecdotes that, while well written, have little to do with the main theme of the book. Still, it's an intriguing and enjoyable book to read; just don't expect too much from it.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"Masters of Doom" by David Kushner</li> </ul> Summary of reading: October – December 20132013-12-30T06:00:26-08:002013-12-30T06:00:26-08:00Eli Benderskytag:eli.thegreenplace.net,2013-12-30:/2013/12/30/summary-of-reading-october-december-2013 …</li></ul> .</li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> </ul> Summary of reading: July - September 20132013-10-04T05:43:17-07:002013-10-04T05:43:17-07:00Eli Benderskytag:eli.thegreenplace.net,2013-10-04:/2013/10/04/summary-of-reading-july-september-2013 …</li></ul> . <b>Update Feb 2018:</b> Read the book once again and liked it much more. </li> <li>.</li> <li>.</li> <li>.</li> <li>.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"Travels with Charley" by John Steinbeck</li> <li>"Eat, Drink, and Be Healthy" by Walter C. Willett</li> </ul> Summary of reading: April - June 20132013-07-02T05:47:10-07:002013-07-02T05:47:10-07:00Eli Benderskytag:eli.thegreenplace.net,2013-07-02:/2013/07/02/summary-of-reading-april-june-2013 …</li></ul> .</li> .</li> <li>.</li> <li>"A Winter's Tale" by Stephen King - a rather weird short book. Feels like a creepy experiment for a story-within-a-story writing.</li> <li>.</li> </ul> Summary of reading: January - March 20132013-04-03T20:07:14-07:002013-04-03T20:07:14-07:00Eli Benderskytag:eli.thegreenplace.net,2013-04-03:/2013/04/03/summary-of-reading-january-march-2013 …</li></ul> novels so I don't really have good points for comparison, but this book is very good. The first 3/4ths are excellent, even. Alas, the ending was not entirely to my taste. I think the author could have done better. Overall though, highly recommended.</li> <li>).</li> <li> <em>everything</em> to debt if you try hard enough (try harder and you can link everything to broccoli). Hence I found the book lacked a meaningful theme, and isn't written in a particularly interesting way. Perhaps it appears more exciting to people with interest in certain domains of economics.</li> <li>.</li> <li>"Memories after my death" by Yair Lapid (read in Hebrew) - a biography of the author's father, <a class="reference external" href="">Yosef (Tommy) Lapid<.</li> <li>.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"A pale blue dot" by Carl Sagan</li> </ul> Summary of reading: October – December 20122012-12-29T06:00:01-08:002012-12-29T06:00:01-08:00Eli Benderskytag:eli.thegreenplace.net,2012-12-29:/2012/12/29/summary-of-reading-october-december-2012 …</li></ul> .</li> <li>"Thomas Jefferson" by R. B. Bernstein - a well-written biography of Thomas Jefferson.</li> <li>.</li> <li>.</li> </ul> Summary of reading: July - September 20122012-09-30T17:55:59-07:002012-09-30T17:55:59-07:00Eli Benderskytag:eli.thegreenplace.net,2012-09-30:/2012/09/30/summary-of-reading-july-september-2012 …</li></ul> .</li> <li>.</li> <li>...</li> <li>.</li> <li>"The Abysmal Brute" by Jack London - a nostalgic return to a book I remember fondly from childhood. I don't say this often, but it's a shame this book isn't actually <em>longer</em>. London hit on a very good story here, and he could've easily developed it into a full-length novel.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"Of mice and men" by John Steinbeck</li> <li>"Cannery row" by John Steinbeck</li> </ul> Summary of reading: April - June 20122012-06-30T11:55:32-07:002012-06-30T11:55:32-07:00Eli Benderskytag:eli.thegreenplace.net,2012-06-30:/2012/06/30/summary-of-reading-april-june-2012 ></ul> > <li>"Why evolution is true" by Jerry A. Coyne - Discusses the evidence for evolution found by science in the past couple of centuries, examining the fossil record, biogeography, embryology, suboptimal design, etc. A well written book - similar to Dawkins, but without the militant tone.</li> <li>"Rational optimist" by Matt Ridley - <a class="reference external" href="">link to full review</a>.</li> <li>"On wings of eagles" by Ken Follett (read in Spanish) - It's probably since it's based on real events, but this novel has much fewer twists and turns than I expected from a typical "thriller". This is kind-of nice because it makes the book less artificial. On the other hand, stretching the plot to 600 pages is really too much. Do authors get paid by paper weight these days? Otherwise, I don't see why a 250-page book wouldn't do for this story.</li> <li>"How to Ace the Brainteaser Interview" by John Kador - a nice collection of brain teasers. My only gripe is that the solutions are printed right after the questions, so it's hard to read the question without involuntarily seeing the solution (at least in the Kindle version). I had to use a piece of paper sliding down the page to hide the solution while reading the question.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"The Pastures of Heaven" by John Steinbeck</li> </ul> Book review: "The rational optimist" by Matt Ridley2012-05-27T04:51:48-07:002012-05-27T04:51:48-07:00Eli Benderskytag:eli.thegreenplace.net,2012-05-27:/2012/05/27/book-review-the-rational-optimist-by-matt-ridley …</p> contrast objectively puzzling, then this book is for you.</p> .</p> <p>Fully a quarter of the book is a section of references for all the data he presents, so the author did his reading - these are not just statements made up of thin air - there are books, articles and otherwise published research to back up every claim.</p> <p.</p> <p>The warning above notwithstanding, I must admit I'm completely sold on the authors' ideas, simply because the basic view of life he tries to promulgate <em>exactly parallels</em>> <p>Phew, enough philosophy. Now I want to list a few quotes I found really interesting in this book:</p> <p><a class="reference external" href="">"Ricardo's law"</a> - noted by David Ricardo in 1817 [emphasis here and in other quotes is mine]:</p> <blockquote> <p.</p> <p. <strong>This exchange might even take place, notwithstanding that the commodity imported by Portugal could be produced there with less labour than in England</strong>..</p> </blockquote> <p>From page 276, about the limits of knowledge. This should be useful when explaining to others why software is so complex:</p> <blockquote> The wonderful thing about knowledge is that it is genuinely limitless. There is not even a theoretical possibility of exhausting the supply of ideas, discoveries and inventions. <strong>This is the biggest cause of all for my optimism</strong>. It is a beautiful feature of information systems that they are far vaster than physical systems: the combinatorial vastness of the universe of possible ideas dwarfs the puny universe of physical things.</blockquote> <p>A quote borrowed from John Stuart Mill:</p> <blockquote> I have observed that not the man who hopes when others despair, but the man who despairs when others hope, is admired by a large class of persons as a sage.</blockquote> <p>It kind-of makes sense that being cautious and pessimistic is an evolutionary advantage, but apparently this has backing in actual gene frequencies as well:</p> <blockquote>.</blockquote> Summary of reading: January - March 20122012-04-05T04:30:46-07:002012-04-05T04:30:46-07:00Eli Benderskytag:eli.thegreenplace.net,2012-04-05:/2012/04/05/summary-of-reading-january-march-2012 …</li> second part is about him reaching Israel, studying in several Yeshivas and moving higher and higher in the rung of religious leadership. Most of the last chapters of the book are spent on various events he recalls from his life, meeting political and religious leaders around the world. Although I can certainly admire Mr. Lau's achievements and obvious intelligence, I couldn't shake the feeling while reading the book that we just live in different ideological worlds. I suppose he would think the same, since he explicitly lists atheism as one of the humanity's biggest enemies, together with AIDS, cancer, nuclear weapons and crime. The book itself is reasonably readable, although it's tiring to read too much of it in a single sitting, so I smeared its reading over a couple of months.</li> <li>"Being Geek" by Michael Lopp - based on the blog "Rands in repose", this book claims to be "the software developer's career handbook". In my opinion this mostly applies to developers who plan to become managers, and even more so to fresh development managers who just recently stopped being engineers. The book is essentially a loose collection of blog posts, and thus the chapters wildly vary in quality. Some chapters are insightful, and some are just a waste of time. In addition, the book is written in a very specific style that may be entertaining and fun for some people, while being unbearable for others. I'm closer to the latter in this spectrum :) To conclude, depending on your style preferences and career goals you may either like or hate this book. Personally, I don't feel I gained very much by reading it.</li> <li>"Crucial Conversations" by K. Patterson, J. Grenny, R. McMillan and A. Switzler - self improvement books usually have a very clear pattern. There's an idea or two, that would perhaps take a 10-15 page article to describe. Then, for the sake of book-format-publish-ability, that idea is smeared over 200 pages in generous font with generous margins and a few meaningless diagrams. The absolute <em>key factor</em>, however, is whether the original idea is really worth knowing about. If it is, then wasting an extra few hours on such a book usually still pays off. If it isn't, well then you get pissed. Anyway, "Crucial Conversations" is basically a caricature of the smeared-to-a-book idea I was talking about. On the other hand, the basic ideas it tries to get across are pretty good. So, I do recommend to read it, especially to those who don't find conversations (with other human beings) easy.</li> <li>"The Last Song" by Nicholas Sparks (read in Spanish) - a typical Sparks novel. Not bad as far as these books go, although the third quarter is a drag. The end was worth it, however. I read it for the Spanish, of course ;-)</li> <li>"Genes, Peoples and Languages" by Luigi Luca Cavalli-Sforza - I picked this one up because several books I enjoyed in the past referred to Cavalli-Sforza as a father of a lot of ground-breaking research on genetics, population migrations and linguistics in the second half of the 20th century. So this was an attempt to read some material "straight from the horse's mouth". I must admit I didn't enjoy it, though. Cavalli-Sforza writes with confidence and ambition, but not in a very readable way. This book tries to hit somewhere between popular science and a textbook, and misses both targets.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"Exodus" by Leon Uris</li> </ul> Summary of reading: October – December 20112011-12-30T08:08:47-08:002011-12-30T08:08:47-08:00Eli Benderskytag:eli.thegreenplace.net,2011-12-30:/2011/12/30/summary-of-reading-october-december-2011 …</li></ul> Akunin (read in Russian) - this book's English translated name is "The winter queen". The first in the Fandorin series, a fast-moving detective story. Although enjoyable overall, I found this book a bit chaotic, with a lot of events happening too quickly.</li> <li>"The Little Golden Calf" by Ilf and Petrov (read in Russian) - another satirical novel by these authors, somewhat similar to the "12 chairs" (and featuring the same protagonist). Also fun to read, addressing pretty much the same issues of early Soviet society. Somewhere in the second half it gets a bit slow and unfocused, but eventually the ending is very good.</li> <li>"Start-up Nation, The Story of Israel's Economic Miracle" by Dan Senor and Saul Singer - The main goal of this book is to explore the phenomenon that is Israel's technological start-up scene (Israel has by far more per-capita startups and VC investments than any other nation on earth). It appears to be very well researched (with dozens of references, with sources, for each chapter) and also is quite well written. We Israelis have a lot of self-criticism, so I was fearing this book would be romanticizing Israel too much. Well, it does romanticize, but not <em>too</em> much :-) The authors also succeed in staying a-political, which is very important - keeping it objective can actually reach larger audiences. Nevertheless, no matter how you look at it, the book is awesome PR for Israel, which probably explains why they got Shimon Peres writing a forward for it, and interviews with many seniors in the country.</li> <li>"The Art of Learning" by Josh Waitzkin - an auto-biographic book by a very impressive guy who was junior US chess champion for a few years, and as an older man (in his 20s) became a world champion in the combat form of Tai Chi Chuan. His aim in the book is to explain some of the insights he gained about learning and reaching excellence. The book is very well written and fun to read. However, I didn't really find anything too profound in the author's advice. The only real lesson I learned from this book is that to be really good, in addition to talent, one needs a lot of dedication and hard work. It's incredible how much time Waitzkin was spending on chess as a child, and how much time he was spending on his martial arts training later in life, constantly striving to improve. Finishing one training day, pondering about his weak points and working for hours on each of them in the next days. It really shows you what it takes to be among the best in your craft. <em>That's</em> inspiring.</li> <li>"The Young Yagers" by Mayne Reid - this was a nostalgic read for me. I fondly recall days spent engrossed in Mayne Reid's adventure books as a kid, so I decided to give it a shot as an adult. Alas, I probably picked the wrong book - this one is a loose collection of hunting stories (with apparent continuity, but really it's unrelated stories strung together). Each story in itself is fun, very well written, with a lot of interesting details about the flora and fauna of Africa. But when reading them in succession, the stories quickly become boring so I had to pace my reading (reading no more than two or three stories per day). Mayne Reid is a very good writer - I just have to find one of his better books.</li> <li>"The Greatest Show on Earth" by Richard Dawkins - Dawkins's attempt to explain in layman language why evolution is correct and creationism is not. His style is characteristically militant towards creationism, and similarly to "The God Delusion" it's hard to envision a devoted creationist reading past the first few pages. If anything is preaching to the choir - it's this book. Not that there's anything wrong with it, and the author freely admits that the main goal of the book is to provide "intellectual ammunition" to proponents of evolution in debates versus creationists. I personally am not really into debates, but I enjoyed this book a lot. It truly has a lot of interesting scientific information in it, written in Dawkins's typical lively style; it's lots of fun to read. The parts I liked best are the one about the (non-)missing fossil record, and the one about various imperfections in living bodies and why they exist.</li> </ul> Summary of reading: July – September 20112011-10-01T09:28:23-07:002011-10-01T09:28:23-07:00Eli Benderskytag:eli.thegreenplace.net,2011-10-01:/2011/10/01/summary-of-reading-july-september-2011 …</li></ul> long-term effects of different kinds of foods and vitamins on our health. The author bases most of his conclusions on just a small number of trials, and freely admits that new trials often turn some recommendations around.</li> <li>"From the Wilderness and Lebanon" by Asael Lubotzky (read in Hebrew) - a memoire of the 2006 Israel-Hizbullah war, from the eyes of a young infatry lieutenant. Asael tells about the missions his platoon was assigned, and later about his recovery from difficult injury when an anti-tank missile hit the vehicle he was riding in. The book is of variable quality, and isn't suitable for understanding the war since it explicitly focuses on the experiences of the author. On the other hand, it's a relatively unique account of military action as seen by a low-rank field commander.</li> <li>"Krakatoa" by Simon Winchester - an account of the great Krakatoa volcano eruption in 1883. Well written, although tiresome at times. The relevant details of plate tectonics could be explained much better, I think.</li> </ul> Summary of reading: April – June 20112011-07-01T09:14:59-07:002011-07-01T09:14:59-07:00Eli Benderskytag:eli.thegreenplace.net,2011-07-01:/2011/07/01/summary-of-reading-april-june-2011 …</li></ul> , but unfortunately doesn't quite cut it. As the author keenly admits in the beginning, the topic he's tackling is just too complex, and the book is meant more as "food for thought" than rules to live by. Personally, I didn't find anything new in there. I'm well familiar with gene and meme theories from Dawkins, evolutionary psychology, as well as Csikszentmihalyi work on flow. So it was all familiar material rehashed in different words.</li> <li>"Core Java Programming, Vol I" by Cay S. Horstmann and Gary Cornell - a pretty good introductory book on Java, aimed at programmers experienced in other languages. Provides a well-collected overview of the most important aspects of Java, with glimpses into specialty topics such as GUI programming and applet deployment. I especially liked the "C++ Note" side-notes, in which the authors spent a couple of sentences explaining how the topic at hand translates to C++. Really helpful.</li> <li>"The Twelve Chairs" by Ilf and Petrov (read in Russian) - a satirical treasure-hunt novel from the 1920s in early Soviet Union. Usually satirical novels from very different periods don't make much sense, but here the authors managed to make fun of enough universal human principles to make the story very funny and relevant even today. I found the last third of the book a bit more boring than the first two, but overall it was very enjoyable.</li> </ul> Book review: "The Voyage of the Beagle" by Charles Darwin2011-04-17T06:20:31-07:002011-04-17T06:20:31-07:00Eli Benderskytag:eli.thegreenplace.net,2011-04-17:/2011/04/17/book-review-the-voyage-of-the-beagle-by-charles-darwin …</p> influenced him greatly in forming the theory of evolution. Naturally, if you expect a "aha, evolution" chapter, you'll be disappointed. The theory itself was formed by Darwin long after the voyage - but in his writing you can see signs of first understanding that could lead to its formation. It's important to keep in mind (and is obvious from the reading) that "The Voyage" is an important scientific work in itself. Even without playing the role in the formation of the theory of evolution, this book must have advanced a few sciences forward - botany, zoology and geology - since Darwin went into places no such scientist went before, and documented his findings (as well as collecting samples) very thoroughly. </p><p> Even putting its historical and scientific value aside, it's a pretty good book. Travel writing is popular nowadays - but who can boast a 5-year journey around the world, most of which was spent on land trekking into the then-almost-unknown territories (mainly in South America)? Darwin's writing style is surprisingly readable here, and apart from longish digressions into botanical observations, the book is interesting and easy to follow. </p><p> Curiously, reading the book I couldn't help thinking how great a hacker Darwin would be had he lived today :-) The man had an insatiable appetite for discovery, tinkering, exploring and cogitating. He also displays comprehensive knowledge and deduction skills in several fields of science. That he was in his early twenties during this journey simply amazes me. Clearly, from the theory he formed it is obvious that Charles Darwin was a genius, but reading this book it just strikes you again and again. </p><p> Darwin's personality also shows itself through the book. In many ways, it is heavily influenced by his upbringing and the historic period he lived in. While in a lot of aspects Darwin is an enlightened human being, in some he wouldn't be considered politically-correct nowadays. One example is his account of the savage indigenous populations, whom Darwin perceived to be lower in grade than domestic animals. Another, his cheerful account of knocking various animals on the head with his geological hammer to examine them. On the other hand, what would you expect from a naturalist exploring uncharted territories with animals that weren't properly documented until that day. </p><p> To conclude, I really enjoyed this book. I will admit I skimmed through a few tiring sections here and there, but overall it was a good read. </p> Summary of reading: January - March 20112011-04-02T12:03:49-07:002011-04-02T12:03:49-07:00Eli Benderskytag:eli.thegreenplace.net,2011-04-02:/2011/04/02/summary-of-reading-january-march-2011 …</li></ul> almost grotesque towards its end. Not to say that I didn't like it at all, I did, but less than his other books.</li> <li>"Why do babies cry?" by Sivan Ofiri and Irit Shaked (read in Hebrew) - This book is full of strange conflicts. On one hand it tries to preach a scientific approach - teaching the reader to take into account the tribal hunter-gatherer origin of homo sapiens when approaching the way babies should be handled. On the other hand it's full of ridiculous, unexplained homeopathic and naturopathic tips. It looks like someone edited the book and sprinkled "Bach flowers / Rescue Remedy can help" all around it. Even ignoring these parts, the book is of mixed quality. While tedious at times, it does contain some advice that sounds useful here and there.</li> <li>"The Boy Who Harnessed the Wind" by William Kamkwamba and Bryan Mealer - An inspiring story of a science-savvy kid in poverty-stricken Malawi building a windmill to generate electricity for his family's home, using junk from the scrapyard and a bicycle dynamo. Well written and fun to read.</li> <li>"Writing Places" by William Zinsser - A condensed autobiography, focused on the different places in which he did his writing. As you would expect from Zinsser, it's extremely well-written. No matter what he writes about, it makes a good reading.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"The Demon-Haunted World" by Carl Sagan</li> </ul> Summary of reading: October - December 20102010-12-31T06:42:34-08:002010-12-31T06:42:34-08:00Eli Benderskytag:eli.thegreenplace.net,2010-12-31:/2010/12/31/summary-of-reading-october-december-2010 …</li></ul> actually did. I feel that Murakami left too many loose ends there, whether on purpose or not.</li> <li>"Large-Scale C++ Software Design" by John Lakos - Now here's a book that shows its age. Written when CFRONT was the most popular C++ compiler, the book ignores namespaces (which weren't widely supported at the time), advising to use C-style prefixes instead. It's also oblivious to IDEs or at least some Vi/Emacs + ctags combination, teaching to grep through code. Although there are some very good parts in it (in particular chapter 5 with its concrete examples of decoupling, and the general idea of thinking in terms of components), most of the book is very tedious and repetitive. I frankly don't think it's worth the time invested into reading all of it. For a good sample, read the intro on components and then chapter 5.</li> <li>"Sefarad" by Antonio Muñoz Molina (read in Spanish) - A collection of short stories, mostly about the persecution of Jews in Europe by the Nazis before and during WWII, and about people in the Soviet Union hunted by the regime after the war. The stories vary in quality - some are very hard to follow.</li> <li>"Son of Hamas" by Mosab Hassan Yousef (read in Hebrew) - The author is the eldest son of one of Hamas's founders - Sheikh Hassan Yousef. He tells his story of growing up in Ramallah in the 1970s and 1980s, and how he became a Shin Bet agent undercover in the Hamas leadership. The book is not bad overall, although I found the writing somewhat naive.</li> <li>"Continuous Integration: Improving Software Quality and Reducing Risk" by Paul Duvall, Steve Matyas, Andrew Glover - Covers CI quite well from all possible angles. While parts of the book were definitely interesting and worth a read, I think CI is just not a large enough topic to justify a 350+ page book. So it's quite repetitive and a bit tedious. A long, succinct article would probably be more suitable.</li> <li>"One wish to the right" by Eshkol Nevo (read in Hebrew) - an enjoyable novel about a group of friends living their lives in Israel in the 1990s and early 200s.</li> </ul> Summary of reading: July – September 20102010-10-02T14:52:58-07:002010-10-02T14:52:58-07:00Eli Benderskytag:eli.thegreenplace.net,2010-10-02:/2010/10/02/summary-of-reading-july-september-2010 …</li></ul> it - the scope is too wide. Not badly written, but in reality quite a useless book in these days of Wikipedia.</li> <li>.</li> <li>.</li> <li>"Programming Collective Intelligence" by Toby Segaran - Packs an incredible amount of information into a relatively compact book - a lot of very useful algorithms for common problems of machine learning. What I liked less is the "ad-hoc"-ish un-idiomatic Python code.</li> </ul> <p>Re-reads:</p> <ul class="simple"> <li>"Les Miserables" by Victor Hugo</li> <li>"The seven daughters of Eve" by Bryan Sykes</li> <li>"The journey of man" by Spencer Wells</li> <li>"Flow – the psychology of optimal experience" by Mihály Csíkszentmihályi</li> </ul> Summary of reading: May - June 20102010-07-08T19:33:37-07:002010-07-08T19:33:37-07:00Eli Benderskytag:eli.thegreenplace.net,2010-07-08:/2010/07/08/summary-of-reading-may-june-2010 books may encourage me to write fuller reviews, which I'll just do in the usual manner. So, for May - June 2010 the list is: <ul class="simple"> <li>"On Writing Well" by William Zinsser - A good book on improving one's writing skills.</li> <li>"Burning Bright" by John Steinbeck - An unusual short story structured as a play. Quick and fun to read.</li> <li>"The Solitude of Prime Numbers" by Paolo Giordano (read in Spanish) - a sad story, beautifully written. Though rough-edged in some places, this book was very enjoyable.</li> <li>"Sweet Thursday" by John Steinbeck - Sequel to "Cannery Row". Though somewhat less original, it's still fun to read.</li> </ul> <p>Re-reads (books I've already read in the past, and have recently re-read):</p> <ul class="simple"> <li>"Programming Pearls" by Jon Bentley</li> <li>"The Moral Animal" by Robert Wright</li> </ul> Book review: "Time to Kill" by John Grisham2010-05-15T07:12:51-07:002010-05-15T07:12:51-07:00Eli Benderskytag:eli.thegreenplace.net,2010-05-15:/2010/05/15/book-review-time-to-kill-by-john-grish …< great for this purpose - when the plot is flowing, the chances to enjoy the book in spite of not understanding some words is higher. </p><p> So without further ado, here's the review for "Time to Kill". I've seen the movie a couple of times, and actually expected the book to be really good. Unfortunately, I was direly disappointed. This book's plot is very shallow, almost as shallow as the characters in it. I don't remember another book I've recently read where there was not a single likable character to connect to. Jake Brigance certainly isn't - he's an irritable, irksome and unstable individual without any real apparent talent. </p><p> The plot itself is patchy - with some leads starting and never ending. Ellen Roark is one great example - she became an integral part of the plot and then completely vanished after the attack on her. Yes, Grisham mentioned a short visit to the hospital and a phone call in the end of the book, but this was hardly enough. </p><p> Another qualm is about the trial itself. The whole book prepares you to the trial, but then leaves you disillusioned. As far as trial books and movies go, this one was extremely shallow, with barely an interesting twist and skill displayed. From Jake's performance in the trial, I would think he's a lousy attorney who won it by chance and not the hero Grisham tries to show him as. </p><p> To conclude, I've read much better fiction. "Time to Kill" was Grisham's first novel and it does show a few positive signs - sometimes the plot manages to grip you, but overall it's not a book I would recommend to anyone trying to actually get something from the reading. Its only utility is to practice a foreign language or just to kill some time (sic). </p> Book review: "The Power of Babel" by John McWhorter2010-05-05T17:03:34-07:002010-05-05T17:03:34-07:00Eli Benderskytag:eli.thegreenplace.net,2010-05-05:/2010/05/05/book-review-the-power-of-babel-by-john-mcwhorter …</p> to parallel to biological evolution. Much of the book discusses how languages change by combining, splitting, simplification and so on, with many examples taken from a multitude of languages - some well known, and some most people have never heard of. </p><p> A fascinating topic in linguistics is pidgin and creole languages. Since there are relatively many examples of the formation of such languages in recent history (mainly after the beginning of the European expansion in the 15th century), the topic has been studied well, and the author dedicates many pages to it. </p><p> Another thing I found really interesting is the discussion of the relative complexities of languages. Modern languages (especially the European ones) are much simpler than many primitive languages. As the author says (and his examples powerfully demonstrate), some of the world' languages are so complicated that one has to wonder how anyone is able to speak them. One example is a native-American language spoken in the north-western part of the U.S.A. that's so convoluted that children learn to fully speak correctly it with all the nuances only at the age of 10. There are actually reasons for this being so, and they are presented in the book. </p><p> There's a lot more to say about this book, but I'll stop here. It's definitely recommended. </p> Book review: "Marina" by Carlos Ruiz Zafón2010-03-25T19:39:44-07:002010-03-25T19:39:44-07:00Eli Benderskytag:eli.thegreenplace.net,2010-03-25:/2010/03/25/book-review-marina-by-carlos-ruiz-zaf …< Shadow of the Wind". And now it heralds the subtitle "the unforgettable history that preceded The Shadow of the Wind", although it's a completely different story. OK, so the Spanish publishers use all the same dirty tricks as American ones :-) </p><p> As commonly happens with such books (again, Dan Brown is an example) - they are quite similar. Indeed, "Marina" is in many senses similar to "The Shadow of the Wind". It has a teenage protagonist that narrates the novel, a touching love story, a mystery that gets uncovered step by step, and it's also somewhat dedicated to the city of Barcelona. It's a much shorter book (only 290 pages in a rather small format), but has most of the basic elements. It's almost as if this was Zafón practicing for his real novel. </p><p> But enough comparisons. By itself, "Marina" is a pretty good book as well. Nicely written, interesting and unusual. It delves into science fiction a little, which I didn't like, but other than that it was quite enjoyable. </p> Book review: "The Wayward Bus" by John Steinbeck2010-03-20T12:08:09-07:002010-03-20T12:08:09-07:00Eli Benderskytag:eli.thegreenplace.net,2010-03-20:/2010/03/20/book-review-the-wayward-bus-by-john-steinbeck …</p> couple of their servants. In the morning, they manage to continue their journey through bad weather and a detour around a shaky bridge. </p><p> Like many books written a while ago, it can be probably appreciated best in a historic context. However, even if you aren't familiar with that period (which is likely for readers in 2010, and for me as well), you can't help falling for Steinbeck's amazing skills in presenting his characters. Each one is unique, with an interesting story, depicted well enough in just a few pages to allow a connection with the reader. In character development, Steinbeck is an unsurpassed master, at least from my point of view. </p><p> While probably not one of my favorite Steinbeck works, it's certainly a good one - a recommended read. </p>
|
https://eli.thegreenplace.net/feeds/book-reviews.atom.xml
|
CC-MAIN-2019-22
|
refinedweb
| 23,607
| 51.28
|
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
Click on Kanban doesn't work after auto refreh kanban view in Odoo9
Hello,
I have set the auto refresh for the kanban view.
Auto refresh is doing well but when it auto refresh, i can't able to click on kanban view.
Here is my code for auto refresh :-
def return_action(self):
model_obj = self.env['ir.model.data']
data_id = model_obj._get_id('work_order', 'filing_main_menu_bag_detail_kanban_view')
view_id = model_obj.browse(data_id).res_id
return {
'type': 'ir.actions.act_window',
'name': _('Bag Detail'),
'res_model': 'bag.detail',
'view_type': 'kanban',
'view_mode': 'kanban',
'view_id' : view_id,
'target': 'current',
'nodestroy': True,
}
Hope for satisfactory reply.!
|
https://www.odoo.com/forum/help-1/question/click-on-kanban-doesn-t-work-after-auto-refreh-kanban-view-in-odoo9-99342
|
CC-MAIN-2016-44
|
refinedweb
| 127
| 51.85
|
Setup Network¶
A
distributed network consists of one
Scheduler node and several
Worker nodes. One can set these up in a variety of ways
Using the Command Line¶
We launch the
dscheduler executable in one process and the
dworker
executable in several processes, possibly on different machines.
Launch
dscheduler on one node:
$ dscheduler Start scheduler at 192.168.0.1:8786
Then launch
dworker on the rest of the nodes, providing the address to the
node that hosts
dscheduler:
$ dworker 192.168.0.1:8786 Start worker at: 192.168.0.2:12345 Registered with center at: 192.168.0.1:8786 $ dworker 192.168.0.1:8786 Start worker at: 192.168.0.3:12346 Registered with center at: 192.168.0.1:8786 $ dworker
The convenience script
dcluster opens several SSH connections to your
target computers and initializes the network accordingly. You can give it a
list of hostnames or IP addresses:
$ dcluster 192.168.0.1 192.168.0.2 192.168.0.3 192.168.0.4
Or you can use normal UNIX grouping:
$ dcluster 192.168.0.{1,2,3,4}
Or you can specify a hostfile that includes a list of hosts:
$ cat hostfile.txt 192.168.0.1 192.168.0.2 192.168.0.3 192.168.0.4 $ dcluster - s = Scheduler(loop=loop) s.start(port)
On other nodes start worker processes that point to the IP address and port of the scheduler.
from distributed import Worker w = Worker('192.168.0.1', 8786, loop=loop) w.start(0) # choose randomly assigned port
Alternatively, replace
Worker with
Nanny if you want your workers to be
managed in a separate process by a local nanny process.
If you do not already have a Tornado event loop running you will need to create and start one, possibly in a separate thread.
from tornado.ioloop import IOLoop loop = IOLoop() from threading import Thread t = Thread(target=loop.start) t.start()
Using Amazon EC2¶
See the EC2 quickstart for information on the
dec2 easy setup
script to launch a canned cluster on EC2.
|
http://distributed.dask.org/en/1.10.2/setup.html
|
CC-MAIN-2021-17
|
refinedweb
| 348
| 66.74
|
You can import data from a database into MATLAB® using the Database Explorer app or the command line. To select data for import, you can build an SQL query visually by using the Database Explorer app. Or, you can use the command line to write SQL queries. To achieve maximum performance with large data sets, use the command line instead of the Database Explorer app.
After importing data, you can repeat the steps in the process, such as connecting to a database, executing an SQL query, and so on, by using a MATLAB script to automate them.
To open multiple connections to the same database simultaneously, you can create multiple SQL queries using the Database Explorer app. Or, you can connect to the database using the command line.
If you do not have access to a database and want to import your data quickly, you can use the MATLAB interface to SQLite. For details, see Working with MATLAB Interface to SQLite.
If you have minimal proficiency writing SQL queries or want to browse the data in
a database quickly, use the Database Explorer app. To build queries,
see Create SQL Queries Using Database Explorer App. After
creating a query using the Database Explorer app, you can generate the SQL code for
the query. For details, see Generate SQL Query. You can
embed the generated SQL code into the SQL query that you specify in the
fetch function. Or, you can create
an SQL script file to use with the
executeSQLScript function.
If you want to automate the current task after you create the SQL query, then generate a MATLAB script. For details, see Generate MATLAB Script.
If you are not familiar with writing SQL queries, then use the Database Explorer app to select data to
import from your database. Or, you can use the
sqlread function at the command line. This function needs only a
database connection and the database table name to import data. Furthermore, the
sqlread function does not require you to set database
preferences.
If you know how to write SQL queries, you can write basic SQL statements as character vectors or string scalars. For a simple example, see Import Data from Database Table Using sqlread Function.
When writing SQL queries, you can import data into MATLAB in one of two ways. Use the
select function for maximum memory
efficiency and quick access to imported data. Or, use the
fetch function to import numeric
data with double precision by default or define the import strategy for the SQL
query.
For memory management, see Data Import Memory Management.
If you have a stored procedure that imports data, then use the
runstoredprocedure or
fetch functions.
When importing data from a database, Database Toolbox™ functions return custom data types, such as Oracle® ref cursors, as Java® objects. You can manually parse these objects to retrieve their data
contents. Use the
methods function to access all the
methods of a Java object. Use the available methods to retrieve data from a Java object. The steps for your object are specific to your database. For
details, refer to your JDBC driver or database documentation.
If you have a long SQL query or multiple SQL queries that you want to run
sequentially to import data, create an SQL script file containing your SQL queries.
To execute the SQL script file, use the
executeSQLScript function. If you have SQL queries stored in
.sql or text files that you want to run from MATLAB, you can also use this function.
database |
fetch |
sqlread
|
https://ch.mathworks.com/help/database/ug/data-import-using-database-explorer-app-or-command-line.html
|
CC-MAIN-2021-39
|
refinedweb
| 588
| 64.61
|
Back to article
I know, I know. All the cool kids have been doing it for months now.
I'm way behind. But a couple of weeks ago, I finally took the plunge
and signed up for Twitter.
I was skeptical, but it's actually pretty interesting. It didn't take
me long to build up a list of people to follow. But I didn't want
to keep a tab open in my browser all day every day just to check on them.
I investigated the available Linux clients,
but none was quite what I was looking for. Either they required that
I install some big piece of infrastructure like Mono or Adobe Air,
or they had nasty bugs.
And then I found out about Python-Twitter. It provides Python
bindings for Twitter's public API, and it's super easy to use.
Turns out you can write your own Twitter client in less time than
it takes to read about the existing ones!
The first step is installing Python-Twitter. Cutting-edge distros like
Ubuntu 9.04 have it as a package; install python-twitter and you're done.
On older distros, you might have to get it from the source.
First, you'll need a package called SimpleJSON. Your distro
may have it already (it might, even if it doesn't have python-twitter.
But if not, get it from cheeseshop.python.org/pypi/simplejson.
Then go to
code.google.com/p/python-twitter,
click on Downloads
and grab the latest tarball. Expand it, build and install:
tar xvf python-twitter-0.6.tar.gz
cd python-twitter-0.6/
python setup.py build
sudo python setup.py install
Once it's installed, you're ready to start programming.
To do anything with Twitter, you need to log in with a username
and password. That looks like this:
import twitter
username = "your_username"
password = "your_password"
api = twitter.Api(username=username, password=password)
Once you're logged in, you can get your "friends timeline" -- the
list of tweets you'd see if you went to twitter.com and logged in --
like this:
statuses = api.GetFriendsTimeline(username)
statuses is a list of the Python-Twitter status object.
A status represents one tweet, and has properties like user
(the author), text (the contents) and
created_at_in_seconds (the time when it was posted).
user is an object representing a Twitter user.
It has properties like name, screen_name and
profile_image_url.
So you can print that list of tweets like this:
for s in statuses :
print s.user.name, "(", s.user.screen_name, ") :"
print s.text
print
Well, almost. There's one more thing you need to do. Tweets
come in as unicode, and Python's print can't handle
unicode characters unless you tell it how to encode them.
If you're not sure, UTF-8 is a good choice:
for s in statuses :
print s.user.name.encode("utf-8"), "(", s.user.screen_name, ") :"
print s.text.encode("utf-8")
print
Run it, and you'll see output something like this:
/var/lib/python-support/python2.6/twitter.py:10: DeprecationWarning: the md5 module is deprecated; use hashlib instead
import md5
donttrythis ( donttrythis ) :
I'm playing with a Canon 5D Mark II if you must know @el_es_gato. The footage it shoots is UNBELIEVABLE! I'm gonna make me a film.
Tim O'Reilly ( timoreilly ) :
Chevron patents on NiMH batteries holding back future of plug-in hybrid cars. Learned about in conversation at #aif09
PZ Myers ( pzmyers ) :
I have now been traveling for a full 24 hours...and still 2 hours from home.
Wall Street Journal ( WSJ ) :
Companies, Workers Tangle Over Law
Science News ( SciNewsBlog ) :
China's Internet Users Force Government to Back Down on Censorship: China's Internet Users Force Government to B..
Voil�! Your very own customizable Twitter client.
Add a loop and a time.sleep(300) to make it update every
five minutes (300 seconds).
Don't worry too much if you see that DeprecationWarning at the
beginning. That's a minor bug in python-twitter, but it doesn't hurt
anything; hopefully they'll fix it soon.
|
http://www.linuxplanet.com/linuxplanet/print/6792
|
CC-MAIN-2015-48
|
refinedweb
| 685
| 68.77
|
.
From case:
When compiled with aot, the following simple code failed with fatal
exception.
namespace iOS_Test
{
public class ClassA<T>
{
public long? Amount { get; set; }
}
public class ClassB
{
public string Hello { get; set; }
}
public partial class iOS_TestViewController : UIViewController
{
public iOS_TestViewController() : base("iOS_TestViewController",
null)
{
var test = new ClassA<ClassB>();
test.GetType().GetProperty("Amount").SetValue(test, 48L, null);
}
}
}
The error message is in the attachment.
You can simply create a blank new iOS project, paste in the code, and run it
with aot compilation (or deploy to device). Note that simulator did not
exhibit this problem.
Googling this issue, this seems to be an issue already fixed sometime ago,
but apparently re-appear in recent release. FYI, this is a show-stopper to
our client¹s development which is planning a demo of their app this week.
Please get back to me at the earliest.
This still happens with master even with:
* generic value type sharing; and
* linker disabled
-> Zoltan
Fixed in mono master 32861b70dd0694883b42e0f78746918d66cd29db/mt master 004c04c1a8d4cc681147f4974ee750cbad813dc9.
Thanks for the fix Zoltan. Will the fix be included in 3.2.7 release?
It will be in some future xamarin.ios release.
FIXED.
I verified this issue on below environments and noticed that application is deployed on simulator and device successfully.
Refer screen cast:
Build Info:
Application deployed successfully on device, Hence I marked this as verified
|
https://xamarin.github.io/bugzilla-archives/16/16747/bug.html
|
CC-MAIN-2019-43
|
refinedweb
| 224
| 57.27
|
What is an RSS feed?
RSS (RDF Site Summary or Really Simple Syndication) feed is an XML file used for providing users with frequently updated content in a standardized, computer-readable data format.
Why do you need an RSS feed?
Millions of users every day enjoy reading from several websites through a feed reader, such as Feedly. You need to provide an RSS feed for your blog not to give up a potentially large share of audience.
Furthermore you can use your RSS feed to cross-post to other websites, such as the popular dev.to (see here).
The Kessel Run in less than twelve parsecs
Create a file
rss.js in
lib folder, or another directory of your preference other than
pages, which as you probably know is a special directory reserved to routable content.
The new file would look like this (source here):
The mechanism is the following:
generateRss:
- accepts the full list of posts of your blog as input
- prepares the structure of the RSS feed
- injects a data structure for each post (see below)
- returns the RSS feed as string
generateRssItem:
- accepts a single post as input
- prepares the structure of an RSS feed item
- returns the RSS feed item as string
I will help you, I have spoken
"What's going on in here?" you might be wondering.
Let's break it down, starting from the imports:
import { BLOG_URL, BLOG_TITLE, BLOG_SUBTITLE} from '@lib/constants' import markdownToHtml from '@lib/markdownToHtml'
I decided to stash some general information about the blog in
/lib/constants.js, it's really up to you where you prefer to store these information.
The method
markdownToHtml is a helper provided by Next.js blog starter, which allows you to convert content from markdown to HTML as you have surely guessed.
Side note:
@lib resolves to
/lib from anywhere, regardless the depth of the folder structure, thanks to
jsconfig.json in the root folder (see here).
Since
markdownToHTML is asynchronous,
generateRssItem needs to be asynchronous as well in order to
await for its result:
export async function generateRssItem(post) { const content = await markdownToHtml(post.content || '') // more code here }
An RSS feed item, which represents a post on your blog, is structured as follows:
guidis a unique identifier for the post, you can generate it the way you see fit
titleis the title of the post, of course
descriptionis the excerpt of the post
linkis the absolute URL of the post, including protocol and base URL of your blog
pubDateis the date published
content:encodedis the full content of your post, which applies the encapsulation CDATA to assure your HTML to be correctly parsed
You can find a thorough list of properties here.
To assemble the whole RSS feed, all posts need to be scanned and converted to RSS items. The trick is using
await Promise.all(posts.map(generateRssItem)) in
generateRss to make this happen:
export async function generateRss(posts) { const itemsList = await Promise.all(posts.map(generateRssItem)) // more code here }
Now that you have all RSS items, you can add some general information to the RSS feed:
rssis the root element of the XML document
channelis an XML element containing the whole blog data, comprising general information and all posts
titleis the title of the blog
descriptionis the description of the blog
lastBuildDateis the date of the most recent post
atom:linkelement needs your absolute blog URL in its
hrefproperty
Finally you can inject the RSS items generated in the previous step.
This is the Way
In
/pages/index.js there's a method
getStaticProps, which is called by Next.js at build time (see here).
The idea is generating your RSS feed as an XML file precisely at build time.
To do so, first import
generateRss and
fs, then modify
getStaticProps as follows:
export async function getStaticProps() { const allPosts = getAllPosts([ 'title', 'date', 'slug', 'author', 'coverImage', 'excerpt', 'content', ]) const rss = await generateRss(allPosts) fs.writeFileSync('./public/rss.xml', rss) return { props: { allPosts }, } }
A few tweaks have been introduced:
- adding
contentto the array of fields passed to
getAllPosts, used by
generateRssItemto inject the full content of a post
- generating your RSS using
const rss = await generateRss(allPosts)
- writing the result in an XML file, which must be placed in the
publicfolder to be reachable by users and applications
A final touch would be updating your
.gitignore to exclude
rss.xml, which doesn't really need to be versioned.
An example of the final result is the RSS feed I generated for this very blog.
Who controls the spice controls the universe
You can now add a link to
/rss.xml on your homepage to provide your users with a link they can add to their favorite feed reader.
Moreover on dev.to, in Settings > Extensions, you can specify your RSS feed URL for cross-posting.
This way every time you publish on your blog
main branch, a draft will be automatically created on your dev.to dashboard. You might need to re-add the cover image, but your post will be there from top to bottom.
Conclusion
Next.js is still young and is missing some capabilities, such as RSS feed generation. However its design comprises only a handful of moving parts, which makes adjusting the project to meet your needs quite doable.
I'm fairly sure more features will come in time from Vercel and from Next.js community. Until then, don't be afraid to tinker with it.
Somewhere, something incredible is waiting to be known.
― Carl Sagan
Discussion (0)
|
https://practicaldev-herokuapp-com.global.ssl.fastly.net/riccardobevilacqua/how-to-create-an-rss-feed-in-next-js-10-12la
|
CC-MAIN-2022-21
|
refinedweb
| 919
| 59.84
|
On Fri, 2005-03-18 at 09:09 +0800, Li Shaohua wrote:> On Fri, 2005-03-18 at 02:08, Bjorn Helgaas wrote:> > On Thu, 2005-03-17 at 09:33 +0800, Li Shaohua wrote:> > > The comments in previous quirk said it's required only in PIC mode.> > ...> > > I feel we concerned too much. Changing the interrupt line isn't harmful,> > > right? Linux actually ignored interrupt line. Maybe just a> > > PCI_FIXUP_ENABLE(PCI_VENDOR_ID_VIA, PCI_ANY_ID, quirk_via_irq) is> > > sufficient.> > > > I think it's good to limit the scope of the quirk as much as> > possible because that makes it easier to do future restructuring,> > such as device-specific interrupt routers.> > > > The comment (before quirk_via_acpi(), nowhere near quirk_via_irqpic())> > says *on-chip devices* have this unusual behavior when the interrupt> > line is written. That makes sense to me.> > > > Writing the interrupt line on random plug-in Via PCI devices does> > not make sense to me, because for that to have any effect, an> > upstream bridge would have to be snooping the traffic going through> > it. That doesn't sound plausible to me.> > > > What about this:> Hmm, this looks like previous solution. We removed the specific via> quirk is because we don't know how many devices have such issue. Every> time we encounter an IRQ issue in a VIA PCI device, we will suspect it> requires quirk and keep try. This is a big overhead. OK. Try this one for size. It differs from what's currently inthe tree in these ways: - It's a quirk, so we don't have to clutter acpi_pci_irq_enable() and pirq_enable_irq() with Via-specific code. - It doesn't do anything unless we're in PIC mode. The comment suggests this issue only affects PIC routing. - We do this for ALL Via devices. The current code in the tree does it for all devices in the system IF there is a Via device with devfn==0, which misses Grzegorz's case.Does anybody have a pointer to a spec that talks about this? It'dbe awfully nice to have a reference.Grzegorz, can you check to make sure this still works after all thetweaking?===== arch/i386/pci/irq.c 1.55 vs edited =====--- 1.55/arch/i386/pci/irq.c 2005-02-07 22:39:15 -07:00+++ edited/arch/i386/pci/irq.c 2005-03-15 10:11:44 -07:00@@ -1026,7 +1026,6 @@ static int pirq_enable_irq(struct pci_dev *dev) { u8 pin;- extern int via_interrupt_line_quirk; struct pci_dev *temp_dev; pci_read_config_byte(dev, PCI_INTERRUPT_PIN, &pin);@@ -1081,10 +1080,6 @@ printk(KERN_WARNING "PCI: No IRQ known for interrupt pin %c of device %s.%s\n", 'A' + pin, pci_name(dev), msg); }- /* VIA bridges use interrupt line for apic/pci steering across- the V-Link */- else if (via_interrupt_line_quirk)- pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq & 15); return 0; } ===== drivers/acpi/pci_irq.c 1.37 vs edited =====--- 1.37/drivers/acpi/pci_irq.c 2005-03-01 09:57:29 -07:00+++ edited/drivers/acpi/pci_irq.c 2005-03-15 10:10:57 -07:00@@ -388,7 +388,6 @@ u8 pin = 0; int edge_level = ACPI_LEVEL_SENSITIVE; int active_high_low = ACPI_ACTIVE_LOW;- extern int via_interrupt_line_quirk; ACPI_FUNCTION_TRACE("acpi_pci_irq_enable"); @@ -437,9 +436,6 @@ return_VALUE(0); } }-- if (via_interrupt_line_quirk)- pci_write_config_byte(dev, PCI_INTERRUPT_LINE, irq & 15); dev->irq = acpi_register_gsi(irq, edge_level, active_high_low); ===== drivers/pci/quirks.c 1.72 vs edited =====--- 1.72/drivers/pci/quirks.c 2005-03-10 01:38:25 -07:00+++ edited/drivers/pci/quirks.c 2005-03-18 10:55:01 -07:00@@ -683,19 +683,40 @@ } DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82454NX, quirk_disable_pxb ); -/*- * VIA northbridges care about PCI_INTERRUPT_LINE- */-int via_interrupt_line_quirk;+#ifdef CONFIG_ACPI+#include <linux/acpi.h>+#endif -static void __devinit quirk_via_bridge(struct pci_dev *pdev)+static void __devinit quirk_via_irqpic(struct pci_dev *dev) {- if(pdev->devfn == 0) {- printk(KERN_INFO "PCI: Via IRQ fixup\n");- via_interrupt_line_quirk = 1;+ u8 irq, new_irq;++#ifdef CONFIG_X86_IO_APIC+ if (skip_ioapic_setup)+ return;+#endif+#ifdef CONFIG_ACPI+ if (acpi_irq_model != ACPI_IRQ_MODEL_PIC)+ return;+#endif+ /*+ * Some Via devices make an internal connection to the PIC when the+ * PCI_INTERRUPT_LINE register is written. If we've changed the+ * device's IRQ, we need to update this routing.+ *+ * I suspect this only happens for devices on the same chip as the+ * PIC, but I don't know how to identify those without listing them+ * all individually, which is a maintenance problem.+ */+ new_irq = dev->irq & 0xf;+ pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &irq);+ if (new_irq != irq) {+ printk(KERN_INFO "PCI: Via PIC IRQ fixup for %s, from %d to %d\n",+ pci_name(dev), irq, new_irq);+ pci_write_config_byte(dev, PCI_INTERRUPT_LINE, new_irq); } }-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_ANY_ID, quirk_via_bridge );+DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_VIA, PCI_ANY_ID, quirk_via_irqpic); /* * Serverworks CSB5 IDE does not fully support native mode-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
|
http://lkml.org/lkml/2005/3/18/140
|
CC-MAIN-2016-07
|
refinedweb
| 774
| 55.84
|
15 November 2011 14:20 [Source: ICIS news]
WASHINGTON (ICIS)--?xml:namespace>
In its monthly report, the department’s Bureau of Labor Statistics said that wholesale prices for basic organic chemicals fell by 2.5% in October, more than offsetting a 1.1% gain in September.
The October decline in prices for organic chemicals at the producer level – also known as wholesale prices – resumed a downward trend in pricing that showed a 1.2% drop in August.
In plastic resins and related materials, wholesale prices were off by 3.9% in October, wiping out the 3.1% producer prices gain seen in September. Here too, the trend in resins prices again turned downward following a 2.2% decline in August.
Overall, the bureau said that its producer price index (PPI) was largely unchanged in October.
While the index did slip marginally by 0.3% in October, that was attributed chiefly to a 1.4% drop in wholesale prices for energy goods, including a 2.4% fall in producer prices for gasoline.
Wholesale prices for foods rose by a slim 0.1% last month, the bureau said.
However, with energy and food products backed out of the overall data, the bureau said the so-called core index of producer prices was unchanged in October.
That flat line in the core index for last month brought an end to what had been 10 consecutive monthly increases, the bureau noted.
While the bureau compiles monthly price changes in hundreds of commodities and products, it does not provide any analysis of those
|
http://www.icis.com/Articles/2011/11/15/9508513/us-wholesale-prices-for-chemicals-and-resins-fall-sharply-in-oct.html
|
CC-MAIN-2014-10
|
refinedweb
| 258
| 68.16
|
Introduction
I want to talk about a common pattern I use in my React apps to display common sets of data: hard-coding a "local API" into a project, via a local JSON file.
In my GIF FIT app all the exercises are a local JSON file that I import to one of my reducers and apply the selecting random exercises based on user input. I have a separate file for dumbbell exercises.
In my portfolio site I also have two different .json files, one for my projects and one for my blogs.
This article will explore what is an API and how I simulate using one in my projects.
API - What is it?
API is short for "Application Programming Interface". I could paste in a lot of technical definitions, but I would rather just summarize in my own words:
Think of an API as a way to define how information is both stored and then shared. Anytime you interact with a program, like Twitter or Facebook, any tweet you send or all the tweets you read, any terrible facebook post shared by your racist uncle that ends up in your feed, are all the processes of receiving data from and sending data to thier APIs.
APIs can follow different patterns, and some can be updated or modified by the user (like sending a new tweet, you just added an entry to Twitter's database) and some APIs are only meant to be consumed and not changed by the user.
How does this help us?
APIs make storing similar sets of data easy. Each Twitter user has the same properties: username, followers, tweets, likes, and A LOT MORE. Take a look at one Tweet object:
!!!! That is intimidating even for me!
You can imagine how complex APIs can grow as the scale of the application grows.
Okay, now that we are thoroughly stressed out, take a breath, and relax.
We can recreate an API in our local files and use that data to call anywhere in our app. And trust me, you probably will not have to create anything that complex, at least not on your own! And if you do, you probably need to stop reading this article because you can control the Matrix.
How to make your local API
The first thing you want to do is figure out what you want to display.
I imbedded a (very contrived) Codesandbox that I created a for this DEV post, called Powerful People.
For each "Powerful Person" I wanted to display an image, their full name, their profession and their hobbies. Next I had to create the file for my local data. In my
src folder I created a folder called
data and inside that folder I create a file called
personData.json.
src └───data │ │ personData.json
What is JSON? It is short for "JavaScript Object Notation"..
When we create a
.json file we can only structure the data within in a very particular format: an array of objects. When we import our
personData.json into our component, we will map through the array of objects, displaying each one individually. We will define each object with the properties I stated I wanted to display above.
Take a look at the structure of my "person object".
[ { "id": 1, "url": "", "name": "", "role": "", "hobbies": [ ] } ]
A couple notes:
Each object SHOULD have an "id" property. When I have multiple
.jsonfiles, I begin each array starting from a separate "hundred". This one starts at the "zero" hundred (001, 002) and a different
.jsonfile would have started with 201, 202, 203,
etc.). I think this is good practice.
It's a VERY good idea to have separate a
.jsonfile with an empty object for quick and easy copy-and-pasting new entries into your array. I call mine
structure.json.
src └───data │ │ personData.json │ │ structure.json
personData.json file with a couple entries filled out. Not too bad, huh! Each object gets a unique "id" and you just fill out what you want. This has numerous benefits that I will touch later as we get to displaying the information on screen!
[ { "id": 1, "url": "", "name": "Bruce Wayne", "role": "Batman", "hobbies": [ "spelunking", "stalking", "beating up bad guys" ] }, { "id": 2, "url": "", "name": "Lady Galadriel", "role": "Ring Bearer", "hobbies": [ "giving gifts", "star gazing", "slaying orcs" ] } ]
And the data can be anything you want or need it to be! Check out a couple of
.json examples from other React projects:
portfolio site blogs
[ { "id": 201, "logo": "devto.png", "name": "React Hooks Series: useState", "image": "useState screenshot.jpg", "summary": "Part one in my React Hooks Series. I examine the useState hook in a basic timer app with examples from Codesandbox.", "url": "" }, { "id": 202, "logo": "devto.png", "name": "React Hooks Series: useEffect", "image": "useEffect screenshot.jpg", "summary": "Part two in my React Hooks Series takes a look at the useEffect hook and how I implememnt in a small timer app I created in Codesandbox.", "url": "" } ]
portfolio site projects
[ { "id": 1, "name": "GIF FIT", "image": "gif fit resized collage.jpg", "github": "", "url": "", "summary": "Home workouts made easy!", "description": "Gif Fit builds randomized workouts for you that you can do without any equipment in your home. Select how many exercises you want to perform, how long each one lasts, the rest period in between and total number of rounds. Gif Fit will keep track of each move for you and let you know when to rest and prepare for the next exercise. Features a React front-end, Redux to manage state, and Material UI for styling. Gifs are sourced from Giphy.com (special thanks and credit to 8fit for uploading so many awesome exercises). Made with love to genuinely help others during these stressful and challenging times.", "technologies": [ "React", "JavaScript", "Redux", "Material UI" ] }, { "id": 2, "name": "DO DID DONE", "image": "do did done collage.jpg", "github": "", "url": "", "summary": "Keep track of your todos by category", "description": "Do Did Done allows a user to create an account and select several categories to manage their tasks. Do Did Done is made with a React frontend and a Rails API backend. The React frontend features Material UI components, React router, Redux and local state management, functional components and React hooks and a thoughtful design for improved UI and UX. The frontend consumes the Rails API with full CRUD functionality. The Rails API backend is hosted on Heroku and features a PostgreSQL database. It handles sessions, cookies, CRUD functionality, separation of concerns and MVC structure.", "technologies": [ "React", "Rails", "JavaScript", "Redux" ] } ]
YES. You have create the array of objects, and hard-code all this data in yourself. BUT! You would have to do that ANYWAY in your HTML/JSX, creating a separate
<div> for each entry. Trust me, this way seems like more work now, but it saves you so much time later.
Time to use your data
We have come to the fun part: USING our local API!
Because this is a very basic and contrived example, I kept my app to one component: App.js. Let's import our data.
import PersonData from './data/personData'
And when we
console.log(PersonData)
[Object, Object, Object, Object, Object, Object, Object] 0: Object id: 1 url: "(634x21 2:896x474)/cdn.vox-cdn.com/uploads/chorus_image/image/67233661/Ef4Do0cWkAEyy1i.0.jpeg" name: "Bruce Wayne" role: "Batman" hobbies: Array[3] 1: Object 2: Object 3: Object 4: Object 5: Object 6: Object
Nice! We have access to the beautiful JSON that we made ourselves. Awesome!
Time to display those objects on the screen.
Our entire App component:
import React from "react"; import "./styles.css"; import "./responsive.css" import PersonData from './data/personData' export default function App() { return ( <div className="App"> <h1>Powerful People</h1> {PersonData.map(person => { return ( <div className="card" key={person.id}> <div className="row"> <div className="column"> <div> <img src={person.url} alt={person.name} /> </div> </div> <div className="column"> <div className="info"> <p>Full name</p> <h4>{person.name}</h4> <p>Profession</p> <h4>{person.role}</h4> <p>Hobbies</p> <ul> {person.hobbies.map((hobby, i) => { return( <li key={i}>{hobby}</li> ) })} </ul> </div> </div> </div> </div> ) })} </div> ) }
You can see that inside our
{PersonData.map(person => { we access each object's properties:
person.name,
person.role, and then map again through each objects hobbies:
<ul> {person.hobbies.map((hobby, i) => { return( <li key={i}>{hobby}</li> ) })} </ul>
Some notes:
- Each object in a list must have a unique key or the linter gets mad at you. This is why we give each object an "id" property in our JSON
<div className="card" key={person.id}>
and
<li key={i}>{hobby}</li>
Where
i is the index for each hobby and sufficient to be a unique key.
- We only had to create ONE
<div className="card">. If we were not using our local data from
personData.json, we would have to create a separate div for EACH person we wanted to display on the screen. Imagine how out of control that could get! AND if you want create another person, you simply create another entry in
personData.jsonand VOILA! It's on the screen!
Wrapping up
I recognize we could argue the semantics of is a local .json file really an API, because you don't really communicate with it. BUT I don't care! I believe this is an EXCELLENT way to introduce yourself the concept of APIs and how to begin utilizing the JSON structure in your apps.
Learning how to communicate with an external API is a seperate article for another day.
However, if you are comfortable not only writing your own JSON, but mapping through it and extracting it's properties, when you start to communicate with external APIs, you will be in a GREAT spot to getting that data on your screen.
As always, thank you so much for reading my posts. I appreciate you taking time to engage with my thoughts and opinions. I welcome your feedback and if you're in the mood, PRAISE FOR MY GENIUS.
Just kidding...
Until next time, HAPPY CODING!
Discussion (1)
I like this.
Being a #codenewbie, I've been looking into different ways that I can essentially separate my content from my markup and styling.
I was leaning toward figuring out Handlebars or Nunjucks. However, this idea of a local API is more intriguing.
Thanks for this.
|
https://dev.to/jamesncox/react-patterns-local-api-495j
|
CC-MAIN-2021-43
|
refinedweb
| 1,722
| 66.44
|
fchmodat - change permissions of a file relative to a directory file descriptor
Synopsis
Description
Errors
Versions
Notes
Colophon
#include <fcntl.h> /* Definition of AT_* constants */ #include <sys/stat.h>
int fchmodat(int dirfd, const char *pathname, mode_t mode ", int " flags );
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
fchmodat():():
fchmodat() was added to Linux in kernel 2.6.16; library support was added to glibc in version 2.4.
POSIX.1-2008.
See openat(2) for an explanation of the need for fchmodat().
The GNU C library wrapper function implements the POSIX-specified interface described in this page. This interface differs from the underlying Linux system call, which does not have a flags argument.
chmod(2), openat(2), path_resolution(7), symlink(7)
This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
|
http://manpages.sgvulcan.com/fchmodat.2.php
|
CC-MAIN-2017-13
|
refinedweb
| 151
| 57.98
|
[
]
Bob Schellink updated CLK-587:
------------------------------
Fix Version/s: 2.1.0
Assignee: Bob Schellink
We don't currently own the click.apache.org namespace, as we're in the incubator. So we can
only publish the DTDs after graduation, which should be finalized at the next board meeting
mid Novemeber.
> Define an official DTD DOCTYPE declaration in click.xml
> -------------------------------------------------------
>
> Key: CLK-587
> URL:
> Project: Click
> Issue Type: Task
> Components: core
> Affects Versions: 2.1.0 RC1
> Reporter: hantsy bai
> Assignee: Bob Schellink
> Fix For: 2.1.0
>
>
> In a xml file, providing a dtd or schema declaration is not necessary but I think it
is essential.
> Currently the click.xml is lack of Dtd declaration header, like the following.
> <?xml version="1.0" encoding="UTF-8" ?>
> <!DOCTYPE click-app PUBLIC
> "-//Apache Software Foundation//DTD Click Configuration 2.1//EN"
> "">
> <click-app>
> ....
> </click-app>
> In the click.xml, providing a doctype declaration(the red section, it is just an example)
is helpful for IDE. NetBeans IDE can use this declaration to determine if this file is a click
configuration file, perform xml validation and provide basic code completion automatically
( The click for netbeans plugin provide basic code completion by the grammar api, and extra
code completion is provided by Code completion api , eg. extra code completion for package,
path and classname attribute value).
> Please publish an official dtd online in future version.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
|
http://mail-archives.apache.org/mod_mbox/click-dev/200910.mbox/%3C277196603.1256522099407.JavaMail.jira@brutus%3E
|
CC-MAIN-2017-13
|
refinedweb
| 252
| 50.33
|
I am trying to get to work tried a couple different ways but i am still very new and don't know how to yet
index.cpp
#include "outline.h" #include "TFP.h" int main() { toonOutline playerToon; toonFillPrint::fill(playerToon); return 0; }
outline.h
#include <string> struct toonOutline { <toon info> };
TFP.h
#pragma once #include <iostream> using namespace std; class toonFillPrint { public: TFP(void); ~TFP(void); void static fill(toonOutline&); void static print(toonOutline); };
TFP.cpp
#pragma once #include "toonFillPrint.h" TFP::TFP(void) { } TFP::~TFP(void) { } void TFP::fill(toonOutline& toon) { <fills in the toon info> } void toonFillPrint::print(toonOutline toon) { cout<< <toon info> }
I get the error that says "syntax error : identifer 'toonOutline' in the TFP.h right for the function call fill. i have tried putting #include "outline.h" in the top of the header but i get " 'toonOutline' : 'struct' tyep redefinition" so i know that will not work. but i am not sure how to get it. I would thank you for any help.
|
https://www.daniweb.com/programming/software-development/threads/126975/can-u-put-a-struct-in-it-s-own-file
|
CC-MAIN-2018-30
|
refinedweb
| 168
| 68.67
|
I use tagsoup as (SAX)
XMLREader and set the namespace feature to
false. This parser is used to feed the
Transformer as SAX Source. Complete code:
final TransformerFactory factory = TransformerFactory.newInstance(); final Transformer t = factory.newTransformer(new StreamSource( getClass().getResourceAsStream("/identity.xsl"))); final XMLReader p = new Parser(); // the tagsoup parser p.setFeature("", false); // getHtml() returns HTML as InputStream final Source source = new SAXSource(p, new InputSource(getHtml())); t.transform(source, new StreamResult(System.out));
This results in something like:
< xmlns: <> <> <> <> <
Problem is that the tag names are blank. The XMLReader (tagsoup parser) does report an empty namespaceURI and empty local name in the SAX methods
ContentHandler#startElement and
ContentHandler#endElement. For a not namespace aware parser this is allowed (see Javadoc).
If i add a
XMLFilter which copies the value of the qName to the localName, everything goes fine. However, this is not what i want, i expect this works "out of the box". What am i doing wrong? Any input would be appreciated!
|
https://www.howtobuildsoftware.com/index.php/how-do/j3d/java-xslt-sax-tag-soup-how-to-do-a-xsl-transform-in-java-using-a-not-namespace-aware-parser
|
CC-MAIN-2020-16
|
refinedweb
| 164
| 52.97
|
(For more resources on OGRE 3D, see here.)
Downloading and installing Ogre 3D
The first step we need to take is to install and configure Ogre 3D.
Time for action – downloading and installing Ogre 3D
We are going to download the Ogre 3D SDK and install it so that we can work with it later.
- Go to.
- Download the appropriate package. If you need help picking the right package, take a look at the next What just happened section.
- Copy the installer to a directory you would like your OgreSDK to be placed in.
- Double-click on the Installer; this will start a self extractor.
- You should now have a new folder in your directory with a name similar to OgreSDK_vc9_v1-7-1.
- Open this folder. It should look similar to the following screenshot:
(Move the mouse over the image to enlarge.)
What just happened?
We just downloaded the appropriate Ogre 3D SDK for our system. Ogre 3D is a cross-platform render engine, so there are a lot of different packages for these different platforms. After downloading we extracted the Ogre 3D SDK.
Different versions of the Ogre 3D SDK
Ogre supports many different platforms, and because of this, there are a lot of different packages we can download. Ogre 3D has several builds for Windows, one for MacOSX, and one Ubuntu package. There is also a package for MinGW and for the iPhone. If you like, you can download the source code and build Ogre 3D by yourself. This article will focus on the Windows pre-build SDK and how to configure your development environment. If you want to use another operating system, you can look at the Ogre 3D Wiki, which can be found at. The wiki contains detailed tutorials on how to set up your development environment for many different platforms.
Exploring the SDK
Before we begin building the samples which come with the SDK, let's take a look at the SDK. We will look at the structure the SDK has on a Windows platform. On Linux or MacOS the structure might look different. First, we open the bin folder. There we will see two folders, namely, debug and release. The same is true for the lib directory. The reason is that the Ogre 3D SDK comes with debug and release builds of its libraries and dynamic-linked/shared libraries. This makes it possible to use the debug build during development, so that we can debug our project. When we finish the project, we link our project against the release build to get the full performance of Ogre 3D.
When we open either the debug or release folder, we will see many dll files, some cfg files, and two executables (exe). The executables are for content creators to update their content files to the new Ogre version, and therefore are not relevant for us.
The OgreMain.dll is the most important DLL. It is the compiled Ogre 3D source code we will load later. All DLLs with Plugin_ at the start of their name are Ogre 3D plugins we can use with Ogre 3D. Ogre 3D plugins are dynamic libraries, which add new functionality to Ogre 3D using the interfaces Ogre 3D offers. This can be practically anything, but often it is used to add features like better particle systems or new scene managers. The Ogre 3D community has created many more plugins, most of which can be found in the wiki. The SDK simply includes the most generally used plugins. The DLLs with RenderSystem_ at the start of their name are, surely not surprisingly, wrappers for different render systems that Ogre 3D supports. In this case, these are Direct3D9 and OpenGL. Additional to these two systems, Ogre 3D also has a Direct3D10, Direct3D11, and OpenGL ES(OpenGL for Embedded System) render system.
Besides the executables and the DLLs, we have the cfg files. cfg files are config files that Ogre 3D can load at startup. Plugins.cfg simply lists all plugins Ogre 3D should load at startup. These are typically the Direct3D and OpenGL render systems and some additional SceneManagers. quakemap.cfg is a config file needed when loading a level in the Quake3 map format. We don't need this file, but a sample does.
resources.cfg contains a list of all resources, like a 3D mesh, a texture, or an animation, which Ogre 3D should load during startup. Ogre 3D can load resources from the file system or from a ZIP file. When we look at resources.cfg, we will see the following lines:
Zip=../../media/packs/SdkTrays.zip
FileSystem=../../media/thumbnails
Zip= means that the resource is in a ZIP file and FileSystem= means that we want to load the contents of a folder. resources.cfg makes it easy to load new resources or change the path to resources, so it is often used to load resources, especially by the Ogre samples. Speaking of samples, the last cfg file in the folder is samples.cfg. We don't need to use this cfg file. Again, it's a simple list with all the Ogre samples to load for the SampleBrowser. But we don't have a SampleBrowser yet, so let's build one.
The Ogre 3D samples
Ogre 3D comes with a lot of samples, which show all the kinds of different render effects and techniques Ogre 3D can do. Before we start working on our application, we will take a look at the samples to get a first impression of Ogre's capabilities.
Time for action – building the Ogre 3D samples
To get a first impression of what Ogre 3D can do, we will build the samples and take a look at them.
- Go to the Ogre3D folder.
- Open the Ogre3d.sln solution file.
- Right-click on the solution and select Build Solution.
- Visual Studio should now start building the samples. This might take some time, so get yourself a cup of tea until the compile process is finished.
- If everything went well, go into the Ogre3D/bin folder.
- Execute the SampleBrowser.exe.
- You should see the following on your screen:
- Try the different samples to see all the nice features Ogre 3D offers.
What just happened?
We built the Ogre 3D samples using our own Ogre 3D SDK. After this, we are sure to have a working copy of Ogre 3D.
(For more resources on OGRE 3D, see here.)
The first application with Ogre 3D
In this part, we will create our first Ogre 3D application, which will simply render one 3D model.
Time for action – starting the project and configuring the IDE
As with any other library, we need to configure our IDE before we can use it with Ogre 3D.
- Create a new empty project.
- Create a new file for the code and name it main.cpp.
- Add the main function:
int main (void)
{
return 0;
}
- Include ExampleApplication.h at the top of the following source file:
#include "Ogre\ExampleApplication.h":
- Add PathToYourOgreSDK\include\ to the include path of your project.
- Add PathToYourOgreSDK\boost_1_42 to the include path of your project.
- Add PathToYourOgreSDK\boost_1_42\lib to your library path.
- Add a new class to the main.cpp.
class Example1 : public ExampleApplication
{
public:
void createScene()
{
}
};
- Add the following code at the top of your main function:
Example1 app;
app.go();
- Add PathToYourOgreSDK\lib\debug to your library path.
- Add OgreMain_d.lib to your linked libraries.
- Add OIS_d.lib to your linked libraries.
- Compile the project.
- Set your application working directory to PathToYourOgreSDK\bin\debug.
- Start the application. You should see the Ogre 3D Setup dialog.
- Press OK and start the application. You will see a black window. Press Escape to exit the application.
What just happened?
We created our first Ogre 3D application. To compile, we needed to set different include and library paths so the compiler could find Ogre 3D.
In steps 5 and 6, we added two include paths to our build environment. The first path was to the Ogre 3D SDK include folder, which holds all the header files of Ogre 3D and OIS. OIS stands for Object Oriented Input System and is the input library that ExampleApplication uses to process user input. OIS isn't part of Ogre 3D; it's a standalone project and has a different development team behind it. It just comes with Ogre 3D because the ExampleApplication uses it and so the user doesn't need to download the dependency on its own. ExampleApplication.h is also in this include folder. Because Ogre 3D offers threading support, we needed to add the boost folder to our include paths. Otherwise, we can't build any application using Ogre 3D. If needed, Ogre 3D can be built from the source, disabling threading support and thus removing the need for boost. And while using boost, the compiler also needs to be able to link the boost libraries. Thus we have added the boost library folder into our library paths (see step 7).
In step 10, we added PathToYourOgreSDK\lib\debug to our library path. As said before, Ogre 3D comes with debug and release libraries. With this line we decided to use the debug libraries because they offer better debug support if something happens to go wrong. When we want to use the release versions, we have to change the lib\debug to \lib\release. The same is true for steps 11 und 12. There we added OgreMain_d.lib and OIS_d.lib to our linked libraries. When we want to use the release version, we need to add OgreMain.lib and OIS.lib. OgreMain.lib, and OgreMain_d.lib contains both the interface information about Ogre 3D and tells our application to load OgreMain.dll or OgreMain_d.dll. Note that OIS.lib or OIS_d.lib is the same for the input system— they load OIS_d.dll or OIS.dll. So we link Ogre 3D and OIS dynamically, enabling us to switch the DLL without recompiling our application, as long as the interface of the libraries doesn't change and the application and the DLL are using the same runtime library versions. This also implies that our application always needs to load the DLLs, so we have to make sure it can find it. This is one of the reasons we set the working directory in step 14. Another reason will be made clear in the next section.
ExampleApplication
We created a new class, Example1, which inherits from ExampleApplication. ExampleApplication is a class that comes with the Ogre 3D SDK and is intended to make learning Ogre 3D easier by offering an additional abstraction layer above Ogre 3D. ExampleApplication starts Ogre for us, loads different models we can use, and implements a simple camera so we can navigate through our scene. To use ExampleApplication, we just needed to inherit from it and override the virtual function createScene(). We will use the ExampleApplication class for now to save us from a lot of work, until we have a good understanding of Ogre 3D. Later, we will replace ExamplesApplication piece-by-piece with our own code.
In the main function, we created a new instance of our application class and called the go() function to start the application and load Ogre 3D. At startup, Ogre 3D loads three config files—Ogre.cfg, plugins.cfg, and resources.cfg. If we are using the debug versions, each file needs an "_d" appended to its name. This is useful because with this we can have different configuration files for debug and release. Ogre.cfg contains the configuration we selected in the setup dialog, so it can load the same settings to save us from entering the same information every time we start our application. plugins.cfg contains a list of plugins Ogre should load. The most important plugins are the rendersystem plugins. They are the interface for Ogre to communicate with OpenGL or DirectX to render our scene. resources.cfg contains a list of resources that Ogre should load during startup. If you look inside resources.cfg, you will see that the paths in this file are relative. That's the reason we need to set the working directory.
We have a basic application with nothing in it, which is rather boring. Now we will load a model to get a more interesting scene.
Time for action – loading a model
Loading a model is easy. We just need to add two lines of code.
- Add the following two lines into the empty createScene() method:
Ogre::Entity* ent =
mSceneMgr->createEntity("MyEntity","Sinbad.mesh");
mSceneMgr->getRootSceneNode()->attachObject(ent);
- Compile your application again.
- Start your application. You will see a small green figure after starting the application.
- Navigate the camera with the mouse and WASD until you see the green figure better.
- Close the application.
What just happened?
With mSceneMgr->createEntity("MyEntity","Sinbad.mesh");,we told Ogre that we wanted a new instance of the Sinbad.mesh model. mSceneMgr is a pointer to the SceneManager of Ogre 3D, created for us by the ExampleApplication. To create a new entity, Ogre needs to know which model file to use, and we can give a name to the new instance. It is important that the name is unique; it can't be used twice. If this happens, Ogre 3D will throw an exception. If we don't specify a name, Ogre 3D will automatically generate one for us.
We now have an instance of a model, and to make it visible, we need to attach it to our scene. Attaching an entity is rather easy—just write the following line:
mSceneMgr->getRootSceneNode()->attachObject(ent);
This attaches the entity to our scene so we can see it. And what we see is Sinbad, the mascot model of Ogre 3D.
Summary
We learned how the Ogre 3D SDK is organized, which libraries we needed to link, and which folder we needed in our include path. Also, we got a first glance at the class ExampleApplication and how to use it. We loaded a model and displayed it.
Specifically, we covered:
- Which files are important for the development with Ogre 3D, how they interact with each other, and what their purpose is
- What ExampleApplication is for: How this class helps to save us work and what happens during the startup of Ogre 3D
- Model loading: We learned how we can create a new instance of a model with createEntity and one way to attach the new instance to our scene
Further resources on this subject:
- Starting Ogre 3D [Article]
- The Ogre Scene Graph [Article]
- Materials with Ogre 3D [Article]
- Ogre 3D: Double Buffering [Article]
- OGRE 3D 1.7 Beginner's Guide [Book]
|
https://www.packtpub.com/books/content/installation-ogre-3d
|
CC-MAIN-2015-27
|
refinedweb
| 2,444
| 66.23
|
These are chat archives for opal/opal
new (window.Phaser.Plugin.Isometric)()the problem could be that each of the modules define a closure variable with the name of the module shadowing the originals.
@constructor
@constructor?
compiles to in to in
def initialize(name) @name = name @constructor = "foo" end
def.$initialize = function(name) { var self = this; self.name = name; return self.constructor = "foo"; };
Phaser.Plugin.Isometricdoes not seem to work out of the box even in regular js:.
still not getting why rspec won't run...
#spec-opal/test_spec.rb require 'opal' require 'opal-rspec' describe 'a spec' do puts "defining stuff" it 'has successful examples' do puts "running the test" 'I run'.should =~ /run/ end end
prints "defining stuff" to the console, but never runs the tests...
rails-4.2.3
opal-rails-0.8.0
opal-rspec-0.4.3
any clues anybody?
bundle exec rake opal:speceverything is fine.
|
https://gitter.im/opal/opal/archives/2015/08/11
|
CC-MAIN-2019-18
|
refinedweb
| 152
| 61.12
|
In this book, you will learn how to build complete Django projects, ready for production use. In case you haven't installed Django yet, you will learn how to do it in the first part of this chapter. This chapter will cover how to create a simple blog application using Django. The purpose of the chapter is to get a general idea of how the framework works, understand how the different components interact with each other, and give you the skills to easily create Django projects with basic functionality. You will be guided through the creation of a complete project without elaborating upon all the details. The different framework components will be covered in detail throughout the following chapters of this book.
This chapter will cover the following points:
Installing Django and creating your first project
Designing models and generating model migrations
Creating an administration site for your models
Working with QuerySet and managers
Building views, templates, and URLs
Adding pagination to list views
Using Django class-based views
If you have already installed Django, you can skip this section and jump directly to Creating your first project. Django comes as a Python package and thus can be installed in any Python environment. If you haven't installed Django yet, here is a quick guide to installing Django for local development.
Django works well with Python versions 2.7 or 3. In the examples of this book, we are going to use Python 3. If you're using Linux or Mac OS X, you probably have Python installed. If you are not sure if Python is installed in your computer, you can verify it by typing
python in the terminal. If you see something like the following, then Python is installed in your computer:
Python 3.5.0 (v3.5.0:374f501f4567, Sep 12 2015, 11:00:19) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>>
If your installed Python version is lower than 3, or Python is not installed on your computer, download Python 3.5.0 from and install it.
Since you are going to use Python 3, you don't have to install a database. This Python version comes with the SQLite database built-in. SQLite is a lightweight database that you can use with Django for development. If you plan to deploy your application in a production environment, you should use an advanced database such as PostgreSQL, MySQL, or Oracle. You can get more information about how to get your database running with Django in.
It is recommended that you use virtualenv to create isolated Python environments, so you can use different package versions for different projects, which is far more practical than installing Python packages system wide. Another advantage of using virtualenv is that you won't need any administration privileges to install Python packages. Run the following command in your shell to install virtualenv:
pip install virtualenv
After you install virtualenv, create an isolated environment with the following command:
virtualenv my_env
This will create a
my_env/ directory including your Python environment. Any Python libraries you install while your virtual environment is active will go into the
my_env/lib/python3.5/site-packages directory.
If your system comes with Python 2.X and you installed Python 3.X, you have to tell virtualenv to use the latter. You can locate the path where Python 3 is installed and use it to create the virtual environment with the following commands:
zenx$ *which python3* /Library/Frameworks/Python.framework/Versions/3.5/bin/python3 zenx$ *virtualenv my_env -p /Library/Frameworks/Python.framework/Versions/3.5/bin/python3*
Run the following command to activate your virtual environment:
source my_env/bin/activate
The shell prompt will include the name of the active virtual environment enclosed in parentheses, like this:
(my_env)laptop:~ zenx$
You can deactivate your environment anytime with the
deactivate command.
You can find more information about virtualenv at.
On top of virtualenv, you can use virtualenvwrapper. This tool provides wrappers that make it easier to create and manage your virtual environments. You can download it from.
pip is the preferred method for installing Django. Python 3.5 comes with pip pre-installed, but you can find pip installation instructions at. Run the following command at the shell prompt to install Django with pip:
pip install Django==1.8.6
Django will be installed in the Python
site-packages/ directory of your virtual environment.
Now check if Django has been successfully installed. Run
python on a terminal and import Django to check its version:
>>> import django >>> django.VERSION django.VERSION(1, 8, 5, 'final', 0)
If you get this output, Django has been successfully installed in your machine.
Django can be installed in several other ways. You can find a complete installation guide at.
Our first Django project will be a complete blog site. Django provides a command that allows you to easily create an initial project file structure. Run the following command from your shell:
django-admin startproject mysite
This will create a Django project with the name
mysite.
Let's take a look at the generated project structure:
mysite/ manage.py mysite/ __init__.py settings.py urls.py wsgi.py
These files are as follows:
manage.py: A command-line utility to interact with your project. It is a thin wrapper around the
django-admin.pytool. You don't need to edit this file.
mysite/: Your project directory consist of the following files:
__init__.py: An empty file that tells Python to treat the
mysitedirectory as a Python module.
settings.py: Settings and configuration for your project. Contains initial default settings.
urls.py: The place where your URL patterns live. Each URL defined here is mapped to a view.
wsgi.py: Configuration to run your project as a WSGI application.
The generated
settings.py file includes a basic configuration to use a SQLite database and a list of Django applications that are added to your project by default. We need to create the tables in the database for the initial applications.
Open the shell and run the following commands:
cd mysite python manage.py migrate
You will see an output that ends like this:
The tables for the initial applications have been created in the database. You will learn about the
migrate management command in a bit. like adding new files to your project, so you will have to restart the server manually in these cases.
Start the development server by typing the following command from your project's root folder:
python manage.py runserver
You should see something like this:
Performing system checks... System check identified no issues (0 silenced). November 5, 2015 - 19:10:54 Django version 1.8.6, using settings 'mysite.settings' Starting development server at Quit the server with CONTROL-C.
Now, open the URL in your browser. You should see a page telling you the project is successfully running, as shown in the following screenshot:
You can indicate Django to run the development server on a custom host and port, or tell it that you want to run your project loading a different settings file. For example, you can run the
manage.py command as follows:
python manage.py runserver 127.0.0.1:8001 \ --settings=mysite.settings
This comes in handy to deal with multiple environments that require different settings. Remember, this server is only intended for development and is not suitable for production use. In order to deploy Django in a production environment, you should run it as a Web Server Gateway Interface (WSGI) application using a real web server such as Apache, Gunicorn, or uWSGI. You can find more information about how to deploy Django with different web servers at.
Additional downloadable Chapter 13, Going Live covers setting up a production environment for your Django projects.
Let's open the
settings.py file and take a look at the configuration of our project. There are several settings that Django includes in this file, but these are only a part of all the Django settings available. You can see all settings and their default values in.
The following settings are worth looking at:
DEBUGis a boolean that turns on/off the debug mode of the project. If set to
True, Django will display detailed error pages when an uncaught exception is thrown by your application. When you move to a production environment, remember you have to set it to
False. Never deploy a site into production with
DEBUGturned on because you will expose sensitive data of your project.
ALLOWED_HOSTSis not applied while debug mode is on or when running tests. Once you are going to move your site to production and set
DEBUGto
False, you will have to add your domain/host to this setting in order to allow it to serve the Django site.
INSTALLED_APPSis a setting you will have to edit in all projects. This setting tells Django which applications are active for this site. By default, Django includes the following applications:
django.contrib.admin: This is an administration site.
django.contrib.auth: This is an authentication framework.
django.contrib.contenttypes: This is a framework for content types.
django.contrib.sessions: This is a session framework.
django.contrib.messages: This is a messaging framework.
django.contrib.staticfiles: This is a framework for managing static files.
MIDDLEWARE_CLASSESis a tuple containing middlewares to be executed.
ROOT_URLCONFindicates the Python module where the root URL patterns of your application are defined.
DATABASESis a dictionary containing the settings for all the databases to be used in the project. There must always be a
defaultdatabase. The default configuration uses a SQLite3 database.
LANGUAGE_CODEDefines the default language code for this Django site.
Don't worry if you don't understand much about what you are seeing. You will get more familiar with Django settings in the following chapters.
Throughout this book, you will read the terms project and application over and over. In Django, a project is considered a Django installation with some settings; and an application is a group of models, views, templates, and URLs. Applications interact with the framework to provide some specific functionalities and may be reused in various projects. You can think of the project as your website, which contains several applications like blog, wiki, or forum, which can be used in other projects.
Now let's create your first Django application. We will create a blog application from scratch. From your project's root directory, run the following command:
python manage.py startapp blog
This will create the basic structure of the application, which looks like this:
blog/ __init__.py admin.py migrations/ __init__.py models.py tests.py views.py
These files are as follows:.
We will, we will define a
Post model. Add the following lines to the
models.py file of the
blog application:
from django.db import modelsfrom django.utils import timezone from django.contrib.auth.models import User class Post(models.Model): STATUS_CHOICES = ( ('draft', 'Draft'), ('published', 'Published'), ) title = models.CharField(max_length=250) slug = models.SlugField(max_length=250, unique_for_date='publish') author = models.ForeignKey(User, our basic model for blog posts. Let's take a look at the fields we just defined for this model:
title: This is the field for the post title. This field is
CharField, which translates into a
VARCHARcolumn in the SQL database.
slug: This is a field intended to be used in URLs. A slug is a short label containing only letters, numbers, underscores, or hyphens. We will use the
slugfield to build beautiful, SEO-friendly URLs for our blog posts. We have added the
unique_for_dateparametermodel of the Django authentication system. We specify the name of the reverse relationship, from
Userto
Post, with the
related_nameattribute. We are going to learn more about this later.
body: This is the body of the post. This field is
TextField, which translates into a
TEXTcolumn in the SQL database.
publish: This datetime indicates when the post was published. We use Django's timezone
nowmethod as default value. This is just a timezone-aware
datetime.now.
created: This datetime indicates when the post was created. Since we are using
auto_now_addhere, the date will be saved automatically when creating an object.
updated: This datetime indicates the last time the post has been updated. Since we are using
auto_nowhere, the date will be updated automatically when saving an object.
status: This is a field to show the status of a post. We use a
choicesparameter, so the value of this field can only be set to one of the given choices.
As you can see, Django comes with different types of fields that you can use to define your models. You can find all field types in.
The class
Meta inside the model contains metadata. We are telling Django to sort results by the
publish field in descending order by default when we query the database. We specify descending order by using the negative prefix.
The
__str__() method is the default human-readable representation of the object. Django will use it in many places such as the administration site.
Note
If you come from Python 2.X, note that in Python 3 all strings are natively considered unicode, therefore we only use the
__str__() method. The
__unicode__() method is obsolete.
Since we are going to deal with datetimes, we will install the pytz module. This module provides timezone definitions for Python and is required by SQLite to work with datetimes. Open the shell and install pytz with the following command:
pipproject management command.
In order for Django to keep track of our application and be able to create database tables for its models, we have to activate it. To do this, edit the
settings.py file and add
blog to the
INSTALLED_APPS setting. It should look like this:
INSTALLED_APPS = ( 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'blog', )
Now Django knows that our application is active for this project and will be able to introspect its models..
First, we need to create a migration for the new model we just created. From the root directory of your project, enter this command:
python manage.py makemigrations blog
You should get the following output:
Migrations for 'blog': 0001_initial.py: - Create model Post manage.py sqlmigrate blog 0001
The output should look as follows:
BEGIN;")); CREATE INDEX "blog_post_2dbcba41" ON "blog_post" ("slug"); CREATE INDEX "blog_post_4f331e2f" ON "blog_post" ("author_id"); COMMIT;
The exact output depends on the database you are using. The output above is generated for SQLite. As you can see, Django generates the table names by combining the app name and the lowercase name of the model (
blog_post), but you can also specify them in the
Meta class of the models using the
db_table attribute. Django creates a primary key automatically for each model but you can also override this specifying
primary_key=True on one of your model fields.
Let's sync our database with the new model. Run the following command to apply existing migrations:
python manage.py migrate
You will get the following output that ends with the following line:
Applying blog.0001_initial... OK
We just applied migrations for the applications listed in
INSTALLED_APPS, including our
blog application. After applying migrations, the database reflects the current status of our models..
Now that we have defined the
Post model, we will create a simple administration site to manage blog posts. Django comes with a built-in administration interface that is very useful for editing content. The Django admin site is built dynamically by reading your model metadata and providing a production-ready interface for editing content. You can use it out-of-the-box, configuring how you want your models to be displayed in it.
Remember that
django.contrib.admin is already included in the
INSTALLED_APPS setting of our project and that's why we don't have to add it.
First, we need to create a user to manage the admin site. Run the following command:
python manage.py createsuperuser
You will see the following output. Enter your desired username, e-mail, and password:
Username (leave blank to use 'admin'): admin Email address: [email protected] Password: ******** Password (again): ******** Superuser created successfully.
Now, start the development server with the command
python manage.py runserver and open in your browser. You should see the administration login page, as shown in the following screenshot:
Log in using the credentials of the user you created in the previous step. You will see the admin site index page, as shown in the following screenshot:
The
Group and
User models you see here are part of the Django authentication framework located in
django.contrib.auth. If you click on Users, you will see the user you created before. The
Post model of your
blog application has a relationship with this
User model. Remember, it is a relationship defined by the
author field.
Let's add your blog models to the administration site. Edit the
admin.py file of your
blog application and make it look like this:
from django.contrib import admin from .models import Post admin.site.register(Post)
Now, reload the admin site in your browser. You should see your
Post model in the admin site as follows:
That was easy, right? When you register a model in the Django admin site, you get a user-friendly interface generated by introspecting your models that allows you to list, edit, create, and delete objects in a simple way.
Click on the Add link on the right of Posts to add a new post. You will see the create form that Django has generated dynamically for your model, as shown in the following screenshot:
Django uses different form widgets for each type of field. Even complex fields such as
DateTimeField are displayed with an easy interface like a JavaScript date picker.
Fill in the form and click on the Save button. You should be redirected to the post list page with a successful message and the post you just created, as shown in the following screenshot:
Now we are going to see how to customize the admin site. Edit the
admin.py file of your blog application and change it into this:
from django.contrib import admin from .models import Post class PostAdmin(admin.ModelAdmin): list_display = ('title', 'slug', 'author', 'publish', 'status') admin.site.register(Post, PostAdmin)
We are telling the Django admin site that our model is registered into the admin site using a custom class that inherits from
ModelAdmin. In this class, we can include information about how to display the model in the admin site and how to interact with it. The
list_display attribute allows you to set the fields of your model that you want to display in the admin object list page.']
Go back to your browser and reload the post list page. Now it will look like this: we have defined a list of searchable fields using the
search_fields attribute. Just below the search bar, there is a bar to navigate quickly through a date hierarchy. This has been defined by the
date_hierarchy attribute. You can also see that the posts are ordered by Status and Publish columns by default. You have specified the default order using the
ordering attribute.
Now click on the Add post link. You will also see some changes here. As you type the title of a new post, the slug field is filled automatically. We have told Django to prepopulate the
slug field with the input of the
title field using the
prepopulated_fields attribute. Also, now the
author field is displayed with a lookup widget that can scale much better than a dropdown select input when you have thousands of users, as shown in the following screenshot:
With a few lines of code, we have customized the way our model is displayed in the admin site. There are plenty of ways to customize and extend the Django administration site. Later in this book, we will cover this further., and Oracle. Remember that you can define the database of your project by editing the
DATABASES setting in the
settings.py file of your project. Django can work with multiple databases at a time and you can even program database routers that handle the data in any way you like.
Once you have created your data models, Django gives you a free API to interact with them. You can find the data model reference of the official documentation at.
Open the terminal and run the following command to open the Python shell:
python manage.py shell
Then type the following lines:
>>> from django.contrib.auth.models import User >>> from blog.models import Post >>> user = User.objects.get(username='admin') >>> Post.objects.create(title='One more post', slug='one-more-post', body='Post body.', author=user) >>> post.save()
Let's analyze what this code does. First, we retrieve the user object that has the username
admin:
user = User.objects.get(username='admin')
The
get() method allows you to retrieve a single object from the database. Note that this method expects one we create a
Post instance with a custom title, slug, and body; and we set the user we previously retrieved as author of the post:
post = Post(title='Another post', slug='another-post', body='Post body.', author=user)
Finally, we save the
Post object to the database using the
save() method:
post.save()
This action performs an
INSERT SQL statement behind the scenes. We have seen how to create an object in memory first and then persist it to the database, but we can also create the object into the database directly using the
create() method as follows:
Post.objects.create(title='One more post', slug='one-more-post', body='Post body.', author=user)
Now, change the title of the post into something different and save the object again:
>>> post.>> post.save()
This time, the
save() method performs an
UPDATE SQL statement.
The Django Object-relational mapping (ORM) is based on QuerySet. A QuerySet is a collection of objects from your database that can have several filters to limit the results. You already know how to retrieve a single object from the database using the
get() method. As you have seen, we have accessed this method using
Post.objects.get(). Each Django model has at least one manager, and the default manager is called
objects. You get a
QuerySet object by using your models manager. To retrieve all objects from a table, you just use the
all() method on the default
objects manager, like this:
>>> all_posts = Post.objects.all()
This is how we create a QuerySet that returns all objects in the database. Note that this QuerySet has not been executed yet. Django QuerySets are lazy; they are only evaluated when they are forced to do it. This behavior makes QuerySets very efficient. If we don't set the QuerySet to a variable, but instead write it directly on the Python shell, the SQL statement of the QuerySet is executed because we force it to output results:
>>> Post.objects.all()
To filter a QuerySet, you can use the
filter() method of the manager. For example, we can retrieve all posts published in the year 2015 using the following QuerySet:
Post.objects.filter(publish__year=2015)
You can also filter by multiple fields. For example, we can retrieve all posts published in 2015 by the author with the username
admin:
Post.objects.filter(publish__year=2015, author__username='admin')
This equals to building the same QuerySet chaining multiple filters:
Post.objects.filter(publish__year=2015)\ filter(author__username='admin')
You can exclude certain results from your QuerySet using the
exclude() method of the manager. For example, we can retrieve all posts published in 2015 whose titles don't start by
Why:
Post.objects.filter(publish__year=2015)\ .exclude(title__startswith='Why')
You can order results by different fields using the
order_by() method of the manager. For example, you can retrieve all objects ordered by their title:
Post.objects.order_by('title')
Ascending order is implied. You can indicate descending order with a negative sign prefix, like this:
Post.objects.order_by('-title')
If you want to delete an object, you can do it from the object instance:
post = Post.objects.get(id=1) post.delete()
You can concatenate as many filters as you like to a QuerySet and you will not hit the database until the QuerySet is evaluated. it in a statement such as
bool(),
or,
and, or
if
As we previously mentioned,
objects is the default manager of every model, which retrieves all objects in the database. But we can also define custom managers for our models. We are going to create a custom manager to retrieve all posts with
published status.
There are two ways to add managers to your models: You can add extra manager methods or modify initial manager querysets. The first one turns up something like
Post.objects.my_manager()and the later like
Post.my_manager.all(). Our manager will allow us to retrieve posts using
Post.published..
get_queryset() is the method that returns the QuerySet to be executed. We use it to include our custom filter in the final QuerySet. We have defined our custom manager and added it to the
Post model; we can now use it to perform queries. For example, we can retrieve all published posts whose title starts with
Who using:
Post.published.filter(title__startswith='Who')
Now that you have some knowledge about how to use the ORM, you are ready to build the views of the blog application. A Django view is just a Python function that receives a web request and returns a web response. Inside the view goes all the logic to return the desired response.
First, we will create our application views, then we will define an URL pattern for each view; and finally, we will create HTML templates to render the data generated by the views. Each view will render a template passing variables to it and will return an HTTP response with the rendered. Remember that this parameter is required by all views. In this view, we are retrieving all the posts with the
published status using the
published manager we created previously.
Finally, we are using the
render() shortcut provided by Django to render the list of posts with the given template. This function takes the
request object as parameter, the template path and the variables to render the given template. It returns an
HttpResponse object with the rendered text (normally HTML code). The
render() shortcut takes the request context into account, so any variable set parameters to retrieve a published post with the given slug and date. Notice that when we created the
Post model, we added the
unique_for_date parameter to the
slug field. This way we ensure that there will be only one post with a slug for a given date, and thus, we can retrieve single posts by date and slug. In the detail view, we are using the
get_object_or_404() shortcut to retrieve the desired
Post. This function retrieves the object that matches with the given parameters, or launches an HTTP 404 (Not found) exception if no object is found. Finally, we use the
render() shortcut to render the retrieved post using a template.
An URL pattern is composed of a Python regular expression, a view, and a name that allows you to name it project-wide. Django runs through each URL pattern and stops at the first one that matches the requested URL. Then, Django imports the view of the matching URL pattern and executes it, passing an instance of the
HttpRequest class and keyword or positional arguments.
If you haven't worked with regular expressions before, you might want to take a look at first.
Create an
urls.py file in the directory of the
blog application and add the following lines:
from django.conf.urls import url from . import views urlpatterns = [ # post views url(r'^$', views.post_list, name='post_list'), url(r'^(?P<year>\d{4})/(?P<month>\d{2})/(?P<day>\d{2})/'\ r'(?P<post>[-\w]+)/$', views.post_detail, name='post_detail'), ]
The first URL pattern doesn't take any arguments and is mapped to the
post_list view. The second pattern takes the following four arguments and is mapped to the
post_detail view. Let's take a look at the regular expression of the URL pattern:
year: Requires four digits.
month: Requires two digits. We will only allow months with leading zeros.
day: Requires two digits. We will only allow days with leading zeros.
post: Can be composed by words and hyphens.
Now you have to include the URL patterns of your blog application into the main URL patterns of the project. Edit the
urls.py file located in the
mysite directory of your project and make it look like the following:
from django.conf.urls import include, url from django.contrib import admin urlpatterns = [ url(r'^admin/', include(admin.site.urls)), url(r'^blog/', include('blog.urls', namespace='blog', app_name='blog')), ]
This way, you are telling Django to include the URL patterns defined in the blog
urls.py under the
blog/ path. You are giving them a namespace called
blog so you can refer to this group of URLs easily.
You can use the
post_detail URL that you have defined in the previous your
models.py file and add the following:
from django.core.urlresolvers import reverse Class Post(models.Model): # ... def get_absolute_url(self): return reverse('blog:post_detail', args=[self.publish.year, self.publish.strftime('%m'), self.publish.strftime('%d'), self.slug])
Note that we are using the
strftime() function to build the URL using month and day with leading zeros. We will use the
get_absolute_url() method in our templates.
We have created views and URL patterns for our application. Now it's time to add templates to display posts in a user-friendly way.
Create the following directories and files inside your
blog application directory:
templates/ blog/ base.html post/ list.html detail.html
This will be the file structure for our templates. The
base.html file will include the main HTML structure of the website and divide the content into, which look like
{% tag %} template variables, which look like
{{ variable }} and template filters, which can be applied to variables and look like
{{ variable|filter }}. You can see all built-in template tags and filters in.
Let's edit the
base.html file and add the following code:
{% load staticfiles %} <!DOCTYPE html> <html> <head> <title>{% block title %}{% endblock %}</title> <link href="{% static "css/blog.css" %}" rel="stylesheet"> </head> <body> <div id="content"> {% block content %} {% endblock %} </div> <div id="sidebar"> <h2>My blog</h2> <p>This is my blog.</p> </div> </body> </html>
{% load staticfiles %} tells Django to load the
staticfiles template tags that are provided by the
django.contrib.staticfiles application. After loading it, you are able to use the
{% static %} template filter throughout this template. With this template filter, you can include static files such as the
blog.css file that you will find in the code of this example, under the
static/ directory of the blog application. Copy this directory into the same location of your project to use the existing static files.
You can see that there are two
{% block %} tags. These tell Django that we want to define a block in that area. Templates that inherit from this template can fill the blocks with content., we are telling Django to inherit from the
blog/base.html template. Then we are filling the
title and
content blocks of the base template with content. We iterate through the posts and display their title, date, author, and body, including a link in the title to the canonical URL of the post. In the body of the post, we are applying two template filters:
truncatewords truncates the value to the number of words specified, and
linebreaks converts the output into HTML line breaks. You can concatenate as many template filters as you wish; each one will be applied to the output generated by the previous one.
Open the shell and execute the command
python manage.py runserver to start the development server. Open in your browser and you will see everything running. Note that you need to have some posts with status Published in order to see them here. You should see something like this:
Then, let's edit the
post/detail.html file and make it look like the following:
{% extends "blog/base.html" %} {% block title %}{{ post.title }}{% endblock %} {% block content %} <h1>{{ post.title }}</h1> <p class="date"> Published {{ post.publish }} by {{ post.author }} </p> {{ post.body|linebreaks }} {% endblock %}
Now, you can go back to your browser and click on one of the post titles to see the detail view of a post. You should see something like this:
Take a look at the URL. It should look like
/blog/2015/09/20/who-was-django-reinhardt/. We have created a SEO friendly URL for our blog posts.
When:
from django.core.paginator import Paginator, EmptyPage,\ PageNotAnInteger def post_list(request): object_list = Post.published.all() paginator = Paginator(object_list, 3) # 3:
We instantiate the
Paginatorclass with the number of objects we want to display in each page.
We get the
pageGET parameter that indicates the current page number.
We obtain the objects for the desired page calling the
page()method of
Paginator.
If the
pageparameter is not an integer, we retrieve the first page of results. If this parameter is a number higher than the last page of results, we retrieve the last page.
We pass the page number and retrieved objects to the template. display the current page and total pages of results. Let's go back to the
blog/post/list.html template and include the
pagination.html template at the bottom of the
{% content %} block, like this:
{% block content %} ... {% include "pagination.html" with page=posts %} {% endblock %}
Since the
Page object we are passing to the template is called
posts, we are including the pagination template into the post list template specifying the parameters to render it correctly. This is the method you can use to reuse your pagination template in paginated views of different models.
Now, open in your browser. You should see the pagination at the bottom of the post list and you should be able to navigate through functionality. This is an alternate method to create your views.
We are going to change. Here, we are telling
ListView to:
Use a specific queryset instead of retrieving all objects. Instead of defining a
querysetattribute, we could have specified
model = Postand Django would have built the generic
Post.objects.all()queryset for us.
Use the context variable
postsfor the query results. The default variable is
object_listif we don't specify any
context_object_name.
Paginate the result displaying three objects per page.
Use a custom template to render the page. If we don't set a default template,
ListViewwill use
blog/post_list.html.
Now, open the
urls.py file of your
blog application, comment the previous
post_list URL pattern, and add a new URL pattern using the
PostListView class as follows:
urlpatterns = [ # post views # url(r'^$', views.post_list, name='post_list'), url(r'^$', views.PostListView.as_view(), name='post_list'), url(r'^(?P<year>\d{4})/(?P<month>\d{2})/(?P<day>\d{2})/'\ r'(?P<post>[-\w]+)/$', views.post_detail, name='post_detail'), ]
In order to keep pagination working, we have to use the right page object that is passed to the template. Django's
ListView passes the selected page in a variable called
page_obj, so you have to edit your
post_list.html template accordingly to include the paginator using the right variable, like this:
{% include "pagination.html" with page=page_obj %}
Open in your browser and check.
In this chapter, you have learned the basics of the Django web framework by creating a basic blog application. You have designed the data models and applied migrations to your project. You have created the views, templates, and URLs for your blog, including object pagination.
In the next chapter, you will learn how to enhance your blog application with a comment system, tagging functionality, and allowing your users to share posts by e-mail.
|
https://www.packtpub.com/product/django-by-example/9781784391911
|
CC-MAIN-2021-21
|
refinedweb
| 6,016
| 57.87
|
to do so.
Now.
If your site uses CGI programs to create dynamic pages, you might be tempted to include your standard headers and footers in the programs' output using #include. Unfortunately, because CGI programs and SSIs use different handlers, there isn't any way for this to work. If you decide to use HTML fragments as headers and footers, you might want to define some short subroutines that can be included in your CGI programs.
Also, because different handlers are used for SSIs (“server-parsed”) and CGI programs (“cgi-script”), you cannot include server-side includes in the output from CGI programs and expect them to be interpreted. If you decide to create a uniform look and feel for your site using HTML fragments (described below), any CGI programs you write will be able to include those fragments. If you write CGI programs in Perl, such a subroutine could look like Listing 2. Your CGI programs would then look like Listing 3. Now when you change header.htmlf or footer.htmlf, all output on the server—from HTML files and CGI programs alike—will immediately reflect the changes.
In case you are wondering, fragments are imported verbatim, and any SSIs they might contain are passed along as HTML comments. Assume we defined header.htmlf to be the following two-line fragment:
<P>This is the header.</P> <!--#printenv -->
If this fragment were retrieved directly through Apache, the #printenv SSI would print the current list of environment variables. But since header.htmlf is imported via a #include SSI, the #printenv function is sent to the user's browser uninterpreted. This might seem unnecessary, until you consider that allowing SSIs inside of included files might lead to infinite loops or other unexpected results.
One of the more interesting recent additions to server-side includes is a limited programming language allowing for the setting and testing of variables.
Setting variables is fairly simple; you can do it with the following syntax:
<!--#set var="varname" value="value" -->
You can see the results with #echo (for a specific list of variables) or #printenv (for all defined variables), as in the following example:
<HTML> <Head><Title>Setting variables</Title></Head> <!--#set var="pi" value="3.14159" --> <pre><!--#printenv --></pre> <P>pi = <!--#echo var="pi" --></P> <HR> <!--#set var="e" value="2.71828" --> <pre><!--#printenv --></pre> <P>e = <!--#echo var="e" --></P> </Body> </HTML>The above example also demonstrates how SSIs are interpreted in the same order as they appear in the file. The output from #printenv changes after each variable setting.
Setting variables is useful when used in conjunction with if-then statements. These statements can be used to create conditional text within HTML files without having to use CGI programs. The syntax is rather simple, for example:
<!--#if expr="$SERVER_PORT=80" --> <P>You are using server port 80</P> <!--#else --> <P>You are using a non-standard server port</P> <!--#endif -->
Note that the variable name in an #if statement must be preceded by a dollar sign, much as with shell scripts. The #else statement is optional, but the #endif is mandatory, indicating the end of the conditional text.
You can even perform pattern-matching within variables, using regular expressions, as in the following:
<HTML> <Head><Title>Browser check</Title></Head> <!--#if expr="$HTTP_USER_AGENT = /^Mozilla/" --> <P>You are using Netscape</P> <!--#else --> <P>You are using another browser</P> <!--#endif --> </Body> </HTML>
If the value of HTTP_USER_AGENT (normally set to a string identifying the user's browser) is set to
Mozilla/4.04 [en] (X11; I; Linux 2.0.30 i586; Nav)as is the case on my system, the above will evaluate to “true”, and thus print the first string. Otherwise, it will print the second string. In this way, you can create menus customized for each browser. For instance, you could make life easier for users of Lynx (a text-only browser) by giving them a separate menu structure that does not rely on images..
|
https://www.linuxjournal.com/article/2919
|
CC-MAIN-2018-30
|
refinedweb
| 663
| 66.03
|
#include <hallo.h> * Jon Dowland [Thu, Oct 28 2004, 12:33:47PM]: > > In case you don't need to run 3d programs, using nv will be fine for you > > ...provided you have a reasonably modern computer or a small > resolution display. On my AMD K6-2 450mhz, the nv driver is unbearably > slow at 1280x1024. I don't think the CPU is your problem. Do you have a TNT-2 or older Nvidia chip? The XVideo support for them is not complete, which means taht watching videos becomes _very_ CPU intensive. Although, the nvidia drivers does support XVideo at TNT-2. With modern Nvidia cards, I did not notice any big difference except of GLX support. And the NV driver is less buggy WRT power saving system modes. Regards, Eduard. -- <martoss> hmm, naja, aber kde 2 ist nicht wirklich prickelnd <weasel> martoss: wenn du was prickelndes willst, trink mineralwasser <martoss> weasel: :-), ich weiss, aber es muss halt auch ein bischen was fürs auge da sein... <youam> martoss: dann tu nen strohhalm rein! :)
|
https://lists.debian.org/debian-user/2004/10/msg03099.html
|
CC-MAIN-2018-05
|
refinedweb
| 172
| 73.27
|
[UNIX] PADS Simple Stack Overflow
From: SecuriTeam (support_at_securiteam.com)
Date: 08/22/04
- Previous message: SecuriTeam: "[UNIX] Sympa Mailing List System Cross Site Scripting"
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] [ attachment ]
To: list@securiteam.com Date: 22 Aug 2004 17:28:15 +0200
The following security advisory is sent to the securiteam mailing list, and can be found at the SecuriTeam web site:
- - promotion
The SecuriTeam alerts list - Free, Accurate, Independent.
Get your security news from a reliable source.
- - - - - - - - -
PADS Simple Stack Overflow
------------------------------------------------------------------------
SUMMARY
" <> PADS is a signature based detection
engine used to passively detect network assets. It is designed to
complement IDS technology by providing context to IDS alerts."
A simple stack overflow exists in PADS when handling the 'w' command line
argument.
DETAILS
Vulnerable Systems:
* PADS version 1.1 and prior
Immune Systems:
* PADS version 1.1.1 or newer
PADS is vulnerable to a buffer overflow when handling the 'w' command line
argument which is used to specify to which file the report should be
written to. There is no bounds checking whatsoever and the optional
argument of the filename is copied directly to a fixed-length buffer using
strcpy(). The piece of relevant code is located at the pads.c file:
.....
char report_file[255] = "assets.csv";
........
case 'w':
strcpy(report_file, optarg);
break;
...........
Vendor Status:
The author of the program was informed and a fix is available, upgrade to
version 1.1.1.
A proof of concept exploit is also provided:
/*
lazy mans exploit
i make no guarantees this will exploit anything, the exploit itself was
coded sloppy
ChrisR-
*/
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
// typical shellcode to spawn a shell, this can be replaced int sp(void)
{__asm__("movl %esp, %eax");}
int main(int argc, char *argv[])
{
int i, offset;
long esp, ret, *addr_ptr;
char *buffer, *ptr;
int buff_size, nop_size;
char prog_path[255];
char prog_name[255];
char prog_arg[255];
if (argc > 1)
{
printf("\nUsage: %s And enter the values\n",argv[0]);
printf("%s is a tool to aide automate local stack overflow testing. You
may need to change the code to fit your needs"
"there is no way to guarntee automation in the exploitation
process, except for basic examples.\n\n",argv[0]);
printf("chris @\n\n");
return 0;
}
printf("Please enter the values as requested . . .\n");
printf("Enter the vulnerable program path: ", prog_path);
scanf("%s", prog_path);
printf("Enter the vulnerable program name: ", prog_name);
scanf("%s", prog_name);
/****if no args req. comment out the next line and the one below that and
fix execl()****/
printf("Enter any arguments the program requires: ", prog_arg);
scanf("%s", prog_arg);
printf("Enter an offset: ");
scanf("%d", &offset);
printf("Enter a buffer size: ");
scanf("%d", &buff_size);
printf("Enter the nop sled size: ");
scanf("%d", &nop_size);
esp = sp();
ret = esp - offset;
printf("\nThe Return Value Is: 0x%x\n", ret);
buffer = malloc(buff_size);
ptr = buffer;
addr_ptr = (long *)ptr;
for (i = offset; i < buff_size; i+=4)
*(addr_ptr++) = ret;
for ( i = offset; i < nop_size; i++)
buffer[i] = '\x90';
ptr = buffer + nop_size;
for ( i = offset; i < strlen(shellcode); i++)
*(ptr++) = shellcode[i];
buffer[buff_size] = 0;
printf("Injecting Shellcode . . .\n\n");
execl(prog_path, prog_name, prog_arg, buffer, 0);
free(buffer);
return 0;
}
Output:
/> ex_bof
Please enter the values as requested . . .
Enter the vulnerable program path: pads
Enter the vulnerable program name: pads
Enter any arguments the program requires: -w
Enter an offset: 0
Enter a buffer size: 600
Enter the nop sled size: 400
The Return Value Is: 0xbffff8b8
Injecting Shellcode . . .
pads - Passive Asset Detection System
v1.1 - 08/14/04
Matt Shelton
sh-3.00$ id
uid=1000(chris) gid=1000(chris)
groups=20(dialout),24(cdrom),25(floppy),1000(chris)
sh-3.00$
ADDITIONAL INFORMATION
The information has been provided by <mailto:chris@cr-secure.net>
Chris] Sympa Mailing List System Cross Site Scripting"
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] [ attachment ]
|
http://www.derkeiler.com/Mailing-Lists/Securiteam/2004-08/0075.html
|
crawl-001
|
refinedweb
| 644
| 53.21
|
The Trie Data Structure in Java
Last modified: September 22, 2020
1. Overview
Data structures represent a crucial asset in computer programming, and knowing when and why to use them is very important.
This article is a brief introduction to trie (pronounced “try”) data structure, its implementation and complexity analysis.
2. Trie
A trie is a discrete data structure that's not quite well-known or widely-mentioned in typical algorithm courses, but nevertheless an important one.
A trie (also known as a digital tree) and sometimes even radix tree or prefix tree (as they can be searched by prefixes), is an ordered tree structure, which takes advantage of the keys that it stores – usually strings.
A node's position in the tree defines the key with which that node is associated, which makes tries different in comparison to binary search trees, in which a node stores a key that corresponds only to that node.
All descendants of a node have a common prefix of a String associated with that node, whereas the root is associated with an empty String.
Here we have a preview of TrieNode that we will be using in our implementation of the Trie:
public class TrieNode { private HashMap<Character, TrieNode> children; private String content; private boolean isWord; // ... }
There may be cases when a trie is a binary search tree, but in general, these are different. Both binary search trees and tries are trees, but each node in binary search trees always has two children, whereas tries' nodes, on the other hand, can have more.
In a trie, every node (except the root node) stores one character or a digit. By traversing the trie down from the root node to a particular node n, a common prefix of characters or digits can be formed which is shared by other branches of the trie as well.
By traversing up the trie from a leaf node to the root node, a String or a sequence of digits can be formed.
Here is the Trie class, which represents an implementation of the trie data structure:
public class Trie { private TrieNode root; //... }
3. Common Operations
Now, let's see how to implement basic operations.
3.1. Inserting Elements
The first operation that we'll describe is the insertion of new nodes.
Before we start the implementation, it's important to understand the algorithm:
- Set a current node as a root node
- Set the current letter as the first letter of the word
- If the current node has already an existing reference to the current letter (through one of the elements in the “children” field), then set current node to that referenced node. Otherwise, create a new node, set the letter equal to the current letter, and also initialize current node to this new node
- Repeat step 3 until the key is traversed
The complexity of this operation is O(n), where n represents the key size.
Here is the implementation of this algorithm:
public void insert(String word) { TrieNode current = root; for (char l: word.toCharArray()) { current = current.getChildren().computeIfAbsent(l, c -> new TrieNode()); } current.setEndOfWord(true); }
Now let's see how we can use this method to insert new elements in a trie:
private Trie createExampleTrie() { Trie trie = new Trie(); trie.insert("Programming"); trie.insert("is"); trie.insert("a"); trie.insert("way"); trie.insert("of"); trie.insert("life"); return trie; }
We can test that trie has already been populated with new nodes from the following test:
@Test public void givenATrie_WhenAddingElements_ThenTrieNotEmpty() { Trie trie = createTrie(); assertFalse(trie.isEmpty()); }
3.2. Finding Elements
Let's now add a method to check whether a particular element is already present in a trie:
- Get children of the root
- Iterate through each character of the String
- Check whether that character is already a part of a sub-trie. If it isn't present anywhere in the trie, then stop the search and return false
- Repeat the second and the third step until there isn't any character left in the String. If the end of the String is reached, return true
The complexity of this algorithm is O(n), where n represents the length of the key.
Java implementation can look like:
public boolean find(String word) { TrieNode current = root; for (int i = 0; i < word.length(); i++) { char ch = word.charAt(i); TrieNode node = current.getChildren().get(ch); if (node == null) { return false; } current = node; } return current.isEndOfWord(); }
And in action:
@Test public void givenATrie_WhenAddingElements_ThenTrieContainsThoseElements() { Trie trie = createExampleTrie(); assertFalse(trie.containsNode("3")); assertFalse(trie.containsNode("vida")); assertTrue(trie.containsNode("life")); }
3.3. Deleting an Element
Aside from inserting and finding an element, it's obvious that we also need to be able to delete elements.
For the deletion process, we need to follow the steps:
- Check whether this element is already part of the trie
- If the element is found, then remove it from the trie
The complexity of this algorithm is O(n), where n represents the length of the key.
Let's have a quick look at the implementation:
public void delete(String word) { delete(root, word, 0); } private boolean delete(TrieNode current, String word, int index) { if (index == word.length()) { if (!current.isEndOfWord()) { return false; } current.setEndOfWord(false); return current.getChildren().isEmpty(); } char ch = word.charAt(index); TrieNode node = current.getChildren().get(ch); if (node == null) { return false; } boolean shouldDeleteCurrentNode = delete(node, word, index + 1) && !node.isEndOfWord(); if (shouldDeleteCurrentNode) { current.getChildren().remove(ch); return current.getChildren().isEmpty(); } return false; }
And in action:
@Test void whenDeletingElements_ThenTreeDoesNotContainThoseElements() { Trie trie = createTrie(); assertTrue(trie.containsNode("Programming")); trie.delete("Programming"); assertFalse(trie.containsNode("Programming")); }
4. Conclusion
In this article, we've seen a brief introduction to trie data structure and its most common operations and their implementation.
The full source code for the examples shown in this article can be found over on GitHub.
In the (common) special case where you know the char will fit in latin-1 (or some other managably small data set), defining children to be an array of TriNode to be indexed by the char is MUCH MUCH faster.
Hey Micha – that’s an interesting note, thanks. Yes, I’m very much aware there are optimizations that can be made here, that will improve the time and space complexity of the algorithms.
However, keep in mind the main focus of this article is to introduce and explain the core concept, not necessarily to reach the most efficient implementation possible.
So, I do agree, we’ll leave that here as a note in case someone needs to optimize our base implementation here.
Thanks,
Eugen.
|
https://www.baeldung.com/trie-java
|
CC-MAIN-2020-50
|
refinedweb
| 1,092
| 50.77
|
Introduction
Hadoop was originally built as infrastructure for the Nutch project, which crawls the web and builds a search engine index for the crawled pages. It provides a distributed filesystem (HDFS) that can store data across thousands of servers, and a means of running work (Map/Reduce jobs) across those machines, running the work near the data.
Contents
Hadoop Map/Reduce
Programming model and execution framework
Map/Reduce is a programming paradigm that expresses a large distributed computation as a sequence of distributed operations on data sets of key/value pairs. The Hadoop Map/Reduce framework harnesses a cluster of machines and executes user defined Map/Reduce jobs across the nodes in the cluster. A Map/Reduce computation has two phases, a map phase and a reduce phase. The input to the computation is a data set of key/value pairs.
In the map phase, the framework splits the input data set into a large number of fragments and assigns each fragment to a map task. The framework also distributes the many map tasks across the cluster of nodes on which it operates. Each map task consumes key/value pairs from its assigned fragment and produces a set of intermediate key/value pairs. For each input key/value pair (K,V), the map task invokes a user defined map function that transmutes the input into a different key/value pair (K',V').
Following the map phase the framework sorts the intermediate data set by key and produces a set of (K',V'*) tuples so that all the values associated with a particular key appear together. It also partitions the set of tuples into a number of fragments equal to the number of reduce tasks.
In the reduce phase, each reduce task consumes the fragment of (K',V'*) tuples assigned to it. For each such tuple it invokes a user-defined reduce function that transmutes the tuple into an output key/value pair (K,V). Once again, the framework distributes the many reduce tasks across the cluster of nodes and deals with shipping the appropriate fragment of intermediate data to each reduce task.
Tasks in each phase are executed in a fault-tolerant manner, if node(s) fail in the middle of a computation the tasks assigned to them are re-distributed among the remaining nodes. Having many map and reduce tasks enables good load balancing and allows failed tasks to be re-run with small runtime overhead.
Architecture
The Hadoop Map/Reduce framework has a master/slave architecture. It has a single master server or jobtracker and several slave servers or tasktrackers, one per node in the cluster. The jobtracker is the point of interaction between users and the framework. Users submit map/reduce jobs to the jobtracker, which puts them in a queue of pending jobs and executes them on a first-come/first-served basis. The jobtracker manages the assignment of map and reduce tasks to the tasktrackers. The tasktrackers execute tasks upon instruction from the jobtracker and also handle data motion between the map and reduce phases.
Hadoop DFS.
Architecture
Like Hadoop Map/Reduce, HDFS follows a master/slave architecture. An HDFS installation consists of a single Namenode, a master server that manages the filesystem namespace and regulates access to files by clients. In addition, there are a number of Datanodes, one per node in the cluster, which manage storage attached to the nodes that they run on. The Namenode makes filesystem namespace operations like opening, closing, renaming etc. of files and directories available via an RPC interface. It also determines the mapping of blocks to Datanodes. The Datanodes are responsible for serving read and write requests from filesystem clients, they also perform block creation, deletion, and replication upon instruction from the Namenode.
Scalability
The intent is to scale Hadoop up to handling thousand of computers. Some real Hadoop clusters are described on the PoweredBy page.
|
https://wiki.apache.org/hadoop/ProjectDescription
|
CC-MAIN-2017-30
|
refinedweb
| 649
| 51.89
|
I need to change an image's border color on hover and remove the border when the hover is removed. Is there a way to do this in Xamarin?
public class MouseoverCursorAppearanceEffect : PlatformEffect { private FrameworkElement _element; /// <summary>Method that is called after the effect is attached and made valid.</summary> /// <remarks>To be added.</remarks> protected override void OnAttached() { _element = Control ?? Container; _element.PointerEntered -= OnPointerEntered; _element.PointerExited -= OnPointerExited; _element.PointerEntered += OnPointerEntered; _element.PointerExited += OnPointerExited; } private void OnPointerExited(object sender, PointerRoutedEventArgs e) { Window.Current.CoreWindow.PointerCursor = new CoreCursor(CoreCursorType.Arrow, 1); } private void OnPointerEntered(object sender, PointerRoutedEventArgs e) { Window.Current.CoreWindow.PointerCursor = new CoreCursor(CoreCursorType.Hand, 1); } /// <summary>Method that is called after the effect is detached and invalidated.</summary> /// <remarks>To be added.</remarks> protected override void OnDetached() { Window.Current.CoreWindow.PointerCursor = new CoreCursor(CoreCursorType.Arrow, 1); _element.PointerEntered -= OnPointerEntered; _element.PointerExited -= OnPointerExited; } }.
Answers
@XamarinBeginner it sounds like you're looking to apply styling or implement a rule. You can try doing a declaration block which consist of a list of declarations for the specific image border you want changed.
@XamarinBeginner No, definitely don't use CSS styling for anything.
This is not web dev, this is XAML - MVVM world, do it with XAML styles if you do styling. That fits 99% of the companies you could possibly get a Xamarin job with.
As to your question, the way I would do it is using an Effect. I have a coworker who has written a mouseover effect before, I just don't have access to show the right code currently.
It is relatively easy though, not too tough if you know anything about the underlying platform. I assume you are doing this for UWP/Mac, as mouseover really doesn't make sense on the mobile app side.
@AdamMeaney Yes, I am doing it for Xamarin UWP. Could you please give an idea on how to implement/write the mouseover effect on image?
@NMackay Sorry I was not clear before. I need mouseover effect on a button as well as standalone image as well.
I have tried this link before for the button mouseover. When I had tried it, the button had some issues - i.e when we put image inside button, image was not completely filling inside the button, there was space between the border of the button and the image. How do I reduce the space? I tried setting padding="0" but it did not work.
Do you know any mouse over effect that can be implemented on image element only?
Thanks
Oh right, if it's just a plain image that might be tricky. The Xamarin render effectively exposes the same events as the native renderer
There's no mouse over event.
Seen this done before with storyboards in WPF, something similar to this but in this sample the image is wrapped in a parent UIElement
Why don't you wrap you image in a parent control using a custom control and then get the parent renderer to take care of the mouseover events? that's the approach I would be taking.
But there is no gridview in Xamarin
. We have only Grid in Xamarin. We cannot implement mouse over effect like in this link in xamarin right?
Well I was thinking you'd have to completely override the layout renderer and write your own control which seems a lot of grief.
I think you need to investigate the button render further as these hooks are there, and use styling to achieve your hover effect.
Really, I hate UWP with a passion as it's the platform that restricts our cross platform app the most and i wish we didn't have to support it. I don't have time to look further into this.
@ColeX @jezh Any ideas?.
@AdamMeaney
Thanks for sharing.
@NMackay @AdamMeaney Thanks for sharing the code.
Make sure you mark Adam's post as an answer (which it is) if it helps you.
@AdamMeaney I have tried the code you shared. It is possible to access the control in which effect is added on PointerEntered event. Is there a way to access other objects in the page as well on the same PointerEntered event?
For example, I am adding effect to a button in a page. If we need to access a label or grid in the same page on the pointer entered event of button, is there a way to do that?
I mean its possible by looking at the parent of the Element in the Effect and working your way up, but I don't recommend it.
If you need to do some other processing in Forms when the pointer events happen, you can expose Events on the Forms Effect that you fire from this effect.
This lets your page subscribe to those events and react in Forms code.
|
https://forums.xamarin.com/discussion/147498/how-can-i-change-an-image-border-color-on-hover-in-xamarin
|
CC-MAIN-2019-35
|
refinedweb
| 807
| 67.15
|
I'm fairly new to Arduino's and C++ coding and have been given a few projects to help me learn the ropes.
One of them is to use a servo motor with a ultrasonic sensor where depending on where the sensor picks up the distance the motor moves. Right now I have found a line of code online as a base but it only has 2 movements 0 and 180. I want for it to be where the servo motor moves depending on where the sensor picks it up from all the numbers from 0 to 180 degrees on the motor not just 0 and 180. Any tips on what to change and do to fix would be greatly appreciated.
#include <Servo.h> // constants won't change const int TRIG_PIN = 6; // Arduino pin connected to Ultrasonic Sensor's TRIG pin const int ECHO_PIN = 7; // Arduino pin connected to Ultrasonic Sensor's ECHO pin const int SERVO_PIN = 9; // Arduino pin connected to Servo Motor's pin const int DISTANCE_THRESHOLD = 50; // centimeters Servo servo; // create servo object to control a servo // variables will change: float duration_us, distance_cm; void setup() { Serial.begin (9600); // initialize serial port pinMode(TRIG_PIN, OUTPUT); // set arduino pin to output mode pinMode(ECHO_PIN, INPUT); // set arduino pin to input mode servo.attach(SERVO_PIN); // attaches the servo on pin 9 to the servo object servo.write(0); } void loop() { // generate 10-microsecond pulse to TRIG pin digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10); digitalWrite(TRIG_PIN, LOW); // measure duration of pulse from ECHO pin duration_us = pulseIn(ECHO_PIN, HIGH); // calculate the distance distance_cm = 0.017 * duration_us; if (distance_cm < DISTANCE_THRESHOLD) servo.write(180); // rotate servo motor to 90 degree else servo.write(0); // rotate servo motor to 0 degree // print the value to Serial Monitor Serial.print("distance: "); Serial.print(distance_cm); Serial.println(" cm"); delay(500); }
|
https://forum.arduino.cc/t/servo-motor-with-a-ultrasonic-sensor/921109
|
CC-MAIN-2021-49
|
refinedweb
| 302
| 50.26
|
ASP.NET MVC 101
If you run the skeleton application just created by pressing F5, your browser should display a page similar to that on Figure 3.
Figure 3: The basic ASP.NET MVC application look.
Often, the best way to learn a new technique is to see how an example application has been built. The skeleton created by the MVC project template is one such example. ASP.NET MVC applications handle the web requests thru controller classes, and the request from the browser is directed to a particular controller via routes.
Routes are a new feature in .NET 3.5 SP1, and ASP.NET MVC uses this feature to give controller classes the capability to respond to requests. ASP.NET MVC uses REST-like URLs (Representational State Transfer) that are cleaner than regular ASP.NET web application URLs. Here are examples of such URLs:
/products/show/881 /customers/list /login/register
As you can see, REST-like URLs tend to be clean, simple, and don't expose .aspx files directly on the server. Although you can have directly addressed .aspx pages in ASP.NET MVC applications, this is not the main idea.
Through developer-defined routes, the ASP.NET MVC application can direct requests to controllers. Routes are defined once for the entire ASP.NET application, so the Global.asax file is the logical place to define these. In fact, if you take a look at the default Global.asax.cs file in an ASP.NET MVC application, you will see a method called RegisterRoutes defined. This method is called from the Application_Start method, and implemented like this:
public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( "Default", "{controller}/{action}/{id}", new { controller = "Home", action = "Index", id = "" } ); }
By default, a route called "Default" is defined with three parts: controller, action, and finally an ID separated by slashes. When a request arrives at the MVC application, the URL is parsed according to this definition. For instance, given the request for "/products/show/881", this would be parsed so that the "controller" parameter would have the value of "products", the "action" parameter would be "show", and finally the "id" would be "881".
Note: Parameter values can be optional, in which case the route also has default values specified. For instance, if the action parameter value does not exist, the default value of "Index" is used. Similarly, "Home" is the default value for the controller. Given a request of "/" (the root), the Home controller is used with the "Index" action.
Handling Requests with Controllers
If you recall the default routing from the previous code listing, you know that in the skeleton ASP.NET MVC application, the Home controller is the default controller for requests that do not specify any controller in the request URL.
Controller classes live in the Controllers folder in the Visual Studio solution. From here, you can by default find two controller classes: the HomeController and the AccountController. The naming convention is that each controller class should have the word "Controller" in its name. This way, the MVC framework can redirect a request for "/products" for the ProductsController class.
Each of your own controller classes descends from the Controller class, which is part of the System.Web.Mvc namespace, the default namespace for all classes related to MVC. Then, each public method in your controller class can handle an action based on the method's name. For example, in the case of the previous request to "/products/show/881", the ProductsController's "Show" method would be called.
This method would be the place to write the logic to handle the request. Normally, a controller method returns a type called ActionResult (a class defined also in the System.Web.Mvc namespace), which is most commonly a view object, although it also can be something else. As you will recall, the view is the actual representation of the data, most often HTML code.
When developing your ASP.NET MVC application, you want to add our own actions and controllers. This is very easy indeed, because all you need to do is write more code! That is, if you want to add new actions such as "delete" or "insert" to your ProductsController, it is enough just to write those methods. You do not need to add anything to the routes in the Global.asax.cs file because the routing system is intelligent enough to do all this for you. The same goes with controller classes; just add your new controllers to the Controllers folder in the solution.
Returning Views to the Browser
Up to this point, you have learned about controllers. Next, you will see how controllers can manipulate a view and display data on it. Because the idea behind MVC applications is that you separate the logic from the view, it would not make much sense if the controller would construct HTML code directly to be returned to the user.<<
|
https://www.developer.com/net/article.php/10916_3788416_2/ASPNET-MVC-101.htm
|
CC-MAIN-2018-51
|
refinedweb
| 823
| 65.83
|
Guide to AVL Trees in Java
Last modified: March 8, 2020
1. Introduction
In this tutorial, we'll introduce the AVL Tree and we'll look at algorithms for inserting, deleting, and searching for values.
2. What Is AVL Tree?
The AVL Tree, named after its inventors Adelson-Velsky and Landis, is a self-balancing binary search tree (BST).. For a BST with N nodes, let's say that that every node has only zero or one child. Therefore its height equals N, and the search time in the worst case is O(N). So our main goal in a BST is to keep the maximum height close to log(N).
The balance factor of node N is height(right(N)) – height(left(N)). In an AVL Tree, the balance factor of a node could be only one of 1, 0, or -1 values.
Let's define a Node object for our tree:
public class Node { int key; int height; Node left; Node right; ... }
Next, let's define the AVLTree:
public class AVLTree { private Node root; void updateHeight(Node n) { n.height = 1 + Math.max(height(n.left), height(n.right)); } int height(Node n) { return n == null ? -1 : n.height; } int getBalance(Node n) { return (n == null) ? 0 : height(n.right) - height(n.left); } ... }
3. How to Balance an AVL Tree?
The AVL Tree checks the balance factor of its nodes after the insertion or deletion of a node. If the balance factor of a node is greater than one or less than -1, the tree rebalances itself.
There are two operations to rebalance a tree:
- right rotation and
- left rotation.
3.1. Right Rotation
Let's start with the right rotation.
Assume we have a BST called T1, with Y as the root node, X as the left child of Y, and Z as the right child of X. Given the characteristics of a BST, we know that X < Z < Y.
After a right rotation of Y, we have a tree called T2 with X as the root and Y as the right child of X and Z as the left child of Y. T2 is still a BST because it keeps the order X < Z < Y.
Let's take a look at the right rotation operation for our AVLTree:
Node rotateRight(Node y) { Node x = y.left; Node z = x.right; x.right = y; y.left = z; updateHeight(y); updateHeight(x); return x; }
3.2. Left Rotation Operation
We also have a left rotation operation.
Assume a BST called T1, with Y as the root node, X as the right child of Y, and Z as the left child of X. Given this, we know that Y < Z < X.
After a left rotation of Y, we have a tree called T2 with X as the root and Y as the left child of X and Z as the right child of Y. T2 is still a BST because it keeps the order Y < Z < X.
Let's take a look at the left rotation operation for our AVLTree:
Node rotateLeft(Node y) { Node x = y.right; Node z = x.left; x.left = y; y.right = z; updateHeight(y); updateHeight(x); return x; }
3.3. Rebalancing Techniques
We can use the right rotation and left rotation operations in more complex combinations to keep the AVL Tree balanced after any change in its nodes. In an unbalanced structure, at least one node has a balance factor equal to 2 or -2. Let's see how we can balance the tree in these situations.
When the balance factor of node Z is 2, the subtree with Z as the root is in one of these two states, considering Y as the right child of Z.
For the first case, the height in the right child of Y (X) is greater than the hight of the left child (T2). We can rebalance the tree easily by a left rotation of Z.
For the second case, the height of the right child of Y (T4) is less than the height of the left child (X). This situation needs a combination of rotation operations.
In this case, we first rotate Y to the right, so the tree gets in the same shape as the previous case. Then we can rebalance the tree by a left rotation of Z.
Also, when the balance factor of node Z is -2, its subtree is in one of these two states, so we consider Z as the root and Y as its left child.
The height in the left child of Y is greater than that of its right child, so we balance the tree with the right rotation of Z.
Or in the second case, the right child of Y has a greater height than its left child.
So, first of all, we transform it into the former shape with a left rotation of Y, then we balance the tree with the right rotation of Z.
Let's take a look at the rebalance operation for our AVLTree:
Node rebalance(Node z) { updateHeight(z); int balance = getBalance(z); if (balance > 1) { if (height(z.right.right) > height(z.right.left)) { z = rotateLeft(z); } else { z.right = rotateRight(z.right); z = rotateLeft(z); } } else if (balance < -1) { if (height(z.left.left) > height(z.left.right)) z = rotateRight(z); else { z.left = rotateLeft(z.left); z = rotateRight(z); } } return z; }
We'll use rebalance after inserting or deleting a node for all the nodes in the path from the changed node to the root.
4. Insert a Node
When we're going to insert a key in the tree, we have to locate its proper position to pass the BST rules. So we start from the root and compare its value with the new key. If the key is greater, we continue to the right — otherwise, we go to the left child.
Once we find the proper parent node, then we add the new key as a node to left or right, depending on the value.
After inserting the node, we have a BST, but it may not be an AVL Tree. Therefore, we check the balance factors and rebalance the BST for all the nodes in the path from the new node to the root.
Let's take a look at the insert operation:
Node insert(Node node, int key) { if (node == null) { return new Node(key); } else if (node.key > key) { node.left = insert(node.left, key); } else if (node.key < key) { node.right = insert(node.right, key); } else { throw new RuntimeException("duplicate Key!"); } return rebalance(node); }
It is important to remember that a key is unique in the tree — no two nodes share the same key.
The time complexity of the insert algorithm is a function of the height. Since our tree is balanced, we can assume that time complexity in the worst case is O(log(N)).
5. Delete a Node
To delete a key from the tree, we first have to find it in the BST.
After we find the node (called Z), we have to introduce the new candidate to be its replacement in the tree. If Z is a leaf, the candidate is empty. If Z has only one child, this child is the candidate, but if Z has two children, the process is a bit more complicated.
Assume the right child of Z called Y. First, we find the leftmost node of the Y and call it X. Then, we set the new value of Z equal to X ‘s value and continue to delete X from Y.
Finally, we call the rebalance method at the end to keep the BST an AVL Tree.
Here is our delete method:
Node delete(Node node, int key) { if (node == null) { return node; } else if (node.key > key) { node.left = delete(node.left, key); } else if (node.key < key) { node.right = delete(node.right, key); } else { if (node.left == null || node.right == null) { node = (node.left == null) ? node.right : node.left; } else { Node mostLeftChild = mostLeftChild(node.right); node.key = mostLeftChild.key; node.right = delete(node.right, node.key); } } if (node != null) { node = rebalance(node); } return node; }
The time complexity of the delete algorithm is a function of the height of the tree. Similar to the insert method, we can assume that time complexity in the worst case is O(log(N)).
6. Search for a Node
Searching for a node in an AVL Tree is the same as with any BST.
Start from the root of the tree and compare the key with the value of the node. If the key equals the value, return the node. If the key is greater, search from the right child, otherwise continue the search from the left child.
The time complexity of the search is a function of the height. We can assume that time complexity in the worst case is O(log(N)).
Let's see the sample code:
Node find(int key) { Node current = root; while (current != null) { if (current.key == key) { break; } current = current.key < key ? current.right : current.left; } return current; }
7. Conclusion
In this tutorial, we have implemented an AVL Tree with insert, delete, and search operations.
As always, you can find the code over on Github.
|
https://www.baeldung.com/java-avl-trees
|
CC-MAIN-2021-17
|
refinedweb
| 1,547
| 82.44
|
Terminate a connection with the composited windowing system.
#include <screen/screen.h>
int screen_destroy_context(screen_context_t ctx)
The connection to Screen that is to be terminated. This context must have been created with screen_create_context().
Function type: Apply Execution
This function closes an existing connection with the composited windowing system resource manager; the context is freed and can no longer be used. All windows and pixmaps associated with this connection will be destroyed. All events waiting in the event queue will be discarded. This operation does not flush the command buffer. Any pending asynchronous commands are discarded.
0 if the context was destroyed, or -1 if an error occurred (errno is set; refer to /usr/include/errno.h for more details). Note that the error may also have been caused by any delayed execution function that's just been flushed.
|
http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.screen/topic/screen_destroy_context.html
|
CC-MAIN-2018-13
|
refinedweb
| 138
| 50.94
|
CodePlexProject Hosting for Open Source Software
How can I refresh the entire page with an Action?
I've got a form in FirstAside with a textbox and a submit button which point to an action "Search" in a controller. (All this in a widget)
What i want to do is Refresh whole page so another widget (located in Content Zone) will display informations that match the current search.
Note: I do not use content items from Orchard but external data fetched by a WebService... That's why I don't use projections.
Until now, my controller looks like this:
public ActionResult SearchInstallation(string textToSearch) {
//Get All the Installations
List<Services.Customer.TransportModel.Installation.Installation> lst =
(List<Services.Customer.TransportModel.Installation.Installation>)(new Client().GetInstallationCollection(null, null));
List<Services.Customer.TransportModel.Installation.Installation> result;
// Get Installations that matches textToSearch
result = lst.FindAll(f => f.Number.Contains(textToSearch));
//return ContentShape("Parts_Installation", () => shapeHelper.Parts_Installation(ContentItems: result));
return View("Parts/Installation", result);
}
I'm doin ol' plain MVC stuff but I guess it is not the right way to do this isn't it?Is there someone who can helps me? Thanks
I'm doin ol' plain MVC stuff but I guess it is not the right way to do this isn't it?
Is there someone who can helps me?
Thanks
Not sure what you mean by refresh. Can you explain exactly the scenario you are trying to implement?
Sorry, I'm new to Orchard and maybe the terms I use are not the right ones.
First, i've created a widget which displays external data in a grid. My display method looks like this:
protected override DriverResult Display(InstallationPart part, string displayType, dynamic shapeHelper) {
List<Services.Customer.TransportModel.Installation.Installation> model =
(List<Services.Customer.TransportModel.Installation.Installation>)(new Client().GetInstallationCollection(null, null));
return ContentShape("Parts_Installation", () => shapeHelper.Parts_Installation(ContentItems: model));
}
This use the GetInstallationCollection method to retrieve all the Data needed to fill a Kendo Grid. I'm passing the list to the template with a dynamic property named "ContentItems" This works all fine. My template's name is /Views/Part/Installation.cshtml.
Second, I've created a second widget (that i placed in FirstAside) which consists of a textbox and submit button. This button calls an action of the controller (pasted in previous post). My form looks like that.
@using (Html.BeginForm("SearchInstallation", "Installation", new { controller = "Installation", area = "MyProject.Installation" }, FormMethod.Post)) {
@Html.AntiForgeryTokenOrchard()
@Html.TextBox("textToSearch")
<input
type="submit"
value=">"
/>
<br
/>
}
Up to now, no problem! My action is called perfectly... but now, when my view is returned, everything screwed up! I'm loosing everything in content zone... As you can see, my action call GetInstallationCollection again, filters the result with the FindAll method
then send this data back with the view so the (grid) Widget can display filtered values.
The "refresh" means that my grid widget needs to "filter" based on search parameter in my search widget, i i'm stuck on how to do it. Since i'm learning Orchard, i'm pretty sure i've made a lot of bad code and mistakes and i'm looking foward
to achieve this in a correct way ;)
I've seen a lot of differences between
return ContentShape("Parts_Installation", () => shapeHelper.Parts_Installation(ContentItems: model));
and
return View("Parts/Installation", result);
In the first approach, the model's Type is a shape and in the second option, my model' Type is a List. I'm really stuck in all this...
- How can an Action (plain MVC) can be used in an Orchard context?
- How can it be possible to make 2 widgets to communicate
- What am I missing
Thank you very much for your precious help.
Pat
- How can an Action (plain MVC) can be used in an Orchard context?
Easily. Orchard is built on MVC, so it just works. Only caveats is that you need to play nice with custom routes, defining them in an IRouteProvider implementation because they are shared resources, and that you have additional capabilities to return shapes
from actions and you can mark your actions as themed.
- How can it be possible to make 2 widgets to communicate
You shouldn't. Instead, both widgets should take their information from a common source.
Thanks Bertrand,
Do you have an example or can you point me some doc to allow both Widgets sharing the same source?
Also, i think i've found a clue this morning, but i'm not sure how this could end. I've read this post and i've learned how i can make a widget without Driver and Parts. This way of doing things helps me a lot because i'm only using external data.
I've also how to return a ShapeResult within my controller as you told me in previous post... like this:
public ActionResult SearchInstallation(string textToSearch) {
dynamic myDisplay = _services.New.TestInfo(
TextToDisplay: textToSearch
);
return new ShapeResult(this, myDisplay);
}
This is perfect so i can take my searched text and pass that text to my Shape (TestInfo). The problem is that this shape I return is always displaying in the Content Zone and what i really need is display that "TextToDisplay" in the Header. Can
I achieve that using Placement.info? Or do I have to specify it in my Action? Or do I have to use alternates?
Regards,
Placement can target top-level zones by adding a leading slash: "/AsideSecond:5" for example.
Yeah, i tried that and it doesn't seem to work. I'm probably wrong with the name
<Place
TheWrongName="/AsideSecond:5"/>
What name should i write here. My shape's name is TestInfo but its a dynamic one with no part. (like this example)
I guess i'll have to use a part to be able to use placement.info?
My widget's name is TestInfoWidget as defined in my Migration.cs file:
ContentDefinitionManager.AlterTypeDefinition("TestInfoWidget",
cfg => cfg
.WithPart("WidgetPart")
.WithPart("CommonPart")
.WithSetting("Stereotype",
"Widget"));
And my handler looks like this:
public class TestInfoWidgetHandler : ContentHandler {
public TestInfoWidgetHandler() {
}
protected override void BuildDisplayShape(BuildDisplayContext context) {
base.BuildDisplayShape(context);
if (context.ContentItem.ContentType == "TestInfoWidget") {
dynamic myDisplay = context.New.TestInfo(
TextToDisplay: "ABCDEF"
);
context.Shape.Zones["Content"].Add(myDisplay);
}
}
}
Also, you wrote about 2 widgets sharing the same source, how can I do that? It me also be a good path to explore.
Cheers,
Right, that's not a content part. Placement is for shapes generated by content parts. In that case you just add the shape to a zone.
I don't know what more to tell you about 2 widgets sharing the same source. Your widgets are taking their data from somewhere. You'll need to be more specific.
I have an API which is used to call WebServices. The method that I use is GetInstallationCollection() which returns a Generic List of Installations
List<Installation> lst = GetInstallationCollection()
This list have to be the source to every widget in my application.
- The widget "Search" (in AsideFirst) will filter that list
- The widget "Grid" (In Content) will display the List<Installation> according to the specified filter in the widget "Search"
Up to now the only way i managed to do this is ths way:
1) The search widget calls an action in a controller
2) This action stores filter in the HttpContext in the session
3) A handler is defined for my "Grid" widget trap the OnLoaded<InstallationPart>, get the filter in session and then calls GetInstallationCollection() to get my data from the WebService
4) The list is passed to a property of my InstallationPart (InstallationPart.ListInstallation = lst)
5) Finally Display method from the InstallationDriver looks like that:
protected override DriverResult Display(InstallationPart part, string displayType, dynamic shapeHelper) {
return ContentShape("Parts_Installation", () => shapeHelper.Parts_Installation(ContentPart: part));
}
Am I doing wrong? Is there another way (better) of doing this?
I really appreciate your help Bertrand
Pat
That sounds just fine.
Great, i'll continue that way!
BTW, i really like Orchard. You are all doing very good with this application. We are looking to use it and we will suggest it to our clients who need a good CMS in .NET.
Thanks again,
Note: I'll son post my entire solution so someone else could benefit from my research. Is there a way to easily contribute to the knowledge base?
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later.
|
http://orchard.codeplex.com/discussions/405029
|
CC-MAIN-2017-26
|
refinedweb
| 1,412
| 57.37
|
Since ATtiny13 does not have hardware USART/UART in some cases we’re forced to use Software UART. In this example project a simple bit-banging UART has been presented. Compiled for both TX and RX it uses just 248 bytes of flash. Default serial configuration is 8/N/1 format at 19200 baud (lowest error rates during tests). It can be easily changed to other standard RS-232 baudrate. Note that it is a great solution for a simple debug/logging but to other purposes I would recommend using an AVRs with a built-in UART. The code is on Github, click here.
Edit: To make a simpler integration with external projects I have created a little library:
Parts Required
- ATtiny13 – i.e. MBAVR-1 development board
Circuit Diagram
Serial communications tools
1) TTL Serial Adapter (UART <-> USB converter)
2) Serial terminal
I usually use CuteCom – a graphical serial port communications program, similar to Minicom.
Or python scripts, i.e.:
#!/usr/bin/env python import serial import time class Client: def __init__(self, port="/dev/ttyUSB0"): self.port = serial.Serial( port=port, baudrate=19200, stopbits=serial.STOPBITS_ONE, bytesize=serial.EIGHTBITS) def loop(self): while True: req = "%s\r\n" % raw_input("client$ ") n = self.port.write(req) print "Sent %d bytes" % n time.sleep(0.01) rep = self.port.readline() print "Received %d bytes, rep=\"%s\"" % (len(rep), rep.replace("\r\n", "")) if __name__ == '__main__': Client().loop()
Firmware
This code is written in C and can be compiled using the avr-gcc. More details on how compile this project is here.
#include <avr/io.h> #include <util/delay.h> #include <avr/interrupt.h> #define UART_RX_ENABLED (1) // Enable UART RX #define UART_TX_ENABLED (1) // Enable UART TX #ifndef F_CPU # define F_CPU (1200000UL) // 1.2 MHz #endif /* !F_CPU */ #if defined(UART_TX_ENABLED) && !defined(UART_TX) # define UART_TX PB3 // Use PB3 as TX pin #endif /* !UART_TX */ #if defined(UART_RX_ENABLED) && !defined(UART_RX) # define UART_RX PB4 // Use PB4 as RX pin #endif /* !UART_RX */ #if (defined(UART_TX_ENABLED) || defined(UART_RX_ENABLED)) && !defined(UART_BAUDRATE) # define UART_BAUDRATE (19200) #endif /* !UART_BAUDRATE */ #define TXDELAY (int)(((F_CPU/UART_BAUDRATE)-7 +1.5)/3) #define RXDELAY (int)(((F_CPU/UART_BAUDRATE)-5 +1.5)/3) #define RXDELAY2 (int)((RXDELAY*1.5)-2.5) #define RXROUNDED (((F_CPU/UART_BAUDRATE)-5 +2)/3) #if RXROUNDED > 127 # error Low baud rates unsupported - use higher UART_BAUDRATE #endif static char uart_getc(); static void uart_putc(char c); static void uart_puts(const char *s); int main(void) { char c, *p, buff[16]; uart_puts("Hello World!\n"); /* loop */ while (1) { p = buff; while((c = uart_getc()) != '\n' && (p - buff) < 16) { *(p++) = c; } *p = 0; _delay_ms(10); uart_puts(buff); } } char uart_getc(void) { #ifdef UART_RX_ENABLED char c; uint8_t sreg; sreg = SREG; cli(); PORTB &= ~(1 << UART_RX); DDRB &= ~(1 << UART_RX); __asm volatile( " ldi r18, %[rxdelay2] \n\t" // 1.5 bit delay " ldi %0, 0x80 \n\t" // bit shift counter "WaitStart: \n\t" " sbic %[uart_port]-2, %[uart_pin] \n\t" // wait for start edge " rjmp WaitStart \n\t" "RxBit: \n\t" // 6 cycle loop + delay - total = 5 + 3*r22 // delay (3 cycle * r18) -1 and clear carry with subi " subi r18, 1 \n\t" " brne RxBit \n\t" " ldi r18, %[rxdelay] \n\t" " sbic %[uart_port]-2, %[uart_pin] \n\t" // check UART PIN " sec \n\t" " ror %0 \n\t" " brcc RxBit \n\t" "StopBit: \n\t" " dec r18 \n\t" " brne StopBit \n\t" : "=r" (c) : [uart_port] "I" (_SFR_IO_ADDR(PORTB)), [uart_pin] "I" (UART_RX), [rxdelay] "I" (RXDELAY), [rxdelay2] "I" (RXDELAY2) : "r0","r18","r19" ); SREG = sreg; return c; #else return (-1); #endif /* !UART_RX_ENABLED */ } void uart_putc(char c) { #ifdef UART_TX_ENABLED uint8_t sreg; sreg = SREG; cli(); PORTB |= 1 << UART_TX; DDRB |= 1 << UART_TX; __asm volatile( " cbi %[uart_port], %[uart_pin] \n\t" // start bit " in r0, %[uart_port] \n\t" " ldi r30, 3 \n\t" // stop bit + idle state " ldi r28, %[txdelay] \n\t" "TxLoop: \n\t" // 8 cycle loop + delay - total = 7 + 3*r22 " mov r29, r28 \n\t" "TxDelay: \n\t" // delay (3 cycle * delayCount) - 1 " dec r29 \n\t" " brne TxDelay \n\t" " bst %[ch], 0 \n\t" " bld r0, %[uart_pin] \n\t" " lsr r30 \n\t" " ror %[ch] \n\t" " out %[uart_port], r0 \n\t" " brne TxLoop \n\t" : : [uart_port] "I" (_SFR_IO_ADDR(PORTB)), [uart_pin] "I" (UART_TX), [txdelay] "I" (TXDELAY), [ch] "r" (c) : "r0","r28","r29","r30" ); SREG = sreg; #endif /* !UART_TX_ENABLED */ } void uart_puts(const char *s) { while (*s) uart_putc(*(s++)); }
17 thoughts on “ATtiny13 – software UART (debug logger)”
The asm code looks familiar. 🙂
Ralph, thanks for emailing me with the optimization hint! Will try it 🙂
Hi Łukasz , I am so thankful to you for this solution. Simple and works right away. I have been struggling with debugging my ATTiny13 but not anymore.
Thank you so much for this! Keep up the good work!
Hi Vijay, I hope to put here a lot more entries about ATtiny13, soon. Thanks!
I experienced some problems while trying the IR receiver demo where I was not able to have a UART communication. So I tried this part and it works for me out of the box. When I change the demo here to FUSE_L=0x7A, FUSE_H=0xFF, F_CPU=9600000, -DUART_BAUDRATE=57600 (taken from the IR project), I get some compile errors:
main.c:70:2: warning: asm operand 4 probably doesn’t match constraints
__asm volatile(
^
main.c:70:2: error: impossible constraint in ‘asm’
I’m now confused as the code looks identical when comparing these projects. I would be glad if I could get a helping hand ….
When I switch to 115200, the example works again. Do you have an explanation for this behaviour?
I guess it’s a problem with internal clock which is not so stable.
Works right out of the box 🙂 thank you very much Łukasz.
Great. 🙂
Hi, I like your blog and I share your addiction to such a tiny but powerful devices as attiny13. There no too much resources in internet and your blog is real gem for me. So, first of all – thank you for the great work you did.
I have no too much experience in AVRs and may be I’m doing something wrong but I’ve played a little bit with your code and noticed couple of issues. When I’ve added just an additional if-else branch in main() inside while loop, the “Hello World!” stopped working, which is pretty weird. Seems like memory or registers corruption (it might be that assembly code somehow interfere with code generated by gcc?)
Also, it seems me excessive to do DDRB |= 1 << UART_TX each time when you need to send character (and may be PORTB |= 1 << UART_TX as well). Would it be more efficient to take it out of the function and perform only once in kind of "setup()"?
Thanks, Alex. Comments like that give me a power for making next projects.
Issue with program hangs was probably caused by incorrect condition inside a loop (Boma-fix). It has been updated on page after you reported concerns about that bug. Yes, you can achieve one-time initialization of I/O pins by refactoring the code. Note that these redundant instructions cost is two clock-cycles while additional setup function will increase the size of program space. This compromise between CPU speed and program space depend on project requirements. You can choose what fits you best.
I’ve compiled the code from the repo and uploaded to an attiny13 using an arduino mini. The hex is successfully uploaded without error, but when I connect the uart adapter to the attiny and open the serial monitor there is no response. Why?
Hi, I must admit that I never tried upload hex files using an Arduino and I cant help with what could go wrong. I always recommend to check twice if there is a bug related to wiring or mistake in firmware code.
There’s error in main(). There should be (p – buff) < 16.
Fixed. Thank you for reporting that bug.
Fixed where? In the page it still with bug.
Good eyes! Thanks. Fix has been applied to repo. Now page is updated, too.
|
https://blog.podkalicki.com/attiny13-software-uart-debug-logger/
|
CC-MAIN-2020-05
|
refinedweb
| 1,326
| 65.22
|
We'd like to implement the proposed CSS3 Transform property, which WebKit recently added to their development trunk. ('-webkit-transform') WebKit's proposal spec is: and a simple demo page is:
To try out WebKit's behavior on transforms, you can get WebKit nightlies at: or in Ubuntu Linux: sudo apt-get install midori (midori uses a fairly recent webkit trunk build)
Marking wanted1.9.1? to get this on the triage queue.
Bumpin' to P1 as I think this would be great...
Keith Schwarz is taking this on as his internship project. Go Keith! :)
Created attachment 328390 [details] [diff] [review] WIP Patch #1 Basic (and somewhat broken) functionality... can parse the -moz-transform property and render it correctly, but has some issues with invalidation. Click detection is not yet implemented.
Created attachment 328393 [details] [diff] [review] WIP Patch #2 Correction... previous patch is invalid. This is the current WIP patch.
Comment on attachment 328393 [details] [diff] [review] WIP Patch #2 >+PRBool nsDisplayTransform::OptimizeVisibility(nsDisplayListBuilder *aBuilder, nsRegion *aVisibleRegion) >+{ >+ return PR_TRUE; >+} >+ >+#include <time.h> >+ >+/* HitTest does some fun stuff with matrix transforms to obtain the answer. */ >+nsIFrame *nsDisplayTransform::HitTest(nsDisplayListBuilder *aBuilder, nsPoint aPt, HitTestState *aState) You probably want this "#include <time.h>" at the top of the file with the other #includes, right?
Created attachment 328422 [details] Test Case 1 This is a test case that showcases Javascript interacting with the transform property. It should cause the Google logo to grow, shrink, and rotate. As of the second WIP patch, this test case renders correctly, but may have some issues with highlighting.
Created attachment 328423 [details] Test Case 2 This test case showcases the transform property applied to a form. As of WIP patch 2, the form renders correctly but does not correctly support click detection. Also, selecting the form text causes unusual results.
This looks great! One thing I noticed on testcase 2: the form controls aren't always repainting correctly when I navigate through them with the keyboard. Experiment A: - Load testcase 2 - hit tab 3 times, to focus the "Click when finished" button Result: * Radio button focus outline isn't entirely erased. * focus-outline is missing on upper-left & bottom-right corners of button Experiment B: - Load testcase 2 - Hit tab 2 times, to focus a radio button - Hit uparrow, to switch radio buttons Result: * Both radio buttons are only partly repainted with their new state.
(In reply to comment #10) > One thing I noticed on testcase 2: the form controls aren't always repainting > correctly when I navigate through them with the keyboard. One more related experiment: Experiment C: - Load testcase 2 - Hit tab 1 time, to focus the combobox - Press downarrow. Result: - Combobox doesn't update to show new selected choice ("Last digit is 1"), until I click the mouse somewhere inside the window (which apparently forces a repaint) Also, one nitpick: The description at the top of the page says "Current Test: Skew matrix: skewy(20deg) with form elements.", but the page source shows "rotate(-10deg);" is the actual transform you're using.
I looked at the patch briefly -- didn't get very far in -- but I noticed in the very first change, you're changing the IID of the wrong interface (you should change nsIDOMNSCSS2Properties, not nsIDOMCSS2Properties).
Created attachment 328703 [details] [diff] [review] WIP Patch #3 WIP Patch #3 fixes several bugs from WIP Patch #2. Fixes: 1. The patch now correctly changes the IID of the nsIDOMNSCSS2Properties interface. 2. The -moz-transform-origin property now works correctly. 3. The -moz-transform property now correctly works on top-level elements, not just elements contained in a block-level element. Known Issues: 1. When changing the -moz-transform property for an element using JavaScript, only the old overflow rectangle is invalidated. 2. When part of the overflow rectangle for the transform region is invalidated, the transformed area is not invalidated. 3. Clicking-and-dragging the transformed part of a transformed element doesn't trigger the "floating drag" feature. 4. Click detection for transformed elements doesn't work. 5. Changing the -moz-transform-origin property in JavaScript corrupts the -moz-transform property. 6. The computed style value for the -moz-transform property contains extra whitespace. 7. Nested elements using the -moz-transform property don't render correctly.
Created attachment 328911 [details] [diff] [review] WIP Patch #4 WIP Patch #4 Fixes: 1. Changing the transform property of an element now correctly invalidates the new overflow area. 2. Click detection now works for transformed objects. Known Issues: 1. The transformed area isn't invalidated correctly (e.g. scrolling or selecting items in the transform area doesn't issue a redraw correctly). 2. Cannot drag items except from their original bounding rectangles (for the drag-and-drop feature) 3. Changing the -moz-transform-origin property of a non-block level element corrupts the -moz-transform property. 4. The serialized version of the -moz-transform property has extra whitespace not present in the original 5. Webkit's proposed specs allow the translate family of properties to accept percentages, but the current implementation doesn't allow this.
Created attachment 328915 [details] [diff] [review] WIP Patch #4 Correction #1
Created attachment 328922 [details] [diff] [review] WIP Patch #4 Correction 2
(In reply to comment #14) > Known Issues: > 1. The transformed area isn't invalidated correctly (e.g. scrolling or > selecting > items in the transform area doesn't issue a redraw correctly). You probably want to look at nsFrame::InvalidateInternal and (for invalidation from style changes) at the ApplyRenderingChangeToTree function in nsCSSFrameConstructor.cpp. > 3. Changing the -moz-transform-origin property of a non-block level element > corrupts the -moz-transform property. I suspect this is due to bugs in nsRuleNode::ComputeDisplayData. The nsRuleNode::Compute*Data functions must not touch the style data if there's nothing specified, since they're sometimes computing a set of partial data on top of a copy of the computed data for what they're building on (aStartStruct). I'd also note that there are many places where the patch isn't following local whitespace / indentation conventions, which it should do. (This includes the use of tabs in files that don't use tabs.)
Comment on attachment 328922 [details] [diff] [review] WIP Patch #4 Correction 2 >diff -r bf20f88e6916 layout/reftests/bugs/433700-ref.html >--- a/layout/reftests/bugs/433700-ref.html Thu Jul 10 09:36:24 2008 -0400 >+++ b/layout/reftests/bugs/433700-ref.html Thu Jul 10 09:56:55 2008 -0700 >@@ -34,7 +34,6 @@ fieldset div { padding-left:60px; } > .legend { display:block; } >- Also -- looks like you accidentally deleted a newline from this unrelated reftest.
+/* The box is opaque iff the transform is perfectly rectangular. This happens only when we have either + * the identity transform or a complete rotation. For simplicity, though, we'll just assume neither + * is true, since it's unlikely that either case will occur. A pure scale seems like it might be common, and that can be opaque, but don't worry about it for now. +/* Helper function that, given a rectangle and a matrix, returns the smallest rectangle containing the + * image of the source rectangle. Can't you use gfxMatrix::TransformBounds? const nsIFrame *const aFrame I don't find the second 'const' here valuable, and we don't generally do this, so you should probably remove it. + nsIFrame *ReferenceFrame() + { + return const_cast<nsIFrame *>(static_cast<const nsDisplayListBuilder *>(this)->ReferenceFrame()); + } I don't understand why this is necessary, can't you just return mReferenceFrame? The reason for TryMerge is not so much to increase performance as to ensure correctness in certain situations. For example if you have a transformed inline element that breaks across line boundaries and it has opacity:0.5, we need to render the whole element as a single unit before applying opacity. If each frame is wrapped in an nsDisplayTransform wrapper we won't be able to do that. You should revert your changes to nsBlockFrame.cpp The spec says it applies to inline and block elements but your comments make it sound like it only applies to block elements. TransformedItem should be IsTransformed or something like that. You'll need to have some code to make the transformed element an abs-pos and fixed-pos containing block. The fixed-pos containing block is going to be a little bit of work since currently it's hardwired to the viewport. I would use nsTransformFunction instead of nsXfrmFunction. The days of 'creat' are over. I have some patches in the bling branch that clean up invalidation so everything goes through InvalidateInternal, which you need to hook. I'll pick those patches out and submit them, you'll need them.
Created attachment 329091 [details] [diff] [review] WIP Patch #5 WIP Patch #5 Fixes from last patch: 1. The coordinate system for rotations is now corrected to match WebKit's implementation. 2. Changing the -moz-transform-origin property no longer corrupts the -moz-transform property. 3. The transformed area is no longer clipped based on the untransformed position of the element. 4. Transformed frames now correctly invalidate themselves. 5. Click-and-drag to highlight text now correctly works for transformed elements. 6. Frames split by multiple continuations now have the transform applied to all continuation frames as a batch, not to each frame individually. 7. Various style fixes from the previous patch. Known Issues: 1. Drop-down menus in form controls still drop down relative to the original position. 2. Drop-down menus in form controls don't transform their drop-down list. 3. Rotated scrollbars don't respond to mouse events correctly (e.g. always assume left-right and up-down axes.) 4. Transformed scrollbars cause graphical glitches (waiting for roc's patches to land) 5. Nested transforms result in a larger-than-necessary overflow rectangle. 6. Absolutely-positioned elements aren't factored into the transform overflow rectangle (possibly not a bug... looking into WebKit's implementation) 7. WebKit supports a "skew" transform that isn't handled by the -moz-transform property. 8. Elements with continuations sometimes don't display when the page is loaded. 9. Can't drag a transformed object for the "drag-and-drop" feature except when dragging from the object's original rectangle. 10. The serialized version of the transform property has extra whitespace. 11. The translate family of transforms don't accept percentages, though the proposed spec allows this.
Created attachment 329117 [details] Test case for continuations This is a test case that showcases a known issue with continuation frames. Currently, the -moz-transform property is designed to consider all continuation frames as the same frame for the purposes of the transform origin. For example, if there were a multi-line 'span' element with a transform applied, even if the span spanned multiple lines, all of the text of the span would be transformed relative to the same point. The problem is that the -moz-transform-origin property can accept percentage values that indicate the fraction of the frame rectangle to move over when placing the origin. When the page is initially being reflowed, text elements that will have multiple continuations have each continuation compute its overflow rectangle, accounting for the transform. The problem is that as more and more continuations are added, the newer continuations will use different bounding rectangles than the original elements, since the union of all of the continuations' bounding rectangles changes as more continuations are added. However, the older continuations don't have their overflow rectangles adjusted. The net result is that earlier continuations sometimes don't display during initial page load, though in later reflows they tend to be correct. This test case has a transformed span and a script that appends text children. If you run this test case with the current WIP patch and then click "click to add text" several times, you can see that although the transform changes, the old overflow rectangles do not and there will be some extra text artifacts. roc: Do you have any suggestions about how to solve this problem? Is there an elegant and efficient way to have old continuations recompute their overflow rects as new elements are applied?
Actually now that you mention it, I ran into this in my SVG effects work. There isn't really a good solution except to make drastic changes to the way overflow areas work, which we don't want to do for 1.9.1. In my case, the problem is hardly noticeable since the overflow area I compute for each continuation frame of an element with 'filter' tries to cover the whole filter result, so normally includes the overflow areas of all continuations we know about "so far"; in particular the last continuation's overflow area covers all the right stuff, so when things change we usually end up repainting enough. Your case is different however. I'll think about whether we can hack something in, but for now don't worry about it. You could perhaps disable support for percentage -moz-transform-origin on inlines, or just disable percentages altogether, to hide the problem if it bothers you.
Can this also happen with blocks in multi-column, or are there no cases where we'll skip reflow of the first column?
It could happen with blocks in multi-column, but that's a lot more of an edge case.
Created attachment 329495 [details] [diff] [review] WIP Patch #6 WIP Patch #6 Fixes from WIP Patch #5: 1. The 'skew' transform is now defined and works like WebKit's skew. 2. TryMerge is now defined for nsDisplayTransform. 3. Various style cleanups - _should_ have cleaned up tabs, hopefully this is fixed here. Also renamed functions based on roc's suggestions and eliminated stray #includes. Of the bugs mentioned in the previous patch, based on advice from roc and dbaron, the following bugs cannot easily be resolved in this patch and will be left as unfixed unless stated otherwise: 1. Nested transforms result in a larger-than-necessary overflow rectangle. This has to do with some general issues with the overflow system that will be addressed in a larger patch. 2. Elements with continuations have their overflow rects computed incorrectly. This has to do with how continuation frames compute overflow rectangles and cannot be resolved easily. 3. System-level widgets aren't affected by transforms. It may be possible to fix this, but from what I've gathered this may not be easily resolved. Known Issues: 1. Rotated scrollbars don't respond to mouse events correctly (e.g. assume vertical and horizontal motion necessary even when transformed). This may be a symptom of a larger issue in which events are always expressed relative to the default coordinate system and not the transformed system. 2. Transformed scrollbars cause graphical glitches (scrolling causes overdraw of scrollbars and doesn't fully invalidate the frames) 3. Elements with the -moz-transform property should be abs-pos and fixed-pos containing blocks. 4. Cannot use the click-and-drag feature on transformed elements except when clicking in the union of the transformed area and the original area. This may be related to (1). 5. The serialized version of the -moz-transform property has extra whitespace relative to the original. 6. The 'translate' family of transforms don't accept percentages, though the proposed spec allows this.
(In reply to comment #25) > 3. System-level widgets aren't affected by transforms. It may be possible to > fix this, but from what I've gathered this may not be easily resolved. What do you mean by "system-level widgets"? > 1. Rotated scrollbars don't respond to mouse events correctly (e.g. assume > vertical and horizontal motion necessary even when transformed). This may be a > symptom of a larger issue in which events are always expressed relative to the > default coordinate system and not the transformed system. Not sure what you mean here? Can you give a concrete example? > 2. Transformed scrollbars cause graphical glitches (scrolling causes overdraw > of scrollbars and doesn't fully invalidate the frames) The issue here is that we still try to do bitblit scrolling and even if we decided not to do that, we invalidate through the view system which is wrong. We shouldn't even try to do view-based scrolling in this case, we should just invalidate the frame. We can do this by adding a new view bit which says "I have a transform or other effect on me, use frame invalidation for any scroll operation on my descendants".
A few minor notes, while they're in my head: * nsTransformFunction.cpp should probably begin with the same short comment as nsTransformFunction.h, since the single-line comments that show up before any #includes are there because they show up as the description in * nsTransformFunction.h 's include guard is messed up following the rename
Created attachment 329770 [details] [diff] [review] WIP Patch #7 WIP Patch #7 More progress than before, but still a good ways to go. The main issues fixed here: 1. The event system now sends event coordinates to frames in the correct coordinate space, so transformed items will always receive events in the proper Cartesian plane. As a side-effect, click-and-drag detection is now operational. 2. Overflow rectangles are now computed correctly for unsized elements (elements without fixed sizes through CSS) 3. Transformed elements with scrollbars now display correctly. New known issues: 1. Drop-down menus for combo boxes respond to input in the transformed coordinate space, but draw their menus in untransformed coordinate space. 2. Autocomplete menus respond to input and appear in the untransformed coordinate space. 3. Absolute-positioned and fixed-positioned elements aren't factored into the overflow rectangle. 4. Click-and-dragging transformed elements draws the graphics incorrectly or in the wrong place. 5. Elements with overflow scrollbars sometimes don't redraw correctly when the main window is scrolled. 6. Rounding errors in matrix calculations lead to graphical aberrations. 7. Changing properties unrelated to -moz-transform forces a recalculation of the property and, consequently, a frame reconstruction.
Created attachment 330157 [details] [diff] [review] WIP Patch #8 WIP Patch #8 Numerous fixes from the earlier version of the patch, including: 1. The translate family of transforms now accepts percentage values in addition to absolute lengths. 2. Absolutely-positioned elements are now factored into the overflow rectangle for transformed elements. 3. Transformed elements are now abs-pos containing blocks. 4. Zooming no longer breaks transforms. 5. Transformed elements with scrollbars no longer have graphical glitches. New Known Issues: 1. Transformed elements aren't fixed-pos containing blocks. 2. Zooming during "Print Preview" causes graphical glitches with text in IFRAMEs. 3. IFRAMEs scaled up beyond a factor of 2 stop responding to user input. I've traced this to nsDisplayTransform::HitTest, but don't know exactly what's causing it. I think it has to do with a change of app unit/CSS pixel ratios for elements scaled beyond a certain point, but I'm not sure. 4. nsComptuedStyleValue for -moz-transform doesn't factor in the effects of percentage translations. I am still working on this one, so it should be resolved soon. 5. Some of the reftests with percentage translations versus absolute translations fail, even though they are ostensibly equal. There's probably a rounding error in here somewhere... I am interested in making system-level widgets subject to transformations the way that all other elements are, but I don't know how to do this. Any suggestions would be appreciated. Also, if you have any ideas why an IFRAME scaled by a factor of 2.00 will respond to user input while an IFRAME scaled by a factor of 2.01 will not, please let me know.
(In reply to comment #29) > 2. Zooming during "Print Preview" causes graphical glitches with text in > IFRAMEs. This sounds like bug 444448..
(In reply to comment #31) >. > Thanks for the tip, I'll rework it a bit. May I ask if there's any reason that we want to distance the view and frame modules??
(In reply to comment #32) > Thanks for the tip, I'll rework it a bit. May I ask if there's any reason that > we want to distance the view and frame modules? It reduces complexity. I think. I'm not 100% sure but we're going to get rid of views anyway and I don't want to change the architecture now. >? There is, but I strongly urge you to leave this alone for now and work on it as a follow-up issue.
Created attachment 331018 [details] [diff] [review] WIP Patch #9 WIP Patch #9 This patch is getting close to being finished. The following are known issues that I will try to have ready for the final version of the patch: 1. Transformed frames that can't handle abs-pos child lists (e.g. tables) that have abs-pos or fixed-pos children cause those children not to display. 2. Transformed elements with scrollbars that are scaled up by a factor greater than two stop responding to user input. 3. Some of the attached reftests should pass but don't because of small pixel differences. 4. Transformed elements with continuations don't render correctly on page load or have their overflow rectangles computed correctly. 5. The serialized version of the -moz-transform property has unnecessary whitespace. The following are known issues that will not be addressed in this patch, either because they represent known architecture issues or because they should be addressed in a supplemental patch: 1. Widgets aren't transformed. 2. Nesting transforms cause the overflow rectangle of the topmost transform to be too large... this is because the overflow rectangles accumulate errors since they are always in the canonical basis instead of the transformed basis. 3. Transformed elements with abs-pos children have those abs-pos children overdraw the continuations of the transformed element. This is related to Bug #444448. All comments are welcome and appreciated.?
Tables are block-level. There are a few options: a) make transform only apply to display:block and display:inline-block (nsBlockFrame) b) make transform only apply to display:block, display:inline-block and display:inline (nsBlockFrame and nsPositionedInlineFrame) c) add nsAbsoluteContainingBlock functionality to nsTable(Outer?)Frame and make transform apply to tables, blocks and inlines (which would make us correct per-spec AFAIK) d) refactor abs-pos logic so that any element can be an abs-pos container and then make transforms apply to anything e) make transforms apply to tables, blocks and inlines, but don't bother making tables abs-pos containers, just push an null abs-pos containing block so that abs-pos children are not positioned. a) would avoid most problems with continuations. b) is closest to what you already have and is about as close to the spec as Webkit is, from what you say, although from a different direction. d) is more work than we want to do for an initial landing of this feature. c) probably isn't much harder than e) and brings us very close to the spec so I think c) makes the most sense ... unless making tables abs-pos containers is harder than I think. The patch in bug 438987 should be a good template.
(In reply to comment #36) > c) add nsAbsoluteContainingBlock functionality to nsTable(Outer?)Frame FWIW, these bugs seem related: bug 320865 bug 391153 (on making tables absolute/fixed-position containers, in the specific case of the root element).
(In reply to comment #35) >? > We'll fix the spec. Basically we expect transforms to be applicable to inline-block, inline-table, inline-box, inline replaced elements, but not to inline flows like spans.
If you disagree with this conclusion, then we basically need to to decide how a transform works on objects that can span multiple lines. You basically have to decide whether each line box transforms independently, or whether you use the entire bounding box, etc. (for stuff like transform-origin). For now we just punted on it and didn't bother supporting it. Be warned that our implementation in WebKit is very much under construction as well. We don't really work right with iframes or frames for example, or with native controls like scrollbars. (AppKit itself doesn't work with transforms.) If you use NSViews, you're going to run into trouble on the Mac with this like we have.
We do use NSViews, but we have tricks. We already transform iframes and native controls pretty well using SVG <foreignobject>, even on Mac. (And we will get rid of NSViews later to make stuff like this easier.) It sounds like you want to disallow transform on elements that create multiple CSS boxes. But then how should transforms interact with vertical breaking, such as columns? I know you don't create multiple boxes there but we do, and conceptually the CSS spec does too.
We punted on columns. That will need to be worked out. I suspect we may end up needing a property similar to the -break properties in CSS3 Borders and Backgrounds that will say whether the transform applies to the entire bounding box or to individual boxes.
If we're going to do that for blocks, then why not for inlines as well?
Transforming blocks split across columns or inlines split across lines is pretty much the same problem. It seems to me like the ideal behavior would be having the origin of the transform be determined using the union of all the boxes involved (although that is a little difficult for us to implement). That's what I suggested to Keith a few weeks ago, anyway. We want to refactor stuff to allow anything to be an abs-pos container anyway; we have existing (old) bugs on that (e.g., a relatively positioned table or internal table element should be an abs pos container). But I'd like to keep that separate from this patch.
Created attachment 332449 [details] [diff] [review] Potential Patch #1 Potential Patch #1 This patch addresses all issues marked above that aren't marked nofix, plus a few others. I've attempted to clean up the code and make sure the comments match the code. Asking for review by dbaron.
Comment on attachment 332449 [details] [diff] [review] Potential Patch #1 roc should review this as well
+ /* We want the element to be an absolute containing block if it's positioned. + * Normally, we'd want it to also be an abs-pos containing block, but until + * all frame types are capable of supporting abs-pos lists, this results in + * buggy behavior. Consequently, we'll check both that the element is positioned + * and that it's not transformed. + if (display->IsPositioned() && !display->mTransformPresent) { aState.PushAbsoluteContainingBlock(newFrame, absoluteSaveState); I'm confused. What is this trying to say? What's the difference between an "abs-pos containing block" and an "absolute containing block"? Why are we making positioned+transformed elements *not* be an abs-pos containing block when it would have been an abs-pos containing block if it had no transform? + // Create the inline frame. If it has a transform, make a positioned frame. + // Otherwise, just make a regular frame. + newFrame = (aDisplay->mTransformPresent ? NS_NewPositionedInlineFrame(mPresShell, aStyleContext) : + NS_NewInlineFrame(mPresShell, aStyleContext)); Why not just reuse the existing positioned-inline-frame path? + NS_ASSERTION(aState->mItemBuffer.Length() == static_cast<PRUint32>(itemBufferStart), Slightly simpler to use a constructor-style cast. Is nsUnitConverter really necessary? The only data that needs to be encapsulated is a scale factor. I'd prefer to just pass that around and use helper functions if necessary. Seems like you could be using gfxRect::RoundOut here. + return floorf(static_cast<float>(aCoord * aVal)); +#else + return static_cast<PRInt32>(aCoord * aVal); Constructor casts + const nscoord dx = (NSCoordMultiplyGfxFloat(bounds.width, disp->mTransformX[0]) + + NSCoordMultiplyGfxFloat(bounds.height, disp->mTransformY[0])); + const nscoord dy = (NSCoordMultiplyGfxFloat(bounds.width, disp->mTransformX[1]) + + NSCoordMultiplyGfxFloat(bounds.height, disp->mTransformY[1])); Why go through NSCoordMultiplyGfxFloat which casts to PRInt32, losing precision? Seems like we can do everything with gfxFloat here (double), and not worry about losing precision. In fact we can drop NSCoordMultiplyGfxFloat and do multiplication. +/* Returns the smallest rectangle containing a frame and all of its continuations. + * For example, if there is a <span> element with several continuations split over + * several lines, this function will return the rectangle containing all of those + * continuations. This rectangle is relative to the origin of the frame's local + * coordinate space. + */ Should mention what really happens when UNIFIED_CONTINUATIONS is not defined. + result.UnionRect(result, nsRect(delta.x + localRect.x, delta.y + localRect.y, + localRect.width, localRect.height)); Use "localRect + delta" + static nsIFrame* GetCrossDocParentFrame(nsIFrame* aFrame) + { + return const_cast<nsIFrame *>(GetCrossDocParentFrame(static_cast<const nsIFrame *const>(aFrame))); + } I'm not sure if it makes sense to try to make a meaningful distinction between "const nsIFrame*" and "nsIFrame*". One problem is that there are so many other pieces of data hanging off frames, making the frame itself const doesn't do anything to protect invariants. In any case, your second static_cast seems unnecessary. + /* First things first - if we're supposed to invalidate the frame on a scroll, + * just do it and skip the rest of the logic here. + */ + if(aScrolledView->IsInvalidateFrameOnScroll()) + { + /* First, invalidate ourselves. */ + GetViewManager()->GetViewObserver()->InvalidateFrameForView(aScrolledView); + + /* Next, adjust child widgets. */ + nsPoint offsetToWidget; + GetNearestWidget(&offsetToWidget); + AdjustChildWidgets(aScrolledView, offsetToWidget, aP2A, PR_TRUE); Can you use the existing path that does this? I've just done a very quick skim over most of it. I'll do some more shortly. Overall it seems very good.
So I got a bit done on the plane as well -- just a quick skim, really -- but I might not get back to it for a few days, so I'll post what I have so far, although I really don't have a whole lot of substantive comments yet. I'll start by skimming some parts of the style system changes, so this is a bit out-of-order: In nsCSSPropList.h, the new properties should sort (alphabetically) between top and unicode-bidi. > \ No newline at end of file You should have the newline at the end of file. (Unix convention uses LF as a line terminator; Windows convention uses CR-LF as a line separator. A number of Unix-ish tools are unhappy when there's anything following the last newline, and version control systems complain.) > diff -r 6c4ee89ddeb8 layout/reftests/moz-transform/reftest.list~ Don't add your backup files. > + * The Initial Developer of the Original Code is > + * Keith Schwarz <kschwarz@mozilla.com> > + * > + * Contributor(s): > + * Keith Schwarz <kschwarz@mozilla.com> I think the Initial Developer should be Mozilla Corporation. Then you can add "(original author)" after your name in the contributors list if you want. > +namespace > +{ What's the point of the unnamed namespace block here? > +#ifndef nsTransformFunction_h___ Don't use "__" in your include guards; all identifiers with "__" in them are reserved for the implementation. > + enum nsMatrixIndex{X_SCALE, X_SKEW, Y_SKEW, Y_SCALE, X_TRANSLATE, Y_TRANSLATE, NUM_ENTRIES}; Wrap this line at less than 80 characters, and use a space before the "{". Likewise for wrapping the declaration of SetToValue just below. > + nsStyleCoord &GetCoord(const PRInt32 index) Put the & before the space rather than after (or space on both sides). Throughout, you should have a space between "if", "for", or "switch" and the "(" following. (But there should be no space for function calls; leave those as you have them.) There are also a bunch of places where you indent the "{" two spaces and then its contents an additional two spaces. Only the contents of the braces should be indented; the braces themselves should be at the same indent as what comes before. Furthermore, in most parts of the code (including, I think, all the ones that you touch), the opening brace should only be on its own line if it's the opening brace for a class or function definition. For example, the switch in nsTransformFunction.cpp, a bunch of things in nsStyleStruct.cpp, nsRuleNode.cpp, etc. That is, do: if (foo) { bar(); } rather than: if (foo) { bar(); } But it's ok to have extra indent inside switches if you want, i.e.: switch (condition) { case 1: bar(); break; case 2: baz(); } (depending on local style in the file) +PRBool CSSParserImpl::ParseFunctionInternals(nsTArray<nsCSSValue> &aOutput, + PRInt32 aAllowedTypes, nsresult &aErrorCode) We tend to put out parameters after inputs. So aAllowedTypes should be first. It should probably also be called aVariantMask for consistency with ParseVariant.
The loops in TotalUntransformPoint and TotalTransformRect seem like a problem. They don't take into account SVG foreignobjects, but adding foreignobject checks there seems like a modularity violation. It seems like we need something more generic than GetOffsetTo that handles foreignobjects and transforms and future stuff. Indeed, in general everywhere we have GetOffsetTo, we should be thinking about what happens if there's a transform in the way! We might want to even warn if GetOffsetTo finds a transform in the way... I'm not sure what that API should look like. Perhaps // Compute a matrix which transforms from this frame's coordinate system to aFrame's coordinate system gfxMatrix GetMatrixTo(nsIFrame* aFrame); // Compute a matrix which transforms from this frame's coordinate system to the root frame's coordinate system virtual gfxMatrix GetMatrixToRoot(nsIFrame* aFrame); ? Then GetMatrixTo can make two calls to GetMatrixToRoot, or possibly use some fast path for the common case where there are no transforms around. Using GetMatrixToRoot instead of GetMatrixToParent to possibly speed things up and also to allow foreignobject to skip over the meaningless SVG frames between itself and its enclosing <svg>. Does that make any sense? I haven't thought about it deeply yet. I suspect things would be better if we made nsDisplayTransform be a leaf display item, whose paint method actually constructs display items for its frame subtree and whose HitTest method does something similar. That would avoid breaking the assumption that the items in a display list are all in the same coordinate system, so GetOffsetTo is safe to use for frames in the same display list. That would mean ClipListToRange doesn't work well for ranges that have one end inside transformed content and the other end outside, but I think we can live with that! + PRBool IsTransformed() const { return GetStyleDisplay() && + GetStyleDisplay()->mTransformPresent; } share the result of GetStyleDisplay(), it's not free!
Nitpick: Watch out for random newlines / edits to existing blank lines that you might have added in your patch. (Attachment 332449 [details] [diff] has blank-line-edits in nsPresShell.cpp, nsFrame.cpp, nsCSSParser.cpp, nsView.cpp, and nsViewManager.cpp)
Keith, it would probably be useful if you posted a new patch that addressed the comments so far; I suspect this will probably require a few iterations in the end.
I'm currently working on replacing calls to GetOffsetTo and the like with a more generic nsLayoutUtils function to change coordinate systems, as per roc's suggestion. I'll try to get some of those changes made and tested, and will hopefully have a revised patch posted by the end of the day.
Created attachment 335962 [details] [diff] [review] Potential Patch #2 Fixes in response to roc's and dbaron's comments about the previous patch. Some syntax cleanup to match the module stuff. The biggest changes are some changes to nsIFrame that unified code paths between SVG foreign objects and elements with the -moz-transform property which seem to make the code a whole lot prettier. Also, I've moved much of the content of nsDisplayTransform into nsLayoutUtils where (I believe) it's better suited.
David, if you like, I can take responsibility for reviewing everything except the style system changes. (Which I don't plan to even look at.) // See if it's relatively positioned "or transformed" rv = ConstructBlock(aState, aDisplay, aContent, aParentFrame, nsnull, aStyleContext, &newFrame, - aFrameItems, PR_FALSE); + aFrameItems, aDisplay->mTransformPresent); Can we actually get here if there's a transform present? Seems to me we'd have taken the "relative positioned" path. rv = ConstructInline(aState, aDisplay, aContent, - aParentFrame, aStyleContext, PR_FALSE, newFrame); + aParentFrame, aStyleContext, aDisplay->mTransformPresent, newFrame); Ditto. + const nsRect localRect = currFrame->GetRect(); + const nsPoint delta = currFrame->GetOffsetTo(aFrame) - currFrame->GetPosition(); + + result.UnionRect(result, localRect + delta); This can simplify to result.UnionRect(result, nsRect(currFrame->GetOffsetTo(aFrame), currFrame->GetSize()); + const gfxPoint newOrigin(nsLayoutUtils::AppUnitsToGfxUnits(aOrigin.x, aFactor) + toMozOrigin.x, + nsLayoutUtils::AppUnitsToGfxUnits(aOrigin.y, aFactor) + toMozOrigin.y); newOrigin = ... + toMozOrigin; ? + inline const nsIFrame* GetUnderlyingFrame() const Honestly, I think adding const-friendly code here is not worth it. See what dbaron thinks. + static nsRect TransformRect(const nsRect &untransformedBounds, aUntransformedBounds (occurs several times) + const nsRect *const boundsOverride = nsnull); aBoundsOverride + * TransformPoint takes in as parameters a point (in app space) and returns the transformed What is "app space"? Do TransformPoint and TransformRect really need an aOrigin parameter? It seems the caller could just as easily subtract aOrigin from the input point/rect. + /* Although most nsDisplay* constructors that take in an nsDisplayList + * modify that list somehow, the nsDisplayTransform constructor DOES + * NOT. Instead, the caller relinquishes control of the list to the + * nsDisplayTransform, which from that point forward is responsible + * for it. This is a bit odd, but it eliminates problems where inside + * the constructor the nsDisplayTransform finds that it doesn't have + * enough memory to allocate a new list for itself. Why can't nsDisplayTransform just contain an nsDisplayList as a direct member? + /** + * Given a frame, computes the net bounding rectangle for that frame. The net bounding + * rectangle is the rectangle, where (0, 0) is the frame's origin, containing the frame and + * all of its continuations. + */ + static nsRect GetNetFrameBounds(const nsIFrame *const aFrame); You should mention the effect of UNIFIED_CONTINUATIONS. Also, I'm not super happy about the name GetNetFrameBounds, but I can't think of a better one right now. +nsLayoutUtils::RoundGfxRectToAppRect(const gfxRect &aRect, const PRInt32 aFactor) Might as well make aFactor a double, since it will have to be converted anyway. Same for all these helpers. + /* Now just typecast everything! */ + return nsRect(nscoord(scaledRect.pos.x), nscoord(scaledRect.pos.y), + nscoord(scaledRect.size.width), nscoord(scaledRect.size.height)); Might want some assertions here to check that everything's in range. + /* Since we apply transforms from the innermost transform to the outermost transform, + * we need to invert the transforms from the outermost transform to the innermost transform. + * We'll do this by ascending upward and finding all of the transform frames, storing them in + * a stack, and inverting in reverse order. + */ Why not build up the global transform matrix, then invert it at the end and apply to the point? In fact, why not have one function that calculates the global transform matrix, and then have this function invert it and apply to the point? + for (PRInt32 index = static_cast<PRInt32>(frameStack.Length() - 1); index >= 0; --index) { Constructor cast + static PRBool HasMozTransform(const nsIFrame *const aFrame) + { + const nsStyleDisplay *const disp = aFrame->GetStyleDisplay(); + return disp && disp->mTransformPresent; + } You don't need to null-check disp. I'm not sure if this is worth having, "nsLayoutUtils::HasMozTransform(frame)" isn't much shorter than "aFrame->GetStyleDisplay()->HasTransform()". (Especially if GetStyleDisplay is already available in a local variable in callers.) And it shouldn't be called MozTransform in any case; there's no need to use vendor prefixes in the names in our own code. + static inline nscoord GfxUnitsToAppUnits(const gfxFloat aPos, const PRInt32 aFactor) + { + return nscoord(NSToIntRound(float(aPos)) * aFactor); + } What's wrong with NSFloatPixelsToAppUnits? + /* Converts a number of app units into graphics pixels. */ + static inline gfxFloat AppUnitsToGfxUnits(const nscoord aPos, const PRInt32 aFactor) + { + return gfxFloat(aPos) / aFactor; + } Why not use NSAppUnitsToFloatPixels? +PRBool +nsIFrame::IsTransformed() const +{ + return nsLayoutUtils::HasMozTransform(this); +} Might as well just return GetStyleDisplay()->IsTransformed() here. + if (nsLayoutUtils::HasMozTransform(this)) { I'm not sure how this will look in the next iteration, but we should probably use a local variable to cache isTransformed. Your changes to Invalidate and FinishAndStoreOverflow are going to conflict with similar changes in my patch in bug 450340. I'm not sure who's going to land first, but someone will have to clean up a little bit. I think it shouldn't be a problem, just giving you a heads-up. +PRBool +nsIFrame::NeedsView() +{ + return nsLayoutUtils::HasMozTransform(this); You could probably move this up to where we call NeedsView (IIRC there's only one call site), we probably already have the display style struct there. Up to nsGfxScrollFrame.cpp
+ /** + * Returns a transformation matrix that converts points in a frame's canonical + * coordinate system (e.g. the coordinate system inhabited by its frame rect) + * into points in its actual coordinate system. The matrix is expressed such + * that points transformed by the matrix should be expressed in device pixels. This still reads a bit ambiguous to me. I don't think "canonical" is the right word. You might want to explain that this can only be called for frames that return true for IsTransformed(), and maybe give an example of how this should be used. GetTransformMatrix is a weird API because it gives us a matrix whose meaning is unclear. I think it should probably return not just a matrix but also the ancestor frame that the matrix transforms coordinates to. For foreignObject that ancestor is the SVG subtree root (the nsSVGOuterSVGFrame), for CSS transforms it should be the parent frame (with the matrix adjusted to suit). Does that sound reasonable? +++ b/layout/reftests/moz-transform/abspos-1-ref.html The test directory should be called "transform". + /* Regrettably, we have to do a const_cast here to strip the constness off of ourselves. + * Even though GetTMIncludingOffset is semantically const, because it goes through + * nsCOMPtr and getter_AddRefs, we cannot have the method marked const. This is unsightly + * and hopefully, when we const-correct everything, this will go away. I kinda feel this isn't worth the effort and we should make GetTransformMatrix non-const. + const nsPoint delta = child ? -GetOffsetTo(child) : nsPoint(0, 0); I think the child's always at 0,0 so we don't need this. + /** + * Foreign objects can return a transform matrix, obtained by + * converting the stored SVGMatrix into a gfxMatrix. Not really "stored", so I'd just remove the second clause. IsInvalidateFrameOnScroll isn't a great name. I'd call it NeedsInvalidateOnScroll. Also, I don't know why we need to check it from CanScrollWithBitBlit as well as nsScrollPortView::Scroll, why is that? That's all I have! Amazing work!
Comment on attachment 335962 [details] [diff] [review] Potential Patch #2 I think the parsing code for -moz-transform-origin should just reuse the parsing code for background-origin. (background-origin is a value pair, which I think -moz-transform-origin should be as well. And the spec currently has some slight differences in the grammar, but I think they ought to be the same. We should probably discuss this with the WebKit folks.) (Does WebKit not implement the Z component either?) ===== >+ * Returns whether the frame is transformed from the -moz-transform property. >+ */ >+ static PRBool HasMozTransform(const nsIFrame *const aFrame) >+ { >+ const nsStyleDisplay *const disp = aFrame->GetStyleDisplay(); >+ return disp && disp->mTransformPresent; >+ } GetStyleDisplay is guaranteed never to return null, so you can just return GetStyleDisplay()->mTransformParent (despite comment 48). And it seems like this could stay a method on nsIFrame rather than in nsLayoutUtils (not sure whether it should be inline -- probably not). ===== nsCSSFrameConstructor.cpp: >+ nsAbsoluteItems& GetFixedItems() >+ { >+ return const_cast<nsAbsoluteItems &>(static_cast<const nsFrameConstructorState *>(this)->GetFixedItems()); >+ } >+ const nsAbsoluteItems& GetFixedItems() const >+ { >+ return mFixedPosIsAbsPos ? mAbsoluteItems : mFixedItems; >+ } >+ I think it would be cleaner just to repeat the contents of the method twice. If you really don't want to do that, I think your static_cast can be a const_cast. >+ /* Positioned elements should act as abs-pos containing blocks. >+ * Normally, we treat transformed elements as though they're >+ * abs-pos containers, but because most frame classes can't >+ * support abs-pos lists, we'll ignore that for now because >+ * the other functions (e.g. ConstructBlock / ConstructInline) >+ * will handle abs-pos containment correctly. >+ */ >+ if (display->IsPositionedIgnoringTransform()) { This seems like an odd asymmetry. We do enter this case for positioned elements that don't support being an absolute containing block, but we don't for transformed ones? Why the difference? (I'm not saying your way is necessarily wrong, but the difference seems wrong.) That said, I suspect currently nothing constructed in ConstructHTMLFrame supports being an abs pos container. So maybe it's just the comment that's wrong. nsDisplayList.cpp: >-'m assuming you were just fixing a compiler warning.) >+// Write #define UNIFIED_CONTINUATIONS here to have the transform property try to transform >+// content with continuations as one unified block instead of several smaller ones. >+// This is currently disabled because it doesn't work correctly, since when the frames >+// are initially being reflowed, their continuations all compute their bounding rects >+// independently of each other and consequently get the wrong value. >+// Write #define DEBUG_HIT here to have the nsDisplayTransform class dump out a bunch of >+// information about hit detection. Traditionally we actually have the #define in question either: * written, commented out, or * written as an #undef (and generally with two spaces after the undef so that it can be replaced by define plus one space) >+#ifdef DEBUG >+/* Helper function to print out a rectangle and some extra info about it. */ >+static void PrintRect(const nsIFrame *const aFrame, const char *const aMsg, const nsRect &newRect) >+{ >+ printf("Frame %p: '%s': (%f, %f), (%f, %f)\n", >+ dynamic_cast<const void *>(aFrame), aMsg, >+ nsPresContext::AppUnitsToGfxCSSPixels(newRect.x), >+ nsPresContext::AppUnitsToGfxCSSPixels(newRect.y), >+ nsPresContext::AppUnitsToGfxCSSPixels(newRect.x + newRect.width), >+ nsPresContext::AppUnitsToGfxCSSPixels(newRect.y + newRect.height)); >+} >+#endif Not sure if this is still needed. You have two versions of it in two different places, but no code that uses them. If you want to leave it in, it should probably just be DEBUG_kschwarz (or even DEBUG_kschwarz_off). >+static gfxMatrix GetTransformMatrix(const nsIFrame *const aFrame, >+ const PRInt32 aScaleFactor, >+ const nsRect *const aBoundsOverride) You should definitely document what aScaleFactor is, and probably what aBoundsOverride is. (It's not obvious to me why aScaleFactor is used in some places and not others.) >+#ifdef UNIFIED_CONTINUATIONS It's probably clearer to #ifdef the whole function and just return nsRect(nsPoint(0, 0), aFrame->GetSize()) in the not-UNIFIED_CONTINUATIONS case. In GetDeltaToMozTransformOrigin (and GetResultingTransformMatrix), I wonder whether you'd be better off combining the transform and the transform origin here, since transform origin can be just specified as a pre- and post- transform. Can you reuse other code for multiplying matrices and for handling the percentage transforms? >+ * 1. Check for degenerate cases. I don't see any such checks, and I'm not sure what they would be. It also might be a clearer commenting style to scatter the numbered list through the code that it's describing. This makes it a little less likely that the comments will get out of date (as comments often do). >+/* The transform is opaque iff the skewX and skewY components of the matrix are >+ * both zero and the wrapped list is opaque. Referring to skewX and skewY components confused me a little. The b and c components in | a c e | | b d f | | 0 0 1 | are used to represent both skew and rotation. Maybe it's clearer to say "if the transform represents only scaling and translation". Also, given the way that the overflow rect contains both the transformed and untransformed overflow rects, aren't IsUniform and IsOpaque incorrect for scale transformations that scale smaller? (Or is that not the latest solution for the overflow rect?) In nsDisplayTransform::UntransformRect: >+ if (matrix.IsSingular()) >+ return nsRect(0, 0, 0, 0); maybe you just want "return nsRect()", to distinguish that you really want empty, not 0,0,0,0, which is what we currently use to represent empty? (But you don't want the same for nsPoint, where the default constructor produces unitialized rather than empty. I wonder whether UntransformPoint needs to produce some sort of error result.) Then again, I don't see any callers of nsDisplayTransform::TransformPoint and UntransformPoint. Maybe they should just be removed instead? (If not, why do we need them?) nsDisplayList.h: >+ /* In case someone wants to refcount us, let them support it. */ This comment doesn't make any sense to me. nsLayoutUtils.cpp: For PrintMatrix, see comment about PrintRect above. >- // If it is, or is a descendant of, an SVG foreignobject frame, >- // then we need to do extra work >+ /* If it is, or is a descendant of, an SVG foreignobject frame, >+ * then we need to do extra work. Also, as we're going up the chain >+ * to the root frame, if we find anything that has the -moz-transform >+ * property, we should take note of this for later. >+ */ >+ /* Crawl up the frame tree to find the root frame. If at any point we encounter >+ * a transformed element, we need to mark that, since we'll end up inverting the >+ * transform at the end. >+ */ I'm not sure you need to be this verbose. Perhaps just: // We need to do extra work if the frame or one of its ancestors is // an SVG foreignobject frame or has a transform. >+ /* These are translation matrices from world-to-origin of relative frame and >+ * vice-versa. Although I could use the gfxMatrix::Translate function to accomplish >+ * this, I suspect that this is faster since it doesn't involve any matrix multiplication >+ * behind the scenes. >+ */ This comment seems false, given that you do two matrix multiplications a few lines below. Is "basis" a technical term here, or should ChangeMatrixBasis be called something more like "CombineOriginIntoTransform"? (And I think I asked for the creation of a function much like this one in my comments above.) Somebody should look at the new functions in nsLayoutUtils more closely, but I'm hoping roc will. nsPresShell.cpp: You should not have a blank line between the declarations of WillPaint and InvalidateFrameForView -- this makes it clear that InvalidateFrameForView is an nsIViewObserver method. InvalidateFrameForView could use nsLayoutUtils::GetFrameFor (which does the cast). nsFrame.cpp: PrintRect again (see above). >+ if(!newList) >+ return NS_ERROR_OUT_OF_MEMORY; Space between "if" and "(". >+ /* Make the transform, fail if we can't. */ >+ nsDisplayTransform *transform = new (aBuilder) nsDisplayTransform(this, newList); >+ if (!transform) >+ return NS_ERROR_OUT_OF_MEMORY; In theory, you should delete newList before this early return. >+ /* The image is composited if it's partially transparent or if there's a >+ * transform applied to it (since the rotation might have us drawing over >+ * whatever's below us. >+ */ I'm not sure I'd use the word "image" here. I haven't reviewed the tests. However, I'd name the directory "transform" rather than "moz-transform", since the former makes more sense in the long run and is sensible now. layout/style/Makefile.in: >+ nsTransformFunction.h\ Stick a space before the backslash like other lines. >+ -I$(srcdir)/../../gfx/thebes/public \ Why is this needed? (I'd note that the transform function names don't need to be nsCSSKeywords, but what you did looks OK and I suppose doesn't hurt anything.) nsCSSParser.cpp: In ParseProperty, could you put your new cases after the text_shadow case, and move the box_shadow case up before clip? In ParseSingleValueProperty, could you put your new cases after text_shadow (and not inside the MOZ_SVG ifdef)? >+PRBool CSSParserImpl::ParseFunctionInternals(PRInt32 aVariantMask, nsTArray<nsCSSValue> &aOutput, nsresult &aErrorCode) Could you wrap at under 80 characters? >+{ >+ /* Keep looping until we get something interesting. */ This comment doesn't add anything useful; remove it. >+ while(PR_TRUE) { Space between while and "(". >+ /* Try to read a comma. If we can, then all's well. Otherwise, we need >+ * to see if we just read a closing parenthesis. >+ */ >+ if (!ExpectSymbol(aErrorCode, ',', PR_TRUE)) { >+ /* Push the token back, then try reading a ')'. If we can, >+ * then we're done reading. >+ */ >+ if (ExpectSymbol(aErrorCode, ')', PR_TRUE)) >+ return PR_TRUE; >+ >+ /* Otherwise, very bad things happened. */ >+ return PR_FALSE; >+ } I don't think the comments here add much, except perhaps for having "// parse error" at the end of the "return PR_FALSE" line. The "Push the token back" comment is wrong (since ExpectSymbol already did that for you). >+ * On error, the return value is nsnull. You should say this includes parse errors (for which aErrorCode is untouched) and allocation failures (aErrorCode set to NS_ERROR_OUT_OF_MEMORY). However, I think it would be better to have an nsCSSValue& out parameter (rather than using heap allocation) and make the return value PRBool, like much of the rest of the parser. (Especially given that all the caller does with it is assign another CSS value the value *newCell.) (Alternatively, it could take an nsRefPtr<nsCSSValue::Array>&.) >+ * @param aAllowedTypes The types of CSS values that are allowed to be in this function. >+ * Reading an element that's not of the allowed type will result in the function >+ * returning nsnull. >+ * @param aMinElems Minimum number of elements to read. Reading fewer than this many >+ * elements will result in the function returning nsnull. >+ * @param aMaxElems Maximum number of elements to read. Reading more than this many Wrap this at less than 80 characters. >+ const arrlen_t MAX_ALLOWED_ELEMS = 0xFFFE; // 2^16 - 2, so that if we have 2^16 - 2 transforms >+ // plus the name, we have exactly 2^16 - 1 elements. Wrap this at less than 80 characters. (It doesn't need to be to the right of the code.) >+ const nsString functionName = aFunction; It might be better to use the constructor rather than operator=. >+ /* Now, convert this nsTArray into an nsCSSValue::Array object. We'll need N + 1 spots, >+ * one for the function name and the rest for the arguments. In case the user has given >+ * us more than 2^16 - 2 transform functions, we'll truncate them at 2^16 - 2 functions. >+ */ This is about the number of arguments, not the number of transform functions. >+ const PRUint16 numTransforms = (foundValues.Length() <= MAX_ALLOWED_ELEMS ? >+ foundValues.Length() + 1 : MAX_ALLOWED_ELEMS); You should probably call |numTransforms| |numElements|, since ParseFunction isn't really transform-specific, and it's really one more than the number of arguments. Also, it's probably clearer to do: PRUint16 numElements = foundValues.Length() + 1; if (numElements > MAX_ALLOWED_ELEMENTS) numElements = MAX_ALLOWED_ELEMENTS; >+static PRBool IsInvalidMatrix(const nsCSSValue &aValue) It's probably generally easier to reason about code if you call this function IsValidMatrix and reverse the return values and the handling in the callers. It also seems like it would be clearer to just write for (PRUint16 index = 0; index < 4; ++index) { // check for number } for (PRUint16 index = 4; index < 6; ++index) { // check for length/percent } I think parsing all 6 values of the matrix function with the same variant mask is a mistake. I think it's better to make the data structure obey stricter invariants -- and thus I think you should either extend ParseFunction to take an array of variant masks (of length aMaxElems) or not use ParseFunction to parse matrix values. Then you don't have to worry about the zero-lengths issue here (and probably in other places). >+ /* Here's how this is going to work: >+ * 1. Read a token in from the stream. This SHOULD be a transform function. >+ * 2. If this isn't a transform function, report an error. >+ * 3. Otherwise, based on the transform function, read in the correct >+ * data (including type and number). >+ * 4. If unable to do so, fail. >+ * 5. Chain the final data object on to the end of the list. >+ * 6. Report success! It worked! >+ */ I'd skip the introductory comment. >+ /* First, read in a token and see if we found a transform function. >+ * If we can't read in anything, then we've hit a serious problem. >+ */ s/serious problem/end of file/. But, actually, I'd just skip the comment. >+ /* Check to make sure that we've read a function. Identifiers work too >+ * since the only difference is the placement of the parenthesis. >+ */ >+ if (mToken.mType != eCSSToken_Function && mToken.mType != eCSSToken_Ident) { >+ UngetToken(); >+ return PR_FALSE; >+ } I don't think identifiers should work too; CSS mandates that there's no space between the name of the function and its opening parenthesis. Does WebKit do otherwise? If so, we should probably file a WebKit bug. >+ case eCSSKeyword_translatex: >+ newCell = ParseFunction(mToken.mIdent, VARIANT_LENGTH | VARIANT_PERCENT, >+ static_cast<PRUint16>(1), static_cast<PRUint16>(1), aErrorCode); >+ break; Throughout, I'd just use "1U" rather than static_cast<PRUint16>(1), and just rely on constant folding to reduce the unsigned int to an unsigned short. This will also fix the fact that the lines are wrapped at longer than 80 characters. >+ /* Finally, chain it to the end of the list. */ >+ if (!list) >+ list = toAppend.forget(); >+ else { >+ /* Traverse down the list until we hit the last cell. */ >+ // TODO: This is inefficient. Rewrite it. >+ nsCSSValueList *curr = list; >+ for(; curr->mNext != nsnull; curr = curr->mNext) >+ ; >+ curr->mNext = toAppend.forget(); >+ } You've got a literal tab on the second line, which you shouldn't. But you can simplify this to: nsCSSValueList **tail = &list; while (*tail) tail = &(*tail)->mNext; *tail = toAppend.forget(); and get rid of the if/else. However (to address your "TODO"), it's probably better to modify the caller and change the function parameter to being the list tail (probably nsCSSValueList** rather than nsCSSValueList*&), so that it's O(N) rather than O(N^2). Your comment at the start of HasMoreTransforms that it doesn't modify the stream isn't quite true, since it does eat up whitespace. However, I didn't review the rest of HasMoreTransforms since I think you should just remove the entire function, and replace its callsite with a call to !ExpectEndProperty(aErrorCode) -- and then remove the call to ExpectEndProperty a few lines below. (Though really you should probably refactor ExpectEndProperty so that all but the error reporting are in a sub-function called CheckEndProperty, and then make your caller, and the callers in ParseBorderColors and ParseContent and ParseMarks and ParseDasharray, the second caller in ParseCounterData, and the first caller in ParseQuotes , use CheckEndProperty instead.) You should remove the function HandleKeywordMozTransform and replace the call to it (and the operationSucceeded variable) with a call to ParseVariant, with the mask set to VARIANT_INHERIT | VARIANT_NONE. (That also gets rid of your next TAB literal.) Once you've removed operationSucceeded (which is currently written once and checked twice) and HasMoreTransforms, you should also be able to remove autoCleanupTransformList, since there will be only one early return, and you can just do a manual delete there. ParseMozTransformOrigin can also be simplified a lot by using ParseVariant. However, I think you can basically get rid of the whole thing by slightly refactoring the split between ParseBackgroundPosition and ParseBackgroundPositionValues (so that the nsCSSValuePair to store into is passed in) so that you can reuse ParseBackgroundPositionValues. In turn, -moz-transform-origin should be stored in nsCSSDisplay as a value pair rather than as a value list, with appropriate changes to the code in nsRuleNode.cpp, nsCSSPropList.h, and nsCSSStruct. nsComputedDOMStyle.cpp: You should be able to simplify GetMozTransformOrigin a lot by using SetValueToCoord. You'd need to write a (width, height) pair of percentage-base getters that use GetNetFrameBounds. >+ if(!valueList) Missing space. Also a bunch of 80th-column violations in this function. >+/* If we're dealing with a keyword transform, hands it back "as-is." Otherwise, computes >+ * the aggregate transform matrix and hands it back in a "matrix" wrapper. >+ */ Not sure what you mean by a "keyword transform", but you should really say that it returns either 'none' or a 'matrix()' function. Also, this comment goes past the 80th column, as do some others in this function. >+ resultString.Append(NS_LITERAL_STRING("px")); >+ >+ /* Tack on the closing ) character. */ >+ resultString.Append(')'); No need for these to be two separate appends. nsRuleNode.h: >+ // Expose this so nsTransformFunctions can use it. >+ static nscoord CalcLength(const nsCSSValue& aValue, >+ nsStyleContext* aStyleContext, >+ nsPresContext* aPresContext, >+ PRBool& aInherited); The last three lines should line up with the parameter on the line before. (jumping to review nsStyleStruct before nsRuleNode) nsStyleStruct.h: nsStyleCoord is a union type; every use in nsStyleStruct.h is labeled with the union types allowed for that variable. You should do this for mTransformOrigin, which I think should say "coord, percent". However, for mTransform, I don't think you should be using nsStyleCoord at all. The first four elements are always floats; the last two are always coords. Therefore, you should replace nsStyleCoord mTransform[6] with gfxFloat mTransformFactors[4] and nscoord mTransformCoords[2], or something like that. This will require updating all the places that use mTransform, but I don't think it should take too much time. However, if you want to do it in a followup patch, that's ok. However, probably even better would be to group all four of these arrays (mTransformFactors, mTransformCoords, mTransformX, mTransformY) in a single struct (perhaps called nsStyleTransformMatrix, although that's a bit long), that would also be used for the storage inside nsTransformFunction (and returned, const, from its getters). You should also have a comment showing a transformation matrix: | a c e | | b d f | | 0 0 1 | and saying how the a..f values are related to the different member variables (i.e., a..d from mTransformFactors, e..f from mTransformCoords, mTransformX, mTransformY, and the dimensions of the frame). nsStyleStruct.cpp: >+ if (mTransformPresent != aOther.mTransformPresent) >+ NS_UpdateHint(hint, nsChangeHint_ReconstructFrame); Looks like a TAB snuck in. nsRuleNode.cpp: > static nscoord CalcLength(const nsCSSValue& aValue, > nsStyleContext* aStyleContext, > nsPresContext* aPresContext, > PRBool& aInherited) This existing function should now also be marked as inline. In ReadTransforms, I think you should assign aList->mValue to a temporary (const nsCSSValue&) for the 4 places you use it. And I think it's probably better to use a separate variable for the list iteration rather than changing aList. >+ const PRInt32 NUM_ENTRIES = 6; >+ const PRInt32 FACTOR_THRESHOLD = 4; >+ const PRInt32 NUM_DELTA_ENTRIES = 2; >+ for (PRInt32 index = 0; index < NUM_ENTRIES; ++index) { >+ /* Clear the X and Y percent matrices. */ >+ if (index < NUM_DELTA_ENTRIES) >+ aDisplay->mTransformX[index] = aDisplay->mTransformY[index] = 0.0f; >+ >+ if (index < FACTOR_THRESHOLD) >+ aDisplay->mTransform[index].SetFactorValue((index == 0 || index == 3) ? 1.0f : 0.0f); >+ else >+ aDisplay->mTransform[index].SetCoordValue(static_cast<nscoord>(0)); >+ } This would probably be simpler as a loop up to 2, and then 4 assignments (1.0f, 0.0f, 0.0f, 1.0f, for the 4 factors). Given the above suggestion about a common transform matrix struct, I'm not sure nsTransformFunctions need to exist as objects with virtual function pointers; the transform matrix struct could have methods for SetToIdentity and then a setter method for each of the functions, and the switch in nsRuleNode could call those methods (which would match the current virtual method) instead of creating an object. But again, that could also be in a followup patch. You should remove your ValueToCoord function and use the existing SetCoord which does the same thing (and more). >+ /////////////////////////////////////////////// >+ // -moz-transform parsing >+ // This isn't parsing. You should instead use a comment like the ones before all the other properties. Same for -moz-transform-origin. >+ if(parentDisplay->mTransformPresent) { missing space before "(" I think it's clearer to set display->mTransformPresent to PR_TRUE at the callsite of ComputeTransformMatrix than inside it. You should change ReadTransforms to take the array as a parameter so you don't have to do array copying. nsStyleStruct.h, again: >+ /* We're positioned if we're absolutely positioned or there's a transform in effect. */ s/absolutely positioned/positioned/, since this returns true for relative as well. Then again, the comment then seems to be stating something that doesn't make much sense, so you probably also want s/We're positioned/Returns true/. nsTransformFunction.cpp: What's the use of the unnamed namespace block? Maybe just mark the functions and constants as static instead? Missing spaces before parentheses in the following lines (spread throughout the file): >+ if(cosTheta >= 0 && cosTheta < kEpsilon) >+ else if(cosTheta < 0 && cosTheta >= -kEpsilon) >+ switch(aValue.GetUnit()) >+ switch(aValue.GetUnit()) >+ for(nsMatrixIndex index = static_cast<nsMatrixIndex>(0); index < NUM_ENTRIES; >+ if(index < FACTOR_CUTOFF) >+ if(aData->Item(static_cast<PRUint16>(index + 1)).GetUnit() == eCSSUnit_Percent) >+ if(index == DELTA_X_POS) >+ if(aData->Item(1).GetUnit() != eCSSUnit_Percent) >+ if(aData->Item(1).GetUnit() != eCSSUnit_Percent) >+ if(dx.GetUnit() == eCSSUnit_Percent) >+ if(dy.GetUnit() == eCSSUnit_Percent) >+ switch(aValue.GetUnit()) >+ { But the opening brace on the same line as the switch, and the close brace lined up with the switch (CSSToRadians, SetToValue, and on ifs and elses in ProcessData, in which some bodies should only get one set of two-space indent rather than two). It seems like SetToValue ought to work only for X_TRANSLATE and Y_TRANSLATE and not have the number case, and the nsScale*Function callers should just call SetFactorValue directly (or just assign the correct value, if you've changed the data types). But once you do that, all the callers are also doing the Percent test and the SetCoordX / SetCoordY calls, so it seems like SetToValue *should* handle that part. (And if X_TRANSLATE is 0 and Y_TRANSLATE is 1, that can be easier.) Can you avoid having to add the known failures in the layout/style/test mochitests by adding a prerequisites line for -moz-transform-origin (probably setting display: block, width: 123px, height: 78px)? nsViewManager.cpp: >+ /* If we're supposed to invalidate the frame on a scroll, don't blt. */ s/blt/blit/ r=dbaron with those comments addressed
(In reply to comment #55) > >- think the cast is about as good, especially since it's in debug code. > Also, given the way that the overflow rect contains both the transformed > and untransformed overflow rects, aren't IsUniform and IsOpaque > incorrect for scale transformations that scale smaller? (Or is that not > the latest solution for the overflow rect?) I think you're right. > Somebody should look at the new functions in nsLayoutUtils more closely, > but I'm hoping roc will. I've looked at them, some comments above.
Created attachment 337685 [details] [diff] [review] Potential Patch #3 Revised to address review comments from roc and dbaron. Integrates better with the style system, pushed a good deal into nsLayoutUtils, unified code paths a bit with SVG foreignObjects.
Created attachment 337941 [details] [diff] [review] Potential Patch #4 Update to Potential Patch #3 that allows the patch to be cleanly applied to mozilla-central, following the recent patch to the CSS parser.
Created attachment 338177 [details] [diff] [review] Potential Patch #5 Yet another update. This addresses the addition of SVG effects. No other changes.
// See if it's relatively positioned - else if ((NS_STYLE_POSITION_RELATIVE == aDisplay->mPosition) && + else if ((NS_STYLE_POSITION_RELATIVE == aDisplay->mPosition || + aDisplay->HasTransform()) && Comment still needs to be fixed + (newOrigin + toMozOrigin, + disp->mTransform.GetThebesMatrix(bounds, aFactor)); Fits on one line +nsRect nsDisplayTransform::TransformRect(const nsRect &untransformedBounds, +nsRect nsDisplayTransform::UntransformRect(const nsRect &untransformedBounds, + static nsRect TransformRect(const nsRect &untransformedBounds, + static nsRect UntransformRect(const nsRect &untransformedBounds, aUntransformedBounds + const nsRect *const + aBoundsOverride = nsnull); One line >. + if (IsTransformed()) { + *aOutAncestor = nsLayoutUtils::GetCrossDocParentFrame(this); This line happens whether IsTransformed() is true or not, so hoist it out above the 'if'. + /*") Honestly, I think making local variables and parameters 'const' is a waste of time and since we generally don't do it, I'd like you to take them out. It's generally easy to see where a local variable or a parameter is modified in a function, so most of the time 'const' is just adding stuff that people have to read for no particular benefit. (Pointers and references to const are OK and make a lot more sense since they promise the caller that this function won't modify through the pointer/reference.) + const nsIFrame* ReferenceFrame() const { return mReferenceFrame; } I don't think we need to mess around with const nsDisplayListBuilders. + static const nsIFrame* GetCrossDocParentFrame(const nsIFrame *aFrame, + nsPoint* aCrossDocOffset = nsnull) Or this. I think I mentioned before that const nsIFrames are really not very useful, because you're usually interested in a whole frame tree and const doesn't have any way of extending transitively to the parent or children. +(). GetTransformMatrix is very nice, thanks! + if(f->IsTransformed()) Space between 'if' and '(' + gfxMatrix worldToOrigin(static_cast<gfxFloat>(1.0), + static_cast<gfxFloat>(0.0), + static_cast<gfxFloat>(0.0), + static_cast<gfxFloat>(1.0), + -aOrigin.x, -aOrigin.y); + gfxMatrix originToWorld(static_cast<gfxFloat>(1.0), + static_cast<gfxFloat>(0.0), + static_cast<gfxFloat>(0.0), + static_cast<gfxFloat>(1.0), + aOrigin.x, aOrigin.y); You can just write 1.0 etc here. I don't think the static_cast is doing anything useful. (And if it was, a constructor cast would look better.) Now I suppose I need to check how you addressed dbaron's comments...
The following comments mostly arose from me looking at David's comments and how you addressed them in the latest patch. Overall I think you did what he intended, although I can't be 100% sure in some cases. IsPositionedIgnoringTransform is actually unused and you should remove it. It would be good if you could write a test that combines CSS transforms with an SVG filter effect on the same element. I think it will work, but the combination is a little tricky. layout/reftests/svg-integration has tests that might help. + for (;;) { + /* Might be transformed; stop iterating. */ + if ((*aOutAncestor)->mState & NS_FRAME_MAY_BE_TRANSFORMED) + break; It might make sense to write this as while (!((*aOutAncestor)->mState & NS_FRAME_MAY_BE_TRANSFORMED) > Also, it's probably clearer to do: > > PRUint16 numElements = foundValues.Length() + 1; > if (numElements > MAX_ALLOWED_ELEMENTS) > numElements = MAX_ALLOWED_ELEMENTS; You still need to fix this. + nsCSSValueList *transformList = nsnull; I'd move this down to just before we declare 'tail', so it's clear this isn't used until then. How does nsCSSDisplay::mTransform not leak? + if (mTransformPresent != aOther.mTransformPresent) + NS_UpdateHint(hint, nsChangeHint_ReconstructFrame); + + /* Otherwise, if we've kept the property lying around and we already had a + * transform, we need to see whether or not we've changed the transform. + * If so, we need to do a reflow and a repaint. The reflow is to recompute + * the overflow rect (which probably changed if the transform changed) + * and to redraw within the bounds of that new overflow rect. + */ + else if(mTransformPresent) { + if (mTransform != aOther.mTransform) + NS_UpdateHint(hint, NS_CombineHint(nsChangeHint_ReflowFrame, + nsChangeHint_RepaintFrame)); + + for (PRUint8 index = 0; index < 2; ++index) + if (mTransformOrigin[index] != aOther.mTransformOrigin[index]) { + NS_UpdateHint(hint, NS_CombineHint(nsChangeHint_ReflowFrame, + nsChangeHint_RepaintFrame)); + break; + } + } Reformat this with braces around the first "if" clause so it's clear the else is associate with the if. > nsStyleCoord is a union type; every use in nsStyleStruct.h is labeled > with the union types allowed for that variable. You should do this > for mTransformOrigin, which I think should say "coord, percent". Seems like you should still do this. +nsStyleTransformMatrix::SetToTransformFunction(const nsCSSValue::Array *const + aData, + nsStyleContext*const aContext, + nsPresContext*const aPresContext) Tabs in here need to be replaced with spaces. Still a few occurrences of "if(" hanging around. + /* This section is for the full property. Uncomment it when you're ready. */ What does this mean? It is uncommented already.
Seems like the remaining issues here are almost entirely cosmetic. One more revision should have us pretty much done. I'd like David to have a chance to check how his comments were addressed, but assuming nothing major crops up there, we should be able to land this. If it doesn't land tomorrow, I can clean up whatever else is needed and land it next week. Thanks!!!!!!!!!!
>>. The main reason I pass everything around as a PRInt32 is so that I can use NSFloatPixelsToAppUnits, which has very nice rounding behavior, without running into problems with float <-> int rounding issues. Would you still recommend changing it? >+ /*. I'll try to get an updated patch posted ASAP.. Does this analysis seem correct? If so, should I leave it for bug 452496 to fix, or should I try to find a workaround?
(In reply to comment #63) > The main reason I pass everything around as a PRInt32 is so that I can use > NSFloatPixelsToAppUnits, which has very nice rounding behavior, without > running into problems with float <-> int rounding issues. Would you still > recommend changing it? Actually I think the best thing to do is to make NSFloatPixelsToAppUnits's aAppUnitsPerPixel parameter be a float. It gets coerced to a float internally anyway. > >+ /*? The assertion would only be inside the IsTransformed() case. Since root frames can't be transformed, it should never fire no matter what the caller does. Am I missing something? > >+? Yes, you're absolutely right. > >>? Yeah, you're right. Good call. > . OK.
(In reply to comment #64) >. OK. Just leave it for now, but file a bug on that specific issue. I'll work on that myself.
Created attachment 338395 [details] [diff] [review] Potential Patch #6 Updates to address more review comments. Another step in the asymptotic progression towards perfection!
+nsIFrame::IsTransformed() const +{ + return GetStyleDisplay()->HasTransform(); I actually want to check the state bit here. + static nsRect TransformRect(const nsRect &untransformedBounds, + static nsRect UntransformRect(const nsRect &untransformedBounds, These still need to be fixed to aUntransformedBounds. Other than that everything looks good! I'll make those changes myself and try landing the patch.
Pushed b827e694565d!
There were test failures on Linux: REFTEST TEST-UNEXPECTED-FAIL | | REFTEST TEST-UNEXPECTED-FAIL | | REFTEST TEST-UNEXPECTED-FAIL | | These look like subtle rasterization differences. and on Windows: REFTEST TEST-UNEXPECTED-FAIL | | REFTEST TEST-UNEXPECTED-FAIL | | REFTEST TEST-UNEXPECTED-FAIL | | These are off by one pixel for some reason. Since these are platform dependent, and minor, and not regressions, we just marked the tests random for now. Bug 455138 tracks those.
The translate*.html failures also happened on Mac (but not the percent-*.html failures).
I notice you left two unused variables: PRInt32 cssPxWidth = 0, cssPxHeight = 0; in nsComputedDOMStyle.cpp. I'm not sure if you intended to use them or if they can be taken out.
I've noticed a change in the behavior of attachment 328422 [details]. The change happened between the 2008-09-20 and 2008-09-21 nightly. My guess is that it's caused by bug 455403, but there was some other transform checkins that day so I'm not sure. My question is, is it an intended change or not.
The change in the behavior is due to bug 455403. The testcase was written when the bug had not yet been fixed, so it would have appeared to have the correct behavior. With the fix checked in, the test case no longer works as initially expected, but still operates as it should.
/tests/layout/base/tests/test_bug435293-interaction.html, /tests/layout/base/tests/test_bug435293-scale.html and /tests/layout/base/tests/test_bug435293-skew.html are all passing on all OSs: Firefox 9.0 (Beta 2): Firefox 10.0 (Aurora): Firefox 11.0 (Nightly):
|
https://bugzilla.mozilla.org/show_bug.cgi?id=435293
|
CC-MAIN-2017-26
|
refinedweb
| 12,317
| 56.15
|
48977/python-code-to-send-an-email-message-from-gmail-to-many-others
Hey @Varsha, you can try out the following code:
import smtplib
# list of email_id to send the mail
list = ["abc@gmail.com", "def@gmail.com"]
for i in range(len(list)):
s = smtplib.SMTP('smtp.gmail.com', 587)
s.starttls()
#sender emai id and email password
s.login("email-id", "id-password")
message = "Message-to-send"
s.sendmail("sender_email_id", list[i], message)
Hi @Deb, try out the following code:
import ...READ MORE
Hi @Vipul, try out this code. I've ...READ MORE
OR
Already have an account? Sign in.
|
https://www.edureka.co/community/48977/python-code-to-send-an-email-message-from-gmail-to-many-others
|
CC-MAIN-2020-16
|
refinedweb
| 102
| 62.75
|
Who Is Your Favorite Child?
When we put some content inside the open and close tags of one of our
components, we get access to it as the
children prop.
const Parent = ({children}) => { return ( <React.Fragment> <p>These are my favorites:</p> {children} </React.Fragment> ); } const App = () => ( <div> <Parent> Greg and Marsha </Parent> </div> );
What happens if we also provide an explicit
children prop to
Parent?
const App = () => ( <div> <Parent children={"Jan and Peter"}> Greg and Marsha </Parent> </div> );
Which will take precedence when we destructure
children in the parent
component?
In the example above, we’ll still see
Greg and Marsha rendered. The
content placed inside the tags will take precedence over the explicit
children prop.
See a live example here.Tweet
|
https://til.hashrocket.com/posts/sxyluohv9t-who-is-your-favorite-child
|
CC-MAIN-2018-13
|
refinedweb
| 123
| 66.84
|
Thanks for your effort. It was really a silly mistake of mine. But the code still gives wrong answer and i am still not able to find whats going wrong.
Thanks for your effort. It was really a silly mistake of mine. But the code still gives wrong answer and i am still not able to find whats going wrong.
Getting WA even if it works for all my test cases. Please provide some test cases where my code fails or point the incorrectness in code. Thanks for your time.
#include<bits/stdc++.h>using namespace std;#define ll long longint main(){ll t,n,ar[101],dp[101][101],i,j,k,m;cin>>t;/*dp[i][j] means ar[i] is last element of increasing sequence and ar[j] is last element in decreasing sequencedp[i][0] means ar[i] is last element in decreasing sequence and there are no elements in the decreasing sequencedp[0][j] means there are no elements in the increasing sequence and ar[j] is last element in decreasing sequence*/while(t--){cin>>n;
tried O(n^2) solution but getting WA in many test cases. Must be missing some important construct. Please help!
#include<bits/stdc++.h>using namespace std;#define ll long long/*//moving from bottom right to top left// answer will be at wa[1][1] as addition of its 4 fields- right right's ways,right bottom,bottom right and bottom bottomwa[r][c][2][2]---for each wa[i][j]0 10 right cell's right right cell's bottom1 bottom cell's right bottom cell's bottom*/int main(){ll r,c,d;
Isn't there any better solution than O(n^2) ?
I just needed increase size of array to 10^6 . Sorry for any troubles.
Been stuck at it since past 3 days...Please help!
Getting RE( SIGSEGV ) for last 9 test cases
#include<bits/stdc++.h>using namespace std;#define ll long longint main(){int n,k,ar[3001],lsum[3001],maxr,cost[2],temp,i;//lsum stores cost of going from cell#index to cell#1//maxr will store maximum profitcin>>n>>k;ar[0]=lsum[0]=-2000;lsum[1]=0;for(i=1;i<=n;i++)cin>>ar[i];for(i=2;i<=n;i++)lsum[i]=max(lsum[i-1]+ar[i-1],lsum[i-2]+ar[i-2]);cost[0]=-2000;cost[1]=0;maxr=lsum[k];for(i=k+1;i<=n;i++){temp=ar[i]+max(cost[0],cost[1]);if(temp+lsum[i]>maxr)maxr=temp+lsum[i];cost[0]=cost[1];cost[1]=temp;}cout<<maxr;return 0;}
i thought it to be 8C2 combinations of ai,aj pair and for each pair 8 values of k, i.e., from k=1 to 8
therefore o/p = 8C2 x 8
So is it that there will be just one pair and 8 values of k for it ?
|
https://www.commonlounge.com/profile/5d9d0bc4d658483695fe146844ba9fd9
|
CC-MAIN-2019-35
|
refinedweb
| 491
| 53.61
|
guide to use Active Qt to create ocx file for windowse.
Hi.I want to create OCX ( com object ). when I read this article : I found there are two type of active qt.
QAxContainer and QAxServer.
My purpose is to make OCX file .I have a TCP class make client to connect to server and read data and write data in LAN network.I want this OCX can attach to any application. (like Delphi or other program language).
This class functions are: connect to server , read data from server and send it to application by output and write data on network by getting data from input.
I wand to change this class to OCX file to attach to any applications. I do not know which one is select? (QAxContainer and QAxServer.)
- mrjj Qt Champions 2016
Hi
As far as I know, u want the QAxServer
I have not seen many samples but this one explains a bit
Using ActiveX on Windows
They make a small com object and show in browser.
Its almost the same as the
@mrjj thank you for answer. I read that book but I did not get anything.
I do like this site and make an example:
1- create widget application. (first made by console app but the Qt creator alert show it is not good in console app so I create widget app)
2- in .pro :
#------------------------------------------------- # # Project created by QtCreator 2016-08-20T12:52:04 # #------------------------------------------------- QT += core gui greaterThan(QT_MAJOR_VERSION, 4): QT += widgets TARGET = Test2ActiveXServer TEMPLATE = app SOURCES += main.cpp\ mainwindow.cpp \ test.cpp HEADERS += mainwindow.h \ test.h FORMS += mainwindow.ui #this is make QAXserver CONFIG += axserver RC_FILE = qaxserver.rc
in main.cpp:
#include "mainwindow.h" #include <QApplication> #include <QAxFactory> int main(int argc, char *argv[]) { QApplication a(argc, argv); if ( !QAxFactory::isServer() ) { MainWindow w; w.show(); } return a.exec(); }
then create a class and its name is test.
test.h:
#ifndef TEST_H #define TEST_H #include <QObject> #include <QAxBindable> #include <QWidget> #include <QAxFactory> class test : public QObject , public QAxBindable { Q_OBJECT Q_PROPERTY( int value READ value WRITE setValue ) QAXFACTORY_BEGIN("{3304a730-4230-4a82-ab61-334a2e93c803}", "{23667206-d900-4f1a-a14f-30397fa5adc7}") QAXCLASS(MyWidget) QAXCLASS(MyWidget2) QAXTYPE(MySubType) QAXFACTORY_END() public: explicit test(QObject *parent = 0); int value() const; signals: void valueChange ( int a); public slots: void setValue (int a); private: int *val; }; #endif // TEST_H
and test.cpp :
#include "test.h" test::test(QObject *parent) : QObject(parent) { } int test::value() const { return val; } void test::setValue(int a) { val=a; emit valueChange(val); }
after that I dont know what am I do?
when run qmake just the qmake file create. when I build the project it shows me this error:
:-1: error: dependent '..\Test2ActiveXServer\qaxserver.rc' does not exist.
in that doc it says : ." . I don't understand it.
My question is :
Is that project is Active x server?
How to test it?
what am I do know?
How does it work?
|
https://forum.qt.io/topic/70413/guide-to-use-active-qt-to-create-ocx-file-for-windowse
|
CC-MAIN-2017-51
|
refinedweb
| 483
| 68.57
|
React Plotly.js in plotly.js
How to use the Plotly.js React component.
Use react-plotly.js to embed D3 charts in your React-powered web application. This React component takes the chart type, data, and styling as Plotly JSON in its data and layout props, then draws the chart using Plotly.js. See below about how to get started with react-plotly.js.
$ npm install react-plotly.js plotly.js
The easiest way to use this component is to import and pass data to a plot component:
import React from 'react'; import Plot from 'react-plotly.js'; class App extends React.Component { render() { return ( <Plot data={[ { x: [1, 2, 3], y: [2, 6, 3], type: 'scatter', mode: 'lines+points', marker: {color: 'red'}, }, {type: 'bar', x: [1, 2, 3], y: [2, 5, 3]}, ]} layout={ {width: 320, height: 240, title: 'A Fancy Plot'} } /> ); } }
For information on more advanced usage patterns such as State Management or Customizing the plotly.js bundle please see the ReadMe for react-plotly.js.
More information about Props and Event Handlers can be found in the ReadMe for react-plotly.js.
Click here for more information about Plotly Chart Types and Attributes.
|
https://plot.ly/javascript/react/
|
CC-MAIN-2019-30
|
refinedweb
| 196
| 66.23
|
24956/tf-reshape-vs-tf-contrib-layers-flatten!
All 3 options reshape identically:
import tensorflow as tf
import numpy as np
p3 = tf.placeholder(tf.float32, [None, 1, 2, 4])
p3_shape = p3.get_shape().as_list()
p_a = tf.contrib.layers.flatten(p3) # <-----Option A
p_b = tf.reshape(p3, [-1, p3_shape[1] * p3_shape[2] * p3_shape[3]]) # <---- Option B
p_c = tf.reshape(p3, [tf.shape(p3)[0], -1]) # <---- Option C
print(p_a.get_shape())
print(p_b.get_shape())
print(p_c.get_shape())
with tf.Session() as sess:
i_p3 = np.arange(16, dtype=np.float32).reshape([2, 1, 2, 4])
print("a", sess.run(p_a, feed_dict={p3: i_p3}))
print("b", sess.run(p_b, feed_dict={p3: i_p3}))
print("c", sess.run(p_c, feed_dict={p3: i_p3}))
This code yields the same result 3 times. Your different results are caused by something else and not by the reshaping.
The choice of the optimizer has a ...READ MORE
Python append() method adds an element to ...READ MORE
Compiled languages are written in a code ...READ MORE
append: Appends object at the end.
x = ...READ MORE
suppose you have a string with a ...READ MORE
if you google it you can find. ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
can you give an example using a ...READ MORE
To answer your first question... .format just ...READ MORE
Using Python format() function is what the ...READ MORE
OR
|
https://www.edureka.co/community/24956/tf-reshape-vs-tf-contrib-layers-flatten
|
CC-MAIN-2019-39
|
refinedweb
| 229
| 72.73
|
Member
174 Points
Dec 28, 2012 07:34 PM|LINK
How can I read the contents of a text file using JavaScript?
Contributor
4812 Points
Dec 28, 2012 07:49 PM|LINK
fh = fopen('c:\\FileName.txt', 0); // Open the file for reading. if(fh!=-1) // Check if the file has been successfully opened. { length = flength(fh); // Get the length of the file. str = fread(fh, length); // Read in the entire file. fclose(fh); // Close the file. // Display the contents of the file. write(str); }
All-Star
29935 Points
Dec 28, 2012 07:51 PM|LINK
How many places are you going to ask this question? Spend some time and give details on your question. LIke where the file lives. Yikes.
Member
174 Points
Dec 29, 2012 04:55 AM|LINK
@jprochazka, I'm getting the following error message:
the value of the property 'fopen' is null or undefined, not a Function object.
What is the problem?
All-Star
42882 Points
MVP
Dec 29, 2012 06:18 AM|LINK
Hello,
With javascripts, it's not possible to do file IO operations. If it's possible to do with javascripts, there would be a major security breach for web applications.
You have to use a technology like silverlight.
For more about this, I suggest you to post your question on the silverlight forums if you decide to do it with silverlight.
My Tech blog | Twitter
Please 'Mark as Answer' if this post helps you.
All-Star
20119 Points
Dec 30, 2012 02:59 PM|LINK
Hi,
With JavaScript it's not possible, but HTML 5 has got few FileSystem API's which made reading of file in Client System possible!
NOTE: Works in HTML 5 compatible browsers!
Hope it helps u...
Member
442 Points
Dec 30, 2012 03:59 PM|LINK
use System.IO namespace
if u r using character != -1
if using string != null
Javascript should never ever be used
6 replies
Last post Dec 30, 2012 03:59 PM by Anil Srivastava
|
http://forums.asp.net/p/1869297/5253823.aspx/1?Re+Reading+the+contents+of+a+text+file+using+JavaScript
|
CC-MAIN-2013-20
|
refinedweb
| 333
| 84.68
|
Top 10+ Redux Interview Questions
Congratulations on being shortlisted for the interview! Now your focus should be on cracking your interview, we want to tell a bit about Redux and its scope. Redux works as an open-source JavaScript library that is used for managing the application state. It is written in JavaScript and it was created by Dan Abramov and Andrew Clark. Also, the scope of redux is increasing day by day as it makes the state changes in apps more predictable and hence results in the easy testing of the app. If you are looking for React Interview Questions then you can visit here.
Most Frequently Asked Redux Interview Questions
1. What are the core principles of Redux?
There are three core principles that Redux follows:
- Single source of truth: The global state of your app is put away in an object tree inside a single store.
- The state is read-only: State can only be changed by emitting an action, an object that explains what has happened.
- Changes are made with pure functions: This is to define how the state tree is being transformed by the actions, you have to write pure reducers.
2. Do you need to keep all component states in the Redux store?
You do not need to push everything in the redux store as you have to keep your application state as small as possible. You should only do it if it makes a difference to you to keep something there or maybe helping you in making your life easier while using Dev Tools.
3. What is Redux DevTools? Also, explain the features of Redux DevTools?
It is a time travel environment that allows live editing for Redux with action replay, hot reloading, and customizable UI. For your ease, you can also use the extension of Redux DevTools in any of your browsers, Chrome, or firefox.
4. What is an action in Redux?
Actions are the plain JavaScript objects which contain a type field. Action can be considered as an event that can describe something that has happened in the application.
Always remember actions should contain a small amount of information that is needed to mention what has happened.
If you want to read more Interview questions related to Redux then you can visit here.
5. What is “store” in redux?
The Redux “store” carries together all the states, reducers, and actions that create the app. The store has multiple responsibilities:
- It holds the state of the current application from inside
- With the help of store.getState(); it allows access to the current state.
- With the help of the store.dispatch(action); it allows the state to be updated.
- With the help of the store.subscriber(listener); it allows to register listener callbacks.
Store Methods
- getState()
- dispatch(action)
- subscribe(listener)
- replaceReducer(nextReducer)
Example
import { createStore } from 'redux'
const store = createStore(todos, ['Use Redux'])store.dispatch(addTodo('Read the docs'))
store.dispatch(addTodo('Read about the middleware'))
6. How to add multiple middlewares to Redux?
For adding multiple middlewares to Redux, you can use applyMiddleware by which the developer can pass each piece of middleware as the new or another argument. As per your preferences, you just need to pass every single piece of middleware.
For instance, one can add the Redux Thunk and the logger middleware as the argument just as below:
Example
import { createStore, applyMiddleware } from 'redux'
const createStoreWithMiddleware = applyMiddleware(ReduxThunk, logger)(createStore);
7. How to structure Redux top-level directories?
All the applications have multiple top-level directories as mentioned below:
- Components: it is used for “dumb” React components that are unfamiliar with Redux.
- Containers: It is used for “smart” React components which are connected to the Redux.
- Actions: It is used for all the action creators, where the file name should be corresponding to the part of the app.
- Reducers: It is used for all the reducers where the file name is corresponding to the state key.
- Store: it is used for store initialization. This directory works best in small and mid-level size apps.
8. What are the downsides of Redux compared to Flux?
Instead of downsides, there are few compromises of using Redux over Flux that is listed below:
- You need to learn the avoiding of mutations: Flux is un-opinionated about mutating the data, however, Redux does not like mutations, and most of the packages which are complementary to Redux should never alter the state.
- You have to carefully pick your packages: While Flux principle does not try to solve the problems such as undo or redo, persistence, or the forms where Redux has extension points like store enhancers and middleware.
- No nice Flow integration yet: Flux allows you to do impressive static type checks that Redux does not support yet.
9. What are reducers in redux?
The reducers in redux are the pure functions that take the previous state and an action, and then it returns to the next state.
(previousState, action) => newState
It is known as the reducer because they are the type of function that would pass to Array.prototype.reduce(reducer, ?initialValue). It is very essential to ensure that the reducer stays pure.
To maintain this, there are few things that you should never do inside the reducer:
- Modify its argument
- Make sure not to perform some side effects such as routing transitions and API calls
- Call non-pure functions, e.g. Date.now() or Math.random().
Example
}
10. What is the purpose of the constants in Redux?
When you use an IDE, the constants allow you to find all the usages of specific functionality across the project. It also prevents you from making silly mistakes which are usually caused by typos; in that case, you will receive a Reference Error immediately.
11. What is Redux Thunk?
Redux Thunk middleware is something that allows the developers to write the action creators that return functions, not actions. The redux-thunk can also be used for postponing the dispatch of action or to dispatch just if a specific condition is met. The inner function gets the “store” methods dispatch and getState() as the parameters.
Originally published at.
|
https://bestinterviewquestion.medium.com/top-10-redux-interview-questions-fdee8954b461?responsesOpen=true&source=user_profile---------1----------------------------
|
CC-MAIN-2021-43
|
refinedweb
| 1,022
| 64
|
![if gte IE 9]><![endif]>
The MVC widgets are registered automatically when there is an attribute added before each controller class
[ControllerToolboxItem(Name =
"MyWidget1"
, Title =
, SectionName =
"MvcWidgets"
)]
public
class
MyWidget1Controller : Controller
Telerik.Sitefinity.Mvc.Proxy.MvcControllerProxy
Hey David, thanks for helping out!
I created brand new projects, with different URLs 3 times. I then created the MVC controller and corresponding views, with proper namespace and attribute decorated onto the controller. All 3 times, the widget did not show up *until* I created at least one page in the web app.
In fact, it didn't show up at all unless I created a page first. My workflow was=> Create custom template based a default Sitefinity one => attempt to add my widget to a container...no widget!
Once I created a page using a default template, then followed the above workflow, I was able to see my widget.
BUGS!
Hi guys,
I had the same issue as Gary and later Brenton. So firstly I couldn't register my own mvc widget to toolbox. As Stanislav wrote the cause was custom project name without word "SitefinityWebApp" so automatic registration is not working. I've done it manually according to his instructions and it appears on my toolbox. Nice ! ... not !
It didn't work beacuse I had the same problem as Brenton. My widget was returning NullReferenceException and I didn't know why ?
A few hours later... Me and my friend (Thx Rafal) figured out that manual way needs more information. When you have created node with your widget, you have to add into "Toolbox item parameters" new parameter called "ControllerName" and put there Key "ControllerName" and Value with the same string as before in field "Controller CLR Type" ( so path to your controller).
And this is how it should works !
P.S: At least for me ;)
All the best !
|
https://community.progress.com/community_groups/sitefinity/bugsandissues/f/297/t/45693
|
CC-MAIN-2019-09
|
refinedweb
| 307
| 65.01
|
I'm having a heckuva time figuring out how to create a persistent/permanent group objects that I can select from randomly to race each other over a preset course.
Right now, my racers are primitive spheres (NavMeshAgents). Right now, I can create five or ten spheres and assign them random speed and acceleration numbers and have them race. No problem there. I know how to do all that.
But, at some point, I want to create hundreds of objects and randomly select a few each time to race over one of my courses.
I know how to create the race course and the nav mesh and obstacles and bake them and all that...absolutely no problem with any of that stuff...I already have all that in place and it works fine. I can even manually create the racers and give them random speed/acceleration and have them race...that works fine too.
BUT...
I don't know how to create the List or Class or Group of racers. Once created, their speed and acceleration should remain the same...those should never change. The only thing that changes is that in one race, I might randomly select Racers 5, 17, and 22 to compete. In the next race, I might select 4, 13 and 26.
I've been looking up all sorts of stuff and I can't seem to figure out how to accomplish this.
Answer by ForbiddenSoul
·
Dec 29, 2016 at 09:09 AM
There are a whole bunch of ways to do this. You could for example have a scrip loop 100 times create a racer, and then add them to something like GameObject array, they will keep their initial properties, and you can just pick them by index number.
// Total number of racers you want
int numberOfRacers = 100;
// An array to store them all in
GameObject[] racers;
void MakeRacers()
{
// This will loop through as many times as you have set "numberOfRacers" to.
for (int i = 0; i < numberOfRacers; i++)
{
racers[i] = // the GameObject of a racer that your other code is making with the random values etc.
}
}
int numberOfRacersToSelect = 12;
GameObject racerToRace;
void RandomSelection()
{
// This will loop through as many times as you have set "numberOfRacersToSelect" to.
for (int i = 0; i < numberOfRacersToSelect; i++)
{
// Get a random float from 0 to the total number of racers you have, and have Mathf.RoundToInt make sure it
// is a usuable integer for our array
racerToRace = racers[Mathf.RoundToInt(Random.Range(0, numberOfRacers))];
// Note that there is nothing in place to stop you from rolling the same random number multiple times.
// CODE FOR WAHT TO DO WITH racerToRace GOES HERE
}
}
Edit: Indexers don't seem to be playing nice with Unity> I would use something else for now.
Thank you too. I'm going to look at what you have immediately. I appreciate you taking the time to help me.
Thanks for the heads up on selecting something twice. I've done this in C# outside of Unity...I'm just struggling with it inside the Unity Engine...I'm missing something basic and It's not clear what it is to me just yet. Outside Unity, I had put in an additional field to indicate if it was selected so it would not get selected twice. Once I figure out how to get all this into Unity, I don't think I'll have a problem selecting anything twice.
I'm still looking at this.
I'm still looking at this. I'm able to create the full universe of random racers...I think I'm doing fine at creating any number of these...and I list them up on a GUI to display them. No problem.
However, when I select three at random (or whatever number) I'm having great difficulty in figuring out how to assign one of the racers from the universe to my selection of current racers.
I've been trying a lot of ways to do this but I can't quite get it figured.
Sorry I took so long to reply, I have been busy over the holidays. Glad your making headway.
How is your universe setup? You say you can reference the racers in your GUI, can you not do the same or similar to reference them to your current racers?
I don't know if this will help at this point, but I rewrote the code to account for duplicate random numbers and handled racer selection a little differently.
The code is too many characters to be a comment so here is a link: RandomRacers.cs
Answer by allenallenallen
·
Dec 29, 2016 at 02:12 AM
How about creating a class with all the information stored as variables? And then have a list of that class saved. Whenever you want to select the racers, just recreate them again from the list.
Something like this?
using UnityEngine;
using System.Collections;
public class Racer: MonoBehaviour // Probably don't even need MonoBehaviour depending on what you do
{
public float speed;
public float acceleration;
// This is where you are making a new car.
public Racer (float s, float a){
speed = s;
acceleration = a;
}
}
And somewhere else, you create a list to store the Racers.
List<Racer> racers = new List<Racer>();
racers.Add( new Racer(4.0f, 5.0f)); // Obviously, you should make a for loop and use random values
And later you can reassign the values to an actual race car GameObject.
float mySpeed = racers[0].speed;
float myAccel = racers[0].acceleration;
Thanks. I'm going to look into your suggestion right now. I asked this question yesterday but I haven't got a chance to look at any answers yet.
Wow. I'm really looking hard at that List idea. Very strong. Occasionally, I would like to eliminate some racers and add others. Your list suggestion might be a way to simplify that greatly. It will take me some time to dissect your response. Thanks for your help.
$$anonymous$$y initial impression is that I would almost have to create this class and list outside monobehaviour in a "creation" script and then pull data from the class/list and insert it into my nav mesh agents (which are the racers) in a "selection" script that would inherit from monobehaviour.
...or something like that.
I don't know how to do it other than with two separate scripts like that but my thought is that should be workable. I'll be trying that today as I go along. It will take me some time as I'm working on other things too but I will post something up if it doesn't work.
I'm still looking at this. Haven't forgotten to Accept an answer.
I'm thinking I won't be able to get around inheriting from monobehaviour so I'll need to access these with "AddComponent" etc. But I'm having some difficulty (syntax, or something) in how to code it properly so I can use the racer from the universe pool in my current race.
I've been looking over the documentation for this and doing some searches, but I'm not having much luck as yet in figuring out how to write to store a class as a variable
1
Answer
A node in a childnode?
1
Answer
Removing a class object from a list(.remove not working) C#
1
Answer
Serialization - Variables won't change on original construction
1
Answer
Can I make a list of hashtables or classes with pragma strict
2
Answers
|
https://answers.unity.com/questions/1291526/how-do-i-create-5-random-racers.html
|
CC-MAIN-2021-04
|
refinedweb
| 1,257
| 73.17
|
SFTP is a simple and fairly reliable way to share the information within the organization. Let’s look at the situation when you need to pick up some files from a remote host with authorization by public key. And after that, let’s see how to use it with in python.
Moreover, let’s see how to work with SSH using python and execute any commands on the remote host. For example. if we need it to collect versions of installed packages and a version Linux distribution for further vulnerability analysis (see “Vulnerability Assessment without Vulnerability Scanner“). 😉
Generating public key:
cd ~ mkdir .ssh chmod 700 .ssh $ ssh-keygen Generating public/private rsa key pair. Enter file in which to save the key (/home/vmuser/.ssh/id_rsa): [PRESS ENTER] Enter passphrase (empty for no passphrase): [INPUT p@$phr4se PRESS ENTER] Enter same passphrase again: [INPUT p@$phr4se PRESS ENTER] Your identification has been saved in /home/vmuser/.ssh/id_rsa. Your public key has been saved in /home/vmuser/.ssh/id_rsa.pub. The key fingerprint is: 52:14:a4:33:71:0a:b9:46:25:73:a0:96:94:b3:3b:03 vmuser@localhost.localdomain The key's randomart image is: +--[ RSA 2048]----+ | ..=++.=. | | .ooo= * | | ++ .= . | |E.. o + | | . o . S | | + . | | o | | | | | +-----------------+
Here is the public key id_rsa.pub, which we send to the server owner, who will add it to the list of known keys:
$ ls /home/vmuser/.ssh/ id_rsa id_rsa.pub known_hosts
Once he does this, we can go to that host by SSH:
[vmuser@localhost pycharm-community-2017.1.3]$ ssh 192.168.56.20 Enter passphrase for key '/home/vmuser/.ssh/id_rsa': Last login: Mon Sep 3 10:50:10 2017 from 192.168.56.101 ...
And how download the files on the host from the python script? You will need to install pysftp:
# sudo pip install pysftp
To connect and download files from the ‘data/’ directory on the remote server to the local directory ‘data/’ we need to do something like this:
import pysftp sftp = pys(host = '192.168.56.20', private_key = '/home/vmuser/.ssh/id_rsa', private_key_pass = 'p@$phr4se') for file in s("data"): s(remotepath = "data/" + file, localpath="data/" + file )
Ok, we dealt with the downloading. Now let’s see how to execute commands with SSH in python using paramiko module. When we installed pysftp we also installed paramiko by dependencies. We can use authentication by keys and by password (commented):
#!/usr/bin/env python import paramiko command = 'cat /etc/centos-release; rpm -qa' try: client = paramiko.SSHClient() client.load_system_host_keys() client.set_missing_host_key_policy(paramiko.WarningPolicy) client.connect(hostname = '192.168.56.20', key_filename='/home/vmuser/.ssh/id_rsa', password = 'p@$phr4se') #client.connect(hostname = '192.168.56.20', username="vmuser", password="1") stdin, stdout, stderr = client.exec_command(command) And having a list of packages we can check them for vulnerabilities using your own scanner or Vulners API. print(stdout.read()) print(stderr.read()) finally: client.close()
The output will be like this:
CentOS Linux release 7.3.1611 (Core) NetworkManager-team-1.4.0-12.el7.x86_64 centos-release-7-3.1611.el7.centos.x86_64 NetworkManager-wifi-1.4.0-12.el7.x86_64 filesystem-3.2-21.el7.x86_64 lvm2-2.02.166-1.el7.x86_64 ncurses-base-5.9-13.20130511.el7.noarch kexec-tools-2.0.7-50.el7.x86_64 bind-license-9.9.4-37.el7.noarch rdma-7.3_4.7_rc2-5.el7.noarch ...
And having a list of packages we can check them for vulnerabilities using your own scripts or Vulners Audit API.
>.
…or you can use Ansible for that 😉
Pingback: Making Expect scripts for SSH Authentication and Privilege Elevation | Alexander V. Leonov
|
https://avleonov.com/2017/09/05/ssh-sftp-public-key-authentication-and-python/
|
CC-MAIN-2022-40
|
refinedweb
| 606
| 58.89
|
Classical cryptography and stenography are very fun to program. They are often used in various capture the flag programmer events. Classical substitution ciphers, like Caesar Cipher, are particularly fun, because they are simple enough to understand and crack with just a little bit of knowledge. One can often find puzzles, called Cryptograms or Cryptoquotes, in newspapers or online that challenge the user to figure out a popular phrase given the ciphertext version of it.
Caesar Cipher
Caesar Cipher is one of the simplest forms of substitution ciphers, because it is just a shift of the alphabet by a certain number of characters to create the ciphertext. Julius Caesar, for whom this cipher is named after, apparently used this cipher a lot with a shift of 3 (key = 3). With a key of 3, the letter 'a' becomes 'd', 'b' becomes 'e', 'c' becomes 'f', etc. You can learn more about Caesar Cipher on Wikipedia and Practical Cryptography.
Computer Science Assignment - Caesar Cipher Class
My computer science course asked me to write a class in Python that encrypts and decrypts messages using Caesar Cipher. There are numerous ways to solve this solution in Python. I developed a few solutions to this problem, but turned in this one, which passed all tests.
import string class CaesarCipher: """ Encrypt and decrypt messages using Caesar Cipher. Specify the number of letters to shift ( key ). All alphabetical characters will be encrypted and decrypted using key. All non-alphabetical characters will be included as-is in the ciphertext and plaintext. If key is not provided, it uses key = 3 by default. """ def __init__(self, key=3): """ Initializes Caesar Cipher using specified key. :param int key: The number of letters to shift. Default is 3. """ self.key = key % 26 # dict used for encryption - { plaintext letter : ciphertext letter, ... } self.e = dict(zip(string.ascii_lowercase, string.ascii_lowercase[self.key:] + string.ascii_lowercase[:self.key])) self.e.update(dict(zip(string.ascii_uppercase, string.ascii_uppercase[self.key:] + string.ascii_uppercase[:self.key]))) # dict used for decryption - { ciphertext letter : plaintext letter, ... } self.d = dict(zip(string.ascii_lowercase[self.key:] + string.ascii_lowercase[:self.key], string.ascii_lowercase)) self.d.update(dict(zip(string.ascii_uppercase[self.key:] + string.ascii_uppercase[:self.key], string.ascii_uppercase))) def encrypt(self, plaintext): """Converts plaintext to ciphertext. :param str plaintext: The message to encrypt. :return: The ciphertext. :rtype: str """ return ''.join([self.e[letter] if letter in self.e else letter for letter in plaintext]) def decrypt(self, ciphertext): """ Converts ciphertext to plaintext. :param str ciphertext: The message to decrypt. :return: The plaintext. :rtype: str """ return ''.join([self.d[letter] if letter in self.d else letter for letter in ciphertext])
The majority of the code is Docstring documentation. I am learning to properly document functions and classes in Python so bear with me. PyCharm is assisting me quite well in this area.
I chose this solution, because I am trying to learn more about Python libraries, built-in functions, and special language features. In particular, I liked the use of string constants, zip function, dictionaries, and list comprehensions in this version. There are simpler solutions to Caesar Cipher, but this was the most fun to create.
Python String Constants
One of the really useful things I love about Python is its string constants. I find these extremely useful in my course assignments. As you can see in the
CaesarCipher Class, I am using the string constants: string.ascii_lowercase and string.ascii_uppercase to build the cipher map. The Caesar Cipher only converts alphabetical characters and these constants provide a nice list of the characters. There is also string.ascii_letters, string.punctuation, and string.whitespace that I use in my unit tests to make sure non-alphabetical characters are passed through as-is and encryption is working correctly for all letters. I realize this is a basic feature of Python, but I find I use string constants a lot in Python.
Zip Function in Python
The zip function in Python is particularly useful when combined with dictionaries, but is useful anytime you have multiple iterable sequences and you want to combine them to form a single list of tuples. I am using the
zip function to build the cipher map for the Caesar Cipher. For encryption,
zip combines the following lists into a list of tuples for a Caesar Cipher key of 3. It does this for both lower and uppercase letters and for decryption as well, but I am just showing it for encryption of lowercase letters.
Before Zip: ['a','b','c','d', ... ,'x','y','z'] and ['d','e','f','g', ... ,'a','b','c']
After Zip: [('a','d'),('b','e'),('c','f'),('d','g'), ... ,('x','a'),('y','b'),('z','c')]
Python Dictionaries
Python dictionaries can be created from a sequence of key-value pairs, and that is exactly what the
zip function is providing. I build a dictionary based on the list of tuples created by the
zip function in Python. The encryption dictionary is created as such.
e = dict(('a','d'),('b','e'),('c','f'),('d','g'), ... ,('x','a'),('y','b'),('z','c'))
e = {'a':'d','b':'e','c':'f','d':'g', ... , 'x':'a','y':'b','z':'c'}
This provides the encryption lookup for each lowercase letter in the plaintext message.
List Comprehensions
List comprehensions are really cool in Python and I still don't have their use memorized. I keep a cheat sheet near my desk. The actual encryption and decryption are done using list comprehensions. Here is the list comprehension for encryption.
def encrypt(self, plaintext): return ''.join([self.e[letter] if letter in self.e else letter for letter in plaintext])
This boils down to looping through each character in the plaintext and checking to see if the character is in the encryption dictionary. If it is, it adds the encrypted value to the ciphertext. If not, it just passes the character as-is into the ciphertext. The list comprehension creates a list and
''.join(...) creates a string based on that list. Really cool!
Unit Tests for Caesar Cipher
In addition to learning about documenting my modules, classes, and functions, I am also currently learning to write unit tests and Doctests. I don't have full coverage on my
CaesarCipher class, but I'm pretty confident it works well. I separated my tests into 1) key tests, 2) encryption tests, and 3) decryption tests. Key tests test that for all 25 unique keys I get the expected results. For example, Caesar Cipher with a key of 0 provides no encryption at all.
def test_key_0(self): c = CaesarCipher(0) plaintext = string.ascii_letters ciphertext = c.encrypt(plaintext) self.assertEqual(ciphertext, string.ascii_letters)
You can see where I am using a new Python string constant mentioned earlier: string.ascii_letters.
Encryption and decryption tests test to make sure those functions are working accordingly. There is some overlap here, but an example test is that punctuation is passed through as-is.
def test_encrypting_punctuation(self): ciphertext = self.cipher.encrypt(string.punctuation) self.assertEqual(ciphertext, string.punctuation)
Again, if I just pass in a string of punctuation characters for the plaintext, I should get the same output as my ciphertext.
I actually love writing unit tests, because as I was refactoring my code, my unit tests were continually making sure the correctness did not change.
About This Solution
Many of the Caesar Cipher solutions convert everything to lowercase, but I use both lowercase and uppercase so you are free to use both in the plaintext and ciphertext messages. Many solutions also remove non-alphabetical characters, like punctuation and whitespace, but I pass them through as-is. The reason for this is because I had the option in the assignment, and I thought this would be more fun when trying to solve them manually. It makes the ciphertext more like a Cryptogram or Cryptoquote and could be fun to send to people to solve.
Conclusion
Creating a Python class to perform encryption and decryption using the Caesar Cipher was a lot of fun. I enjoy classical cryptography and stenography, and it's fun to explore cool features in Python that make programming elegant. I have some other examples of Caesar Cipher in Python as well as other classical ciphers (rail-fence cipher, etc.) that I will share later. I also have solutions to help crack the Caesar Cipher, and that will be fun to share, too.
I hope to see you on twitter. I am @KoderDojo. Best wishes!
|
https://www.koderdojo.com/blog/caesar-cipher-in-python-classical-cryptography
|
CC-MAIN-2022-21
|
refinedweb
| 1,400
| 57.98
|
Low Ceremony, High Value: A Tour of Minimal APIs in .NET 6
In this post, let's take a tour of Minimal APIs in .NET 6.
This post was originally published on the Telerik Blog.
When developing APIs in ASP.NET Core, you're traditionally forced into using ASP.NET Core MVC. Going against many of the core tenets of .NET Core, MVC projects give you everything and the kitchen sink. After creating a project from the MVC template and noticing all that it contains, you might be thinking: all this to get some products from a database? Unfortunately, with MVC it requires so much ceremony to build an API.
Looking at it another way: if I'm a new developer or a developer looking at .NET for the first time (or after a long break), it's a frustrating experience—not only do I have to learn how to build an API, I have to wrap my head around all I have to do in ASP.NET Core MVC. If I can build services in Node with just a few lines of code, why can't I do it in .NET?
Starting with .NET 6 Preview 4, you can. The ASP.NET team has rolled out Minimal APIs, a new, simple way to build small microservices and HTTP APIs in ASP.NET Core. Minimal APIs hook into ASP.NET Core's hosting and routing capabilities and allow you to build fully functioning APIs with just a few lines of code. This does not replace building APIs with MVC—if you are building complex APIs or prefer MVC, you can keep using it as you always have—but its a nice approach to writing no-frills APIs.
In this post, I'll give you a tour of Minimal APIs. I'll first walk you through how it will work with .NET 6 and C# 10. Then, I'll describe how to start playing with the preview bits today. Finally, we'll look at the path forward.
Write a Minimal API with Three Lines of Code
If you want to create a Minimal API, you can make a simple GET request with just three lines of code.
var app = WebApplication.Create(args); app.MapGet("/", () => "Hello World!"); await app.RunAsync();
That's it! When I run this code, I'll get a
200 OK response with the following:
HTTP/1.1 200 OK Connection: close Date: Tue, 01 Jun 2021 02:52:42 GMT Server: Kestrel Transfer-Encoding: chunked Hello World!
How is this even possible? Thanks to top-level statements, a welcome C# 9 enhancement, you can execute a program without a namespace declaration, class declaration, or even a
Main(string[] args method. This alone saves you nine lines of code. Even without the
Main method, we can still infer arguments—the compiler takes care of this for you.
You'll also notice the absence of
using statements. This is because by default, in .NET 6, ASP.NET Core will use global usings—a new way to declare your
usings in a single file, avoiding the need to declare them in individual source files. I can keep my global usings in a devoted
.usings file, as you'll see here:
global using System; global using System.Net.Http; global using System.Threading.Tasks; global using Microsoft.AspNetCore.Builder; global using Microsoft.Extensions.Hosting; global using Microsoft.Extensions.DependencyInjection;
If you've worked with Razor files in ASP.NET Core, this is similar to using a
_Imports.razor file that allows you to keep
@using directives out of your Razor views. Of course, this will be out-of-the-box behavior but doesn't have to replace what you're doing now. Use what works best for you.
Going back to the code, after creating a
WebApplication instance, ASP.NET Core uses
MapGet to add an endpoint that matches any
GET requests to the root of the API. Right now, I'm only returning a string. I can use lambda improvements to C# 10 to pass in a callback—common use cases might be a model or an Entity Framework context. We'll provide a few examples to show off its flexibility.
Use HttpClient with Minimal APIs
If you're writing an API, you're likely using
HttpClient to consume APIs yourself. In my case, I'll use the
HttpClient to call off to the Ron Swanson Quotes API to get some inspiration. Here's how I can make a
async call to make this happen:
var app = WebApplication.Create(args); app.MapGet("/quote", async () => await new HttpClient().GetStringAsync("")); await app.RunAsync();
When I execute this response, I'll get a wonderful quote that I will never disagree with:
HTTP/1.1 200 OK Connection: close Date: Fri, 04 Jun 2021 11:27:47 GMT Server: Kestrel Transfer-Encoding: chunked ["Dear frozen yogurt, you are the celery of desserts. Be ice cream or be nothing. Zero stars."]
In more real-world scenarios, you'll probably call
GetFromJsonAsync with a model, but that can be done just as easily. Speaking of models, let's take a look to see how that works.
Work with Models
With just an additional line of code, I can work with a
Person record. Records, also a C# 9 feature, are reference types that use value-based equality and help enforce immutability. With positional parameters, you can declare a model in just a line of code. Check this out:
var app = WebApplication.Create(args); app.MapGet("/person", () => new Person("Bill", "Gates")); await app.RunAsync(); public record Person(string FirstName, string LastName);
In this case, the model binding is handled for us, as we get this response back:
HTTP/1.1 200 OK Connection: close Date: Fri, 04 Jun 2021 11:36:31 GMT Content-Type: application/json; charset=utf-8 Server: Kestrel Transfer-Encoding: chunked { "firstName": "Bill", "lastName": "Gates" }
As we get closer to the .NET 6 release, this will likely work with annotations as well, like if I wanted to make my
LastName required:
public record Person(string FirstName, [Required] string LastName);
So far, we haven't passed anything to our inline lambdas. If we set a
POST endpoint, we can pass in the
Person and output what was passed in. (Of course, a more common ideal real-world scenario would be passing in a database context. I'll leave that as an exercise for you, as setting up a database and initializing data is outside the scope of this post.)
var app = WebApplication.Create(args); app.MapPost("/person", (Person p) => $"We have a new person: {p.FirstName} {p.LastName}"); await app.RunAsync(); public record Person(string FirstName, string LastName);
When I use a tool such as Fiddler (wink, wink), I'll get the following response:
HTTP/1.1 200 OK Connection: close Date: Fri, 04 Jun 2021 11:36:31 GMT Content-Type: application/json; charset=utf-8 Server: Kestrel Transfer-Encoding: chunked We have a new person: Ron Swanson
Use middleware and dependency injection with Minimal APIs
Your production-grade APIs—no offense, Ron Swanson—will need to deal with dependencies and middleware. You can handle this all through your
Program.cs file, as there is no
Startup file out of the box. When you create a
WebApplicationBuilder, you have access to the trusty
IServiceCollection to register your services.
Here's a common example, when you want only to show exception details when developing locally.
var builder = WebApplication.CreateBuilder(args); var app = builder.Build(); if (app.Environment.IsDevelopment()) { app.UseDeveloperExceptionPage(); } // endpoints
There's nothing against you creating a
Startup file yourself as you always have, but you can do it right here in
Program.cs as well.
Try Out Minimal APIs Yourself
If you'd like to try out Minimal APIs yourself right now, you have two choices: live on the edge or live on the bleeding edge.
Live On The Edge: Using the Preview Bits
Starting with Preview 4, you can use that release to explore how Minimal APIs work, with a couple of caveats:
- You can't use global usings
- The lambdas will be casted
Both of these are resolved with C# 10, but the Preview 4 bits use C# 9 for now. If you want to use Preview 4, install the latest .NET 6 SDK—I'd also recommend installing the latest Visual Studio 2019 Preview. Here's how our first example would look. (I know, six lines of code. What a drag.)
using System; using Microsoft.AspNetCore.Builder; using Microsoft.Extensions.Hosting; var app = WebApplication.Create(args); app.MapGet("/", (Func<string>)(() => "Hello World!")); await app.RunAsync();
If you want to start with an app of your own, you can execute the following from your favorite terminal:
dotnet new web -o MyMinimalApi
Living on the Bleeding Edge: Use C# 10 and the latest compiler tools
If you want to live on the bleeding edge, you can use the latest compiler tools and C# 10.
First, you'll need to add a custom
nuget.config to the root of your project to get the latest tools:
<?xml version="1.0" encoding="utf-8"?> <configuration> <packageSources> <!--To inherit the global NuGet package sources remove the <clear/> line below --> <clear /> <add key="nuget" value="" /> <add key="dotnet6" value="" /> <add key="dotnet-tools" value="" /> </packageSources> </configuration>
In your project file, add the following to use the latest compiler tools and enable the capability for the project to read your global usings from a
.usings file:
<ItemGroup> <PackageReference Include="Microsoft.Net.Compilers.Toolset" Version="4.0.0-2.21275.18"> <PrivateAssets>all</PrivateAssets> <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets> </PackageReference> </ItemGroup> <ItemGroup> <Compile Include=".usings" /> </ItemGroup>
Then, you can create and update a
.usings file, and you are good to go! I owe a debt of gratitude to Khalid Abuhakmeh and his CsharpTenFeatures repo for assistance. Feel free to refer to that project if you have issues getting the latest tools.
What does this mean for APIs in ASP.NET Core?
If you're new to building APIs in ASP.NET Core, this is likely a welcome improvement. You can worry about building APIs and not all the overhead that comes with MVC.
If you've developed ASP.NET Core APIs for a while, like me, you may be greeting this with both excitement and skepticism. This is great, but does it fit the needs of a production-scale API? And when it does, will it be hard to move over to the robust capabilities of ASP.NET Core MVC?
With Minimal APIs, the goal is to move out core API building capabilities—the ones that only exist in MVC today—and allow them to be used outside of MVC. When extracting these components away to a new paradigm, you can rely on middleware-like performance. Then, if you need to move from inline lambdas to MVC and its classes and controllers, the ASP.NET team plans to provide a smooth migration for you. These are two different roads with a bridge between them.
If you think long-term, Minimal APIs could be the default way to build APIs in ASP.NET Core—in most cases, it's better to start off small and then grow, rather than starting with MVC and not leveraging all its capabilities. Once you need it, it'll be there.
Of course, we've only scratched the service in all you can do with Minimal APIs. I'm interested in what you've built with them. What are your thoughts? Leave a comment below.
|
https://www.daveabrock.com/2021/06/09/low-ceremony-high-value-a-tour-of-minimal-apis-in-net-6/
|
CC-MAIN-2021-39
|
refinedweb
| 1,924
| 66.54
|
LoPy stops detecting BLE advertisements
- papasmurph last edited by papasmurph
Product: LoPy
Firmware: 1.6 something (updated a few days ago)
My test code logs all detected iBeacons (that use BLE). After a seemingly arbitrary time, but after maybe around an hour or so, it no longer detects any beacons. After a reset it works again.
get_adv returns, yet with nothing, so it doesn't hang. I'm aware of the max length of 8 for the buffer, but there's plenty of slack, so it doesn't fill up.
It also seems it misses detections while it works: If I use a beacon that pings each second, it will detect such pings for a while, then stop for a few seconds, then do it again. Hopefully there's no filtering or time window used. The beacon is very close, so it's always well within -60 dBm. At the same time I have many beacons out of range.
I've just added a timeout for silence, that calls start_scan again (if now that will work), but at that time I might have missed out on several detections already.
I don't believe it's hardware related, considering it immediately works again after reboot, but who am I to say.
from network import Bluetooth import pycom import time pycom.heartbeat(False) bluetooth = Bluetooth() bluetooth.start_scan(-1) timeSleep = 0.05 timeout = 0 while True: pycom.rgbled(0x202020) time.sleep(timeSleep) device = bluetooth.get_adv() if device != None: timeout = 0 d = [b for b in device.data] rssi = device.rssi if rssi > -60 and d[5] == 0x4c and d[6] == 0x00 and d[7] == 0x02 and d[8] == 0x15: majorid = d[25] * 256 + d[26] minorid = d[27] * 256 + d[28] power = d[29] pycom.rgbled(0x008000) time.sleep(0.10) print(str(rssi) + '/' + str(majorid) + '/' + str(minorid) + '/' + str(power)) else: pycom.rgbled(0x000020) print('!') else: pycom.rgbled(0x200000) print('?') timeout += timeSleep + timeSleep if timeout > 10: bluetooth.start_scan(-1) timeout = 0 time.sleep(timeSleep)
My lopy4 arrived and i tested with it, too, although i was pretty sure it wouldn't make a difference. It didn't, it does the same thing.
I left my lopy running since yesterday noon and from 270 readings, 223 were correct. So that's around 17-18% loss.
Hello,
I'm having the same problems with reading the BT advs. My setup is a lopy with a deepsleep shield and 3 ruuvi tags that adv every second. The ruuvis are placed about 3 meters apart. The lopy is on 1.17.0.b1.
The process is: read BT data, go to deepsleep for 15 sec, wake up, read BT data, deepsleep, etc.
I have tried first with the BT example in the pycom docs. The scan was set to 20 seconds. In about 40% of the cases the lopy would not read the data from all the 3 ruuvis.
Then i tried with the deinit/init. Scan for 10 sec, if not all 3 ruuvis read, deinit/init, scan for another 10 sec. It reduced the loss to only about 10%.
In the real case the deepsleep interval will be 1 hour. So, if it doesn't read all the ruuvis, some results will be 2 hours apart or even more, depending on luck, i guess :)
Is there any plan or update on this problem?
Thank you
only the firmware :-)
I did not touch anything in my code because I was pretty sure it was working as is.
Now I am patiently waiting for the death (hope not) of two WiPy 3 with the very latest firmware that are running since 47 hours.
Yet another update:
This is a WiPy 1.0 with a not-so-recent firmware. Running since two weeks and counting. BLE, WiFi and MQTT are running without hiccups :D
{"ts":1517336479,"pmac":"24xxxxxxxxxx","up":1302802,"fw":"1.12.0.b1","ip":"192.168.0.111"}
Follow up:
I restarted both WiPy 3 giving them another chance without modifying code at all, and this time was better: more than 24 hours of normal behavior without reboots or BLE lockups. Still, BLE acts randomly in the long run.
BTW there is an IDF related issue on Github that could lead to weird results on the BLE side, as I read.
Been there, done that. Unfortunately the culprit
is notdoes not seem lack of memory (one WiPy3 still working, the other went south like all WiPy 2).
Anyway, I have collection in code as soon as a ble scan cycle ends.
@duffo64
If it survived only on wipy3, this can point me that memory is here problem (only guess).
Did you tried track it down and print
gc.mem_free()in the code to see memory consumption?
And do you have
gc.collect()in the code or you have enabled automatic garbage collector?
@alanm101
Despite my initial enthusiasm, I must say that the situation is slightly better using 1.10.2.b1. Basically I had two issues:
- my beacons were not recognized, and this is solved
- callback for advertisements stops firing after a while, but I know that the program is not frozen somewhere. This still unsolved
The very same script is loaded on four WiPy 2.0, two WiPy 3.0 and one LoPy. Only one WiPy 3.0 survived after 18 hours. Sigh.
Thanks! I will test this weekend.
Wow...
On 1.10.1.b1 it's really, really, really better !
...did I say "really" ?
@alanm101
This is a small Arduino sketch on ESP32 Thing (Sparkfun) running without a glitch since this morning.
Moreover, it's happily seeing even my EMBC02 beacons. So, should I presume that in the end it's a micropython issue ?
@jmarcelino This hack is working for me so far. If I don't detect a BLE event within 10 seconds, I machine.reset().
from network import Bluetooth import machine import time import binascii import gc beaconlist = ['d3e0f6cd4ca9','e9f1db4814ef'] beaconevents = [] timelastdata = time.time() def new_adv_event(event): global beaconlist, beaconevents, timelastdata if event.events() == Bluetooth.NEW_ADV_EVENT: anydata = True while anydata: adv = bluetooth.get_adv() if adv != None: timelastdata = time.time() devid = binascii.hexlify(adv[0]).decode('utf-8') rssi = str(adv[3]*-1) if devid in beaconlist: if len(beaconevents) > 5: beaconevents.pop(0) beaconevents.append([devid, rssi]) else: anydata = False print('Starting BLE scan') bluetooth = Bluetooth() bluetooth.callback(trigger = Bluetooth.NEW_ADV_EVENT, handler = new_adv_event) bluetooth.init() bluetooth.start_scan(-1) cycles = 0 p_in = machine.Pin('G17',machine.Pin.IN, pull=machine.Pin.PULL_UP) while True: if p_in() == 0: print('pin') bluetooth.stop_scan() break cycles += 1 # Run garbage collector every 20 cycles. if cycles%20 == 0: gc.collect() # If no BLE event for 10 seconds, hard reset if time.time() - timelastdata > 10: machine.reset()
- jmarcelino last edited by
@alanm101
Starting on it tonight! I got this to be partially funded by my current client so I’ll have more time for it :)
@jmarcelino Hi. Have you had a chance to work on this?
Regards,
Alan.
I'm no longer convinced that this is a Lopy/uPython issue. I hacked raw C code and ran on a generic ESP32 using newly acquired knowledge. The code failed. It could be my hacks were wrong or perhaps the ESP32 or the IDF is screwed. I'm now trying new uPython code on the Lopy.
Test1: Single beacon using ESP32 at one second intervals.
Changes to ble_adv/main/app_bt.c
Lines 157-158:
uint16_t adv_intv_min = 256x3; // 160ms
uint16_t adv_intv_max = 256x3; // 160ms
<The 'x' is replaced with an asterisk. Some issue with uploading.>
Result: 2 hours with no problems.
Test2: Single beacon using ESP32 at one second intervals.
Changes to ble_adv/main/app_bt.c
Lines 157-158:
uint16_t adv_intv_min = 256x2; // 160ms
uint16_t adv_intv_max = 256x2; // 160ms
<The 'x' is replaced with an asterisk. Some issue with uploading.>
Result: 50 minutes, advs no longer detected.
@alanm101 My ignorance knows no bounds. The adv is broadcast on 3 channels, hence the multiple events per broadcast. My new tests are <cough> compensating for that.
|
https://forum.pycom.io/topic/1255/lopy-stops-detecting-ble-advertisements/123
|
CC-MAIN-2019-22
|
refinedweb
| 1,330
| 77.43
|
ActiveReportJS Viewer component includes
ActiveReportsJS API allows to print a report programmatically. Here is the example of code that loads, runs, and prints a report:
import { Core } from "@grapecity/activereports"; const report = new Core.PageReport(); await report.load("/reports/text-only.rdlx-json"); const doc = await report.run(); doc.print();
Alternatively a report can be exported to the PDF or HTML document with
autoPrint option is set to
true. In that case the print dialog of the document viewer appears when the document is opened. Check Export page for more information on how to use Export functionality of ActiveReportsJS.
Submit and view feedback for
|
https://www.grapecity.com/activereportsjs/docs/v2/DeveloperGuide/ActiveReportsJSViewer/Print
|
CC-MAIN-2022-05
|
refinedweb
| 104
| 51.04
|
We
Duration″, ” l2j1a9 “, “l2j1a9 “,
“a l2j1a9 “, ” l2j1a9 a”, “lwj1a9″, ” ‘{0}’ matches the requirement”, s);
} else {
Console.WriteLine(“String ‘]));
}
}
It depends, but I’d have to say in most cases performance isn’t the issue. Particular in validating user input, which doesn’t happen that frequently. So RTI’ing string validation code to shave off a few clock ticks is probably a waste of a programmer’s time.
Programmers should instead focus on which technique is clearer to understand for other programmers. Regexs have a reputation of being hard to read. But at least, in your simple example…
new Regex(@"wdwdwd");
…I find that very easy to read. Easier then the StringMatch method. So I’d probably use the Regex.
Isn’t this quite unfair to the regexes? Of course, they tend to be slower, but there are a couple of things here:
1. You create a new RegEx each time. True, they’re cached internally, but for simple expressions, the "new" line there will use much of the time.
2. If I don’t miss anything, you don’t compile the regexes to IL, which is basically what you do by writing a StringMatch method on your own. If you’re going to run 20000 iterations of an expression that is constant enough that you consider hardcoding it in C#, the costs of compiling the regex also seem low.
I know you tried to make a point here, but there are certainly cases where just turning the verification into C# is a bit more complicated and the regex still may be slow.
It would be interesting to see what numbers you get if you keep one static, compiled Regex instance.
Ah, I went ahead and tried it on my own.
Standard test gives 2188984.
"Unoptimzed" regex, compiled, externally gives 3439832. (only a factor 1.6, compare to your 18!)
"Optimized" regex, compiled, externally gives 3752544. The trim call and length check actually made things slower!
I then tried the unoptimized regex with new instances like your original version, but still only get a factor 7 difference, so I guess that my system (with the 1.1 framework) simply behaves differently.
Also, marking the two groups as non-capturing brought the factor down to 1.3 on my machine, but it is highly dependent on low timing resolution…
Wow, thanks for jumping on me quickly all, that’s great. Without a shadow of a doubt, there’s ways to make RegEx do this way better (CN points this out pretty well by showing that compiling the Regex can help significantly: just remeber, you don’t want too many precompiled regexes, it ends up being a bind), but I was mostly trying to illustrate that if you can do things cheaply and easily yourself, go ahead. I love that people were leaping up in defense of RegEx, but know when to try something else: if performance is critical especially (in this case). We do this in plenty of places in the framework, where we need to optimize the common scenario above others. One example is providing overloads for members that take base types (check out Console.WriteLine as an example), to help ensure that operations for those specific types (the most commonly used) are as optimized as possible. I would urge a similar approach here the reality is.. somewhat less exciting.
Well, I think the most important lesson is that the optimized Regex, relying on String.Trimming first, was actually inferior to the "unoptimized" regex, while both are inferior to C#.
Of course, there are good reasons to optimize for the common case, but it’s just as important to check that the optimization really gives you a benefit. I’ve seen things like keeping WeakReferences to a lot of objects to allow aggressive GC if needed. The only problem with the code was that the data protected was a simple combination of a DateTime and a primary ID from the database. That made the WeakReference itself just as heavy as the object referenced…
Just my $ 0.02.
FWIW, here’s what I get on an Athlon FX-53:
Unoptimized = 7x slower
Optimized = 6.6x slower
(move Regex r up to outer loop)
Unoptimized = 2.4x slower
I think most of the perf penalty is just instantiating the Regex object over and over.
> "We do this in plenty of places in the framework"
And I agree, you should. But I dislike working with other developers who have delusions of grandeur and think they’re working on the framework too. Hint: they’re not.
If I move the Regex declaration up there to be static and put in RegexOptions.Compiled, the performance of RegEx will be much better only 50% slower. (1093750 vs 1562500)
The numbers gets even closer than 50% if you do a better Trimmming of your strings 🙂
You call trim to much man!
Also call the regex once before timing, so the dll is compiled and loaded.
My number are now
781260 vs 1093764 on a P4 2.8
I’ve made some measurements with an edited version of the code you posted. Specifically, I made some changes:
* Made the regex shared and compiled (to give it similar advantages to C# code)
* Warmed up both loops before timing them
* Made the number of iterations configurable on the command line
(I’m not posting it here for brevity, and because indenting gets lost in the blog comment formatter).
I’ve timed the code for iterations between 100000 and 1000000 (that is, 100 thousand and 1 million) in 10 steps of 100 thousand.
Here’s the raw data as it is on my machine:
100000,3281271,5937538,0.552631579
200000,5937538,11250072,0.527777778
300000,8906307,17031359,0.52293578
400000,11875076,22812646,0.520547945
500000,14687594,28593933,0.513661202
600000,17812614,35781479,0.497816594
700000,20781383,41406515,0.501886792
800000,24531407,45469041,0.5395189
900000,26718921,51406579,0.519756839
1000000,29531439,56719113,0.520661157
The leftmost column is iterations, the second column is the C# way, the third column is the Regex way, and the fourth column is the 2nd divided by the 3rd.
Some points:
1. The .Ticks field has low granularity: the number 5937538 occurs twice, (see 100000 and 200000)
2. The scaling relationship is roughly flat (at 600000 it looks like we might have an upset because the Regex looks like it keeps on improving, but then things get jittery and randomness creeps in)
3. The scaling factor is roughly 50%
Conclusions:
1. Don’t use Ticks for measuring code performance, unless relatively large time periods are involved (that is, at the very least a couple of seconds).
2. There is no algorithmic difference between the two methods.
3. The performance gain of the C# method versus its inherent complexity increase / readability loss / flexibility loss is not sufficient to warrant its use.
In summary: don’t use this C# code. Stick to REs.
static bool StringMatch2(string s) {
if (s.Trim().Length != 6)
return false;
return (Char.IsLetter(s, 0) && Char.IsDigit(s, 1) &&
Char.IsLetter(s, 2) && Char.IsDigit(s, 3) &&
Char.IsLetter(s, 4) && Char.IsDigit(s, 5));
}
Well I’ve never seen such a good dose of feedback on an issue: awesome. Obviously I’ll need to be more careful on making the right code available next time. I think Jeff had an especially good point on making sure we’re aware of reality in these situations
I lost the reference, but one of your colleagues once wrote a blog entry on the RegexOptions.Compiled flag. To summarize, there are basically a few different modes of use and re-use for regexes.
Disclaimer: this is straight from memory, there may be errors…
First, the one-shot use, like in your post. This has a significant overhead cost of creating the regex. Instead of creating the Regex object on each call, you can as well call the static overloads of the methods in the Regex class. Creating a Regex instance in this case is mere overhead that makes your code less readable.
Second, the uncompiled, but cached use. Create the Regex instance once, but without the RegexOption.Compiled flag and store it somewhere. Then reuse that same regex instance. For many practical situations, putting it in a static field for the class using the regex works well. In other situations (typical for regexes that are not known at compile time), use nonstatic fields. This works faster than your solution in all situations. From my experience, this way of using Regexes is the best in most practical situations.
Third, the compiled and cached use. Create the regex instance once, but with the RegexOptions.Compiled flag. Your regex will be faster (about 30% IIRC), but at a very significant extra startup cost (you are creating a new dynamic assembly for each regex). Initially I used this option a lot in my code, but I noticed that the overhead cost is often more than it is worth! Measuring performance is the key to find out if this route is right for you.
Fourth: the Regex class provides options to compile a set of Regexes to a ‘real’ assembly, which can be referenced from your project. This is probably the fastest option for runtime, but too much of a hassle at design/compile time in most cases.
Kit George is the program manager for the .NET Base Class Library team. Kit recently posted an entry on the BCL blog describing a solution to a customer problem: We recently got asked this question by a customer: "In…
PingBack from
PingBack from
|
https://blogs.msdn.microsoft.com/bclteam/2005/02/21/knowing-when-not-to-use-regex-to-match-strings-system-text-regularexpressions-kit-george/
|
CC-MAIN-2016-50
|
refinedweb
| 1,599
| 73.07
|
Docs |
Forums |
Lists |
Bugs |
Planet |
Store |
GMN |
Get Gentoo!
Not eligible to see or edit group visibility for this bug.
View Bug Activity
|
Format For Printing
|
XML
|
Clone This Bug
dev-libs/apr-1.2.11 was released (tagged on 2007-09-04).
dev-libs/apr-util-1.2.10 was released (tagged on 2007-09-04).
Changes with APR-util 1.2.10
*) Support BerkeleyDB 4.6. [Arfrever Frehtes Taifersar Arahesis]
*) Test improvements to validate testmd4 and testdbm, unattended.
[Bojan Smojver]
Changes with APR-util 1.2.9
*) Ensure that an apr_reslist shrinks back to SMAX via the TTL by
reorganising the resource list from a queue to a stack.
PR 40348. [Christian BOITEL <christian_boitel yahoo.fr>]
*) Fix Solaris 2.8+ fdatasync() detection. The fdatasync() function
is marked as part of the Realtime library functions.
PR 37343. [Davi Arnaut]
*) Change configure's expat detection to use standard linker-based
tests, fixing build on e.g. biarch Linux systems. PR 28205.
[Joe Orton, Ruediger Pluem]
*) Portably implement testdate's long-time constants to solve
compilation faults where #LL isn't valid. [Curt Arnold]
*) APR_FIND_APU macro no longer checks /usr/local/apache2/.
PR 42089. [Colm MacCárthaigh]
*) Fix handling of attribute namespaces in apr_xml_to_text() when
a namespace map is provided. PR 41908. [Joe Orton]
Changes for APR 1.2.11
*) Win32 apr_file_read; Correctly handle completion-based read-to-EOF.
[Steven Naim <steven.naim googlemail.com>]
*) Fixed Win32 regression of stdout inheritance in apr_proc_create.
[William Rowe]
Changes for APR 1.2.10
*)). [Erik Huelsmann
<ehuels gmail.com>]
*) Fix day of year (tm_day) calculation for July. The bug only affects
Windows builds. PR 42953. [Davi Arnaut]
*) Fix LFS detection when building over NFS. The mode must be
specified when O_CREAT is in the flags to open().
PR 42821. [Rainer Jung <rainer.jung kippdata.de>]
*) Avoid overwriting the hash_mutex table for applications that
incorrectly calls apr_atomic_init(). PR 42760. [Davi Arnaut]
*) Allow IPv6 connectivity test to fail, avoiding a potentially fatal
error. [Davi Arnaut]
*) The MinGW Windows headers effectively redefines WINADVAPI from
__stdcall to empty which results in a link failure when wincrypt.h
is placed after an include to apr_private.h.
PR 42293. [Curt Arnold]
*). Fixes broken sendfile with LFS support on HP-UX.
PR 42261. [Davi Arnaut]
in cvs
|
http://bugs.gentoo.org/191733
|
crawl-002
|
refinedweb
| 375
| 62.24
|
Smart Image Cropping with katna
.
The best way to learn about Katna smart cropping module is to actually use it.
Installation
Katna can be installed either via PyPI or directly from source.
PyPI — Installation via PyPI is pretty straight forward
- Install python 3
- Install with pip
pip install katna
Install from source — Follow the steps below
- Install git.
- Install python 3.
- Clone the git repo.
git clone
Change the current working directory to the folder where you have cloned Katna repo.
cd <<path_to_the_folder_repo_cloned>>
If you use the anaconda python distribution then create a new conda environment. Keeping environment separate is a good practice.
conda create --name katna python=3
source activate katna
Run the setup
python setup.py install
How to use katna image module
Import the video module from the katna library
from Katna.image import Image
Instantiate the image class.
img_module = Image()
Image class offers different cropping methods for different inputs. here is a quick list of different cropping methods
- crop_image: — To crop an image file. This function is useful when you have bunch of images to be cropped.
- crop_image_from_cvimage: — To crop an in-memory image file. The in-memory image needs to be numpy array. This is useful for workflows integrations.
- crop_image_with_aspect: — To crop an image based on aspect ratio. Please note the difference from crop_image method. crop_image crops using width and height whereas crop_aspect_ratio uses aspect ratio for cropping.
Let us look at the function definitions now.
crop_image — . This method accepts 6 parameters and returns a list of images as numpy 2D array. Below are the six parameters of the function.
- file_path: image file path from which crop has to be extracted.
- crop_width: width of crop to extract.
- crop_height: height of crop to extract.
- no_of_crops_to_return: number of crops rectangles to be extracted
- filters: You can use this optional parameter to specify image parts/objects that should be retained in the cropped image. At the moment only “text” retention filter is present. A text filter attempts to retain the texts in the image. Newer retention filters will be added in future. By default, filters are not applied.
- down_sample_factor: You can use this optional feature to specify the downsampling factor. For large images consider increasing this parameter for fast image cropping. By default input images are downsampled by factor of 8 before processing.
# number of images to be returned
no_of_crops = 3
# crop dimensions
crop_width = 1000
crop_height = 600
# Filters
filters = ["text"]sampling_factor = 12image_file_path = <Path where the image is stored>
crop_list = img_module.crop_image(
file_path=image_file_path,
crop_width=crop_width,
crop_height=crop_height,
num_of_crops= no_of_crops,
filters=filters,
down_sample_factor = sampling_factor
)
crop_image_from_cvimage — It accepts opencv image as image source, rest of the parameters are same as crop_image function.
- input_image: in-memory image as numpy array.
- crop_width
- crop_height
- no_of_crops_to_returned
- filters
- down_sample_factor
crop_image_with_aspect — It accepts the aspect ratio for cropping dimension. Rest of the parameters are same as crop_image
- file_path
- crop_aspect_ratio: aspect ratio in string format (for e.g. ‘4:3’ or ‘16:9’)by which crops need to be extracted.
- no_of_crops_to_returned
- filters
- down_sample_factor
How katna image module works
All possible crops from the input image for the crop size is selected and passed through set of filters — The rule of third, Saliency, face detection and edge detection. Images shown below are filter output of an image. Each of the filter gives a score to the crops. If the text retention filter is switched on then it filters out crops that cuts the text. The final emerging crops are sorted and then numbers of crops requested by the caller are returned.
What’s next
We plan to add more filters in future like violence, nudity etc. to make it more robust.
We are thankful to open source community and project smartcrop especially that enabled us to reuse and the take good ideas forward. If you find the tool useful please do share the project.
That’s it!! You can find a complete application here.
|
https://aloksaan.medium.com/smart-image-cropping-with-katna-1b4c0341ed24?source=post_page-----1b4c0341ed24--------------------------------
|
CC-MAIN-2021-21
|
refinedweb
| 646
| 58.99
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.